August 25, 2018

Diskpart/etcher errors - Wiping a removable/SD card on win10

Tools like etcher use diskpart's 'clean' command to nuke the partitions. On windows 10 this will often fail every time you try to image a disk that was previously flashed with a bigger image (it seems to kick in around 4GB+). Etcher will just give a generic error (and it's written in javascript, what could go wrong!), but diskpart is the root cause.

etcher error

If you're using diskpart by hand and run clean, it will say access is denied and point you at the event log:

DiskPart has encountered an error: The request could not be performed because of
an I/O device error.
See the System Event Log for more information.

The event log will have an entry something like this, which is not helpful:

Cannot zero sectors on disk \\?\PhysicalDrive2. Error code: 5@0101000F

The trick is to zero the partition table so diskpart stops getting hung up, for whatever reason. Of course, if you have a linux machine handy you can just write zeros to the whole device with dd (dd if=/dev/zero of=/dev/sdX bs=8M count=256), but then you wouldn't be trying to use diskpart in the first place.
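
If you do have a linux box around, here is a minimal sketch of that dd approach. It's written against a throwaway image file so it's safe to run as-is; on real hardware you'd point DEV at the actual device (found via lsblk), e.g. /dev/sdX:

```shell
# on real hardware: find the device with `lsblk -d -o NAME,SIZE,MODEL`
# and set DEV=/dev/sdX -- here we use a scratch file so this is safe to run
DEV=/tmp/fake-sdcard.img

# simulate a disk with a leftover partition table
printf 'OLD PARTITION TABLE GARBAGE' > "$DEV"

# the actual wipe: write zeros over the start of the device
# (bs=8M count=256 as in the post covers the first 2GB of a real disk)
dd if=/dev/zero of="$DEV" bs=8M count=1 status=none

# confirm the first sector is now zeroed
cmp -s -n 512 "$DEV" /dev/zero && echo "first sector zeroed"
```

Writing zeros to the wrong device is unrecoverable, so triple-check the device name before swapping in a real /dev path.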

So if you don't want to juggle machines, or only have windows, there is a nice little freeware tool that will do exactly the same thing. It's designed more for securely wiping disks with repeated writes, but you can do one pass, cancel it about 5% of the way through, and then etcher will flash successfully.

The tool is "RoadKil's disk wipe" available here

July 21, 2018

Getting wine 3.0 working on ubuntu 18.04

Wine stable on ubuntu 18.04 seems to have broken auto-fetching of gecko, which can break many installers that show EULAs.

You can see this happen with wine iexplore or similar; you will get an error like:

Could not load wine-gecko. HTML rendering will be disabled.
0009:err:mshtml:create_document_object Failed to init Gecko, returning CLASS_E_CLASSNOTAVAILABLE

I ran into it with Diablo II installers not being able to show and accept EULAs.

The site has instructions explaining that the auto-installer would just grab the msis and plop em in the correct spot, which can also be done manually. Here's a quick and dirty guide to installing wine and getting gecko working:

  1. Install wine if you haven't with sudo apt install wine-stable
  2. sudo mkdir /usr/share/wine/gecko && cd /usr/share/wine/gecko
  3. sudo wget
  4. sudo wget
  5. verify it works with wine iexplore

May 25, 2018

Detecting simple metrics in Angular

Here is how to collect metrics (or events if you prefer) such as click or touch events on DOM elements across an entire Angular2+ project.

This has the following caveats: it's designed for simple 'did the user interact with this element' style logging, not advanced mouse-trail logging and UX metrics. I recommend a full-blown metrics product for anything more involved. Additionally, it rips through the DOM, so it's not recommended on performance-sensitive elements or extremely large DOM trees.

Lastly - if you only care about input resulting in a useful action, simply log redux actions instead.

With that said, here's what is needed:

  1. Rig up @HostListener for each event type we care about
  2. Grab DOM metadata programmatically
  3. Attach metadata to elements we want to track
  4. Send off metadata and event info

Rigging up @HostListeners

This is probably the easiest step, and it's a really useful part of Angular that makes it very easy to capture events SPA-wide.

Let's set up listening to every click event that happens in our <body> by putting this @HostListener in our top-level 'app' component (or similar):

NOTE: In my production code I pulled the event type into an enum, and I recommend strongly typing all known event types you care about, but we will keep it simple here with the string 'click'.

@HostListener('body:click', ['$event'])
onBodyElementClicked(ev: Event) {
  this.reportEventMetric(ev.target, 'click');
}

We will create reportEventMetric later on. This will fire when anything is clicked anywhere in the body, so this is the only event handler we need!

Grabbing metadata programmatically

Next we need to create a function that will look at whatever was clicked and determine if it's been tagged to keep track of.

For this we need to determine a metric attribute. In this example I'll use a data-metric attribute.

Now all we need to do is look for data-metric on the DOM element. But not just on it: we want to walk up the visual tree in case the click was reported on a child element that we care less about.

private reportEventMetric(start: EventTarget | null, evType: string) {
    // this helper will call a recursive function to rip through DOM
    const metric = this.findDataMetricTag(start, evType);

    if (metric) {
        // once you have metrics, you can POST to server, export to csv,
        // stick in state, whatever. in this case lets just log em
        console.log(`${evType} metric: ${metric}`);
    }
}

// here is the meaty recursive function that will rip through DOM elements
// looking for our data attribute
private findDataMetricTag(start: EventTarget | null, eventType: string): string | null {
    if (!start || !(start instanceof HTMLElement)) {
        return null;
    }

    // in production code, i recommend pulling 'data-metric' to a const or readonly class prop
    const metricAttr = start.getAttribute('data-metric');
    return metricAttr || this.findDataMetricTag(start.parentElement, eventType);
}

In my first attempt, findDataMetricTag was actually 'Tags', because it would always walk up the parents and report all tags found, but this was not actually useful and was more performance-intensive.

Now we have a way to dig out the metric tag!

Tagging elements for metrics

This is the easiest but most grueling last part. We will do it the straightforward way, sticking attributes directly in markup (better ways probably exist). Simply add the data attribute to an element for it to be reported:

<div data-metric="wizzbang-container-bot">
  <!-- stuff -->
</div>

<button data-metric="do-thing-button">do thing</button>

Once this is done, if we click on a tagged element we should see the tag reported in the console. That's it!

Bonus route information!

It's often very useful to know which route you are currently on so you don't have to bake it into every single data-metric tag. This is easy to do with Angular's Location service. Inject it and get the current route like so:

import {Location} from '@angular/common';

// inject it in the constructor of the class that has `reportEventMetric`:
constructor(private _location: Location) { }

// now modify reportEventMetric to print current location like so:

private reportEventMetric(start: EventTarget | null, evType: string) {
    // this helper will call a recursive function to rip through DOM
    const metric = this.findDataMetricTag(start, evType);

    // this will pull out the path of the current route, eg '/user'
    const location = this._location.path();

    // once you have metrics, you can POST to server, export to csv,
    // stick in state, whatever. in this case lets just log em
    console.log(`route '${location}' ${evType} metric '${metric}'`);
}

And there you have it, simple metrics ready to stuff in a backing store.

May 23, 2018

the curious case of SIGPIPE

Shell interaction, signals, TTYs, jobs and the like are some of the core foundations that have been around since the birth of time-sharing operating systems, but much of it is left in a literal 'black box', even by developers! It can be quite complex when you start from the bottom up, so let's take a look at just SIGPIPE: it allows for some interesting shell interaction that often goes unnoticed but is immediately useful.


We start this example with a source of otherwise 'endless' input, for all intents and purposes. So-called generators are a way to generate an infinite list of values, not unlike, e.g., [1..] in Haskell.

The idea is to have something that will spew values as long as something is interested. yes is probably the simplest of such programs, designed to generate 'y' that can be piped to prompts and such. The 'spoiler alert' here is that yes somehow knows to end its output despite consisting of an infinite loop in code.

interesting 'yes' complexity side note

yes may sound like the most simple C program imaginable. It mostly is; the guts of the OpenBSD version essentially consist of this:

for (;;)
	puts("y");

However, the GNU version (in coreutils) is more optimized for performance and locale handling, and ends up surprisingly different, with local buffering ahead of time and such.

This is worth pointing out because when we dig in later with strace, there will be 'noise' of these other operations in our investigation.

The writing mechanic, e.g. puts(), writes to STDOUT, which in this case has been rigged up through our job pipe. Speaking of job pipes!

Back to the pipe pipeline

So with generators and yes in mind, we can use them in shell pipes, which falls under job control. TTY handling, shell interaction and job control is a very detailed beast I will not go into here, but in layman's terms, piping several commands together sticks em together in a job (which lives within a session). This allows the whole operation to be handled discretely with signals, or the 'bg', 'fg' and 'jobs' commands on any *NIX.

shell jobs

This is important to note because when a job is created, a pipe handle is created to direct STDOUT (by default; you can specify which with 2>&1 and so forth) to STDIN between all the subprocesses of the job, and we will see the handle id later. For our purposes, just know the job creates a pipe to funnel STDOUT through. This is not technically completely correct, but if you know better, put on your blinders (especially before seeing these doodles).

shell pipe human centipede

(sorry, done playing with wonky watcom since i can't analog-type anymore)

show me the yes

Running yes in a basic pipe, something like this:

dmh@beer-disposal:~$ yes | head

if only the people behind asking for raises during performance reviews were more like yes

Prints exactly 10 'y' lines. Why 10? That's the default number of lines head will read, as you can see in man head. Yet yes somehow knows to then terminate, since no one is interested in reading the output anymore.

It seems to magically know when head is done. Changing the number of lines head hoovers up with, e.g., yes | head -n2 works as expected, printing two 'y' lines and returning to your shell prompt normally.

But yes spews 'y' endlessly! How does it know when to kindly STFU? This is handled with SIGPIPE! We can see this in the signal.h docs:

"The SIGPIPE section denotes default action is 'T' (terminate) and 'Write on a pipe with no one to read it.'"

the write syscall docs also specify in the failure conditions:

"An attempt is made to write to a pipe or FIFO that is not open for reading by any process, or that only has one end open. A SIGPIPE signal shall also be sent to the thread"
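
To see the SIGPIPE/EPIPE pairing concretely, here is a small sketch (it leans on python3 as a convenient way to poke at raw pipes, so it assumes python3 is on PATH): if SIGPIPE is ignored, the write doesn't kill the process; it just fails with EPIPE instead.

```shell
# ignore SIGPIPE, close the read end of a pipe, then try to write --
# the write() fails with EPIPE (errno 32 on Linux) instead of killing us
python3 -c '
import signal, os
signal.signal(signal.SIGPIPE, signal.SIG_IGN)
r, w = os.pipe()
os.close(r)          # no reader left on the pipe
try:
    os.write(w, b"y\n")
except BrokenPipeError as e:
    print("write failed:", e.errno)   # prints: write failed: 32
'
```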

wait a minute, i thought yes used puts

Before we get lost in the weeds: things like printf and puts use the write() syscall to actually... write output. This is handled by the C runtime library!

breakdown of what happens

Dry docs aside and ignoring non-essential details like buffered read/writes, here is what happens:

  1. yes writes batches of 'y' repeatedly to STDOUT (which behind the scenes is a job pipe)
  2. head reads from STDIN (the pipe) until it satisfies 10 lines and then exits (closing its handle to the pipe)!
  3. yes is still furiously trying to write to the same pipe; the next write after head exits returns -1.
  4. yes receives a SIGPIPE signal itself and also exits
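
The steps above can be watched from any POSIX shell; this sketch wraps yes in a subshell so we can report the exit status the shell records for it, which is 128 + 13 (SIGPIPE):

```shell
# head takes 3 lines then exits; yes then dies of SIGPIPE and the
# wrapping shell reports its status on stderr
sh -c 'yes; echo "yes exit status: $?" >&2' | head -n 3
# stderr shows: yes exit status: 141   (128 + SIGPIPE(13))
```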

This ignores some less important details, like STDOUT line buffering and buffered reads/writes causing a difference between bytes read and written, but they are not needed to get the point of what is happening.

Digging deeper

We can see this by running the pipe operation through strace:

dmh@beer-disposal:~$ strace -f sh -c 'yes | head'

Here are the important parts highlighted: pid 6541 is yes and pid 6542 is head


On a side note, you may notice lots of other things going on in the strace; that is because this is GNU coreutils yes, which as noted above does a lot of other 'stuff' in the name of locale handling and performance.

A drastically better looking strace can be achieved on BSD systems for the motivated

Ok, what's your point

The next time a haskell hipster is spouting the benefits of lazily computed infinite lists, let them know unix has had a pragmatic version of the same thing since probably before they were born!

April 12, 2018

Windows 10 Wifi tweaks

Wifi has never been great on windows 10, especially after the latest 'Fall Creators Update'. One of my wifi cards no longer works at all, despite working on this exact same version of windows before a clean install!

Anyway, I've found a few ways that seem to make wifi slightly more consistent, so here they are.

Step zero: Wifi adapter name

We will need the short name of the wifi adapter, and the easiest way to get it is with netsh.

To list wlan settings run the following in a cmd or powershell (Win+X):

netsh wlan show settings

PS C:\Users\Dexter> netsh wlan show settings

Wireless LAN settings
    Show blocked networks in visible network list: No

    Only use GP profiles on GP-configured networks: No

    Hosted network mode allowed in WLAN service: Yes

    Allow shared user credentials for network authentication: Yes

    Block period: Not Configured.

    Auto configuration logic is disabled on interface "Wi-Fi 3"
    MAC randomization not available on interface Wi-Fi 3

Look for the quoted part after interface; in this case my adapter name is Wi-Fi 3. Alternatively, you can go to

Control Panel -> Network Connections

And that will have the same name. We'll need it for the rest of the steps.

First: Wifi autoconfig on/off bats

Save these two in batch files so you can easily run them as needed:

WIFI_AUTO_ON.bat:

netsh wlan set autoconfig enabled=yes interface="Wi-Fi 3"

WIFI_AUTO_OFF.bat:

netsh wlan set autoconfig enabled=no interface="Wi-Fi 3"

Replace "Wi-Fi 3" with the adapter name from the first step!

Now you can right-click on WIFI_AUTO_OFF.bat and run as admin before playing an online game. For me, this seems to help with the persistent ping spikes that hit roughly one minute apart.

Second: disable tuning heuristics

This one is a bit harder to quantify, but in my highly non-scientific testing, my ping appeared to be more stable overall after turning off auto-tuning. Note this is machine-wide and seems to be related to TCP window sizes. I've encountered no side effects, but...

netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global rss=enabled

Disable dat heuristic tuning

Hopefully this helps others eke out a remotely usable wifi experience on windows.

April 6, 2018

NetBSD cross-compiling tutorial

Let's build NetBSD starting from nothing!

Thanks to the great build.sh system I've already gushed about, cross-compiling NetBSD from any other POSIX'y host is cake.

However, the docs are slightly out of date, and everything needed is scattered across a few pages, which may not be obvious for newcomers. For example, grabbing 'sets', which is parlance for gzips of source code, is covered on a different page, and the kernel compile page is missing some new flags.

Similarly, man pages are great if you already have an idea of what you're looking for. But I'll save that diatribe for another day.

There is also not much absolute-beginner, task-oriented documentation. Certainly the BSDs are niche and demand expertise, but there is no reason to unnecessarily burden the learning curve for passersby. Easing it increases adoption!

I've also learned a few other things since writing my previous compiling notes. In particular, the drivers in use when NetBSD is a virtualbox guest cause noticeably slower disk performance at the moment. I highly recommend building sources from anything but a VM.

So with that said, this will be a tutorial for building NetBSD from a GNU/Linux host. The instructions should be very similar for Mac and other BSDs.

Also, this will cover the 7.1.2 release, not current (WIP) sources. To recap:

  • We will build NetBSD 7.1.2 from sources.
  • This is written from a GNU/Linux host
  • The target we will cross-compile for is amd64

Step 1: grab source sets

Navigate your big nasty web browser to the source sets directory here:

Grab all the tgz files; you can skip xsrc.tgz unless you want to compile X windows.

I recommend saving all these in a tidy folder like ~/netbsd_build/7.1.2

Next, untar em all, e.g. manually:

$ pwd
$ tar xvzf gnusrc.tgz
$ tar xvzf sharesrc.tgz
$ tar xvzf src.tgz
$ tar xvzf syssrc.tgz
$ tar xvzf xsrc.tgz  #(again, only if you want to compile X)

if this pissed you off: write a shell for-loop with wildcard expansion to do it in a one-liner!
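
For example, a one-liner sketch of that loop:

```shell
# extract every source set in the current directory in one go
for f in *.tgz; do tar xvzf "$f"; done
```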

Once that's all done you should end up with something like this:

dexter@slag:~/netbsd_build/7.1.2$ tree -L 2
├── gnusrc.tgz
├── sharesrc.tgz
├── src.tgz
├── syssrc.tgz
└── usr
    └── src

2 directories, 4 files

the funky sub usr/src dir is ok. Navigate into that directory for the rest of the instructions:

$ cd usr/src

Next, we need to make a directory for build objects; if we don't do this we will get errors. It doesn't really matter where, just make the directory in the usr/src subdir we are already in:

$ mkdir obj

We're ready to start compiling stuff!

Building toolchain

The first step to building NetBSD is to build the compiler it expects to use. Long story short, don't expect, or try, to use your host compiler. It's much easier (albeit slow on older machines) to use the provided gcc sources.

To do so, fire off like so:

$ ./build.sh -U -m amd64 -O obj -j17 tools

A quick explanation of the flags:

  • -U: unprivileged build; this is needed since we are not building as root
  • -m amd64: build for an amd64 machine
  • -O obj: stick our build objs in the 'obj' folder we made above
  • -j17: use 17 threads. A quick-and-dirty value is the number of cores/threads the machine has. It's complex; personally I add a few +/- for I/O overhead.
  • tools: lastly, what we want to make. Tools is a prerequisite to building the kernel and userspace
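
As a sketch, the -j value can be derived from the host instead of hardcoding 17 (nproc is GNU coreutils, so this assumes the GNU/Linux host from the intro):

```shell
# online cores plus one, so a job is always ready while others block on I/O
JOBS=$(($(nproc) + 1))
echo "building with -j${JOBS}"
# then: ./build.sh -U -m amd64 -O obj -j"$JOBS" tools
```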

It will grind for a while, especially if you're on older hardware, but you should eventually get output like this:

===> Tools built to /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64
===> ended:      Sun Apr  8 00:15:25 MDT 2018
===> Summary of results:
     command:    ./build.sh -U -m amd64 -O obj -j17 tools
     started:    Sun Apr  8 00:12:01 MDT 2018
     NetBSD version:      7.1.2
     MACHINE:             amd64
     MACHINE_ARCH:        x86_64
     Build platform:      Linux 4.13.0-38-generic x86_64
     HOST_SH:             /bin/sh
     MAKECONF file:       /etc/mk.conf (File not found)
     TOOLDIR path:        /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64
     DESTDIR path:        /home/dexter/netbsd_build/7.1.2/usr/src/obj/destdir.amd64
     RELEASEDIR path:     /home/dexter/netbsd_build/7.1.2/usr/src/obj/releasedir
     Updated makewrapper: /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64/bin/nbmake-amd64
     Tools built to /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64
     ended:      Sun Apr  8 00:15:25 MDT 2018
===> .

Now we're ready to move on

Building a kernel redux

To kick out the kernel, we follow suit with a slightly different command:

$ ./build.sh -U -m amd64 -O obj -j17 kernel=GENERIC

The new option kernel=GENERIC denotes which amd64 kernel config file we want to build. These configs live in the subdir sys/arch/<arch>/conf/, so in our case sys/arch/amd64/conf/GENERIC is the file. Ideally, you'd copy and modify this file to your desire, but for this example let's compile the stock GENERIC. While it's out of scope for this scenic tour, pique your interest with the 'makeoptions COPTS' line.

Same looking output:

--- netbsd ---
#      link  GENERIC/netbsd
/home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64/bin/x86_64--netbsd-ld -Map --cref -T /home/dexter/netbsd_build/7.1.2/usr/src/sys/arch/amd64/conf/kern.ldscript -Ttext 0xffffffff80100000 -e start -z max-page-size=0x100000 -X -o netbsd ${SYSTEM_OBJ} ${EXTRA_OBJ} vers.o
NetBSD 7.1.2 (GENERIC) #1: Sun Apr  8 00:33:55 MDT 2018
   text    data     bss     dec     hex filename
14227197     654556  593920 15475673     ec23d9 netbsd
===> Kernels built from GENERIC:
===> ended:      Sun Apr  8 00:33:56 MDT 2018
===> Summary of results:
     command:    ./build.sh -U -m amd64 -O obj -j17 kernel=GENERIC
     started:    Sun Apr  8 00:33:12 MDT 2018
     NetBSD version:      7.1.2
     MACHINE:             amd64
     MACHINE_ARCH:        x86_64
     Build platform:      Linux 4.13.0-38-generic x86_64
     HOST_SH:             /bin/sh
     MAKECONF file:       /etc/mk.conf (File not found)
     TOOLDIR path:        /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64
     DESTDIR path:        /home/dexter/netbsd_build/7.1.2/usr/src/obj/destdir.amd64
     RELEASEDIR path:     /home/dexter/netbsd_build/7.1.2/usr/src/obj/releasedir
     Updated makewrapper: /home/dexter/netbsd_build/7.1.2/usr/src/obj/tooldir.Linux-4.13.0-38-generic-x86_64/bin/nbmake-amd64
     Building kernel without building new tools
     Building kernel:     GENERIC
     Build directory:     /home/dexter/netbsd_build/7.1.2/usr/src/obj/sys/arch/amd64/compile/GENERIC
     Kernels built from GENERIC:
      /home/dexter/netbsd_build/7.1.2/usr/src/obj/sys/arch/amd64/compile/GENERIC/netbsd
     ended:      Sun Apr  8 00:33:56 MDT 2018
===> .

The actual netbsd kernel binary will be sitting in our obj/sys/arch/amd64/compile/GENERIC/ dir. To actually install it you'd just copy it over your existing netbsd kernel.

dexter@slag:~/netbsd_build/7.1.2/usr/src$ file obj/sys/arch/amd64/compile/GENERIC/netbsd
obj/sys/arch/amd64/compile/GENERIC/netbsd: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, for NetBSD 7.1.2, not stripped


Building userland

Next is userland. No frills here; you will see the pattern:

$ ./build.sh -U -m amd64 -O obj -j17 release

The only new option is release. If you want to build X as well, also pass -x.

Further reading

At this point I've just overlapped the official guide on building, which has all the details. Check it out once you are ready to make modifications and actually install your new bins!