Setting up GnuPG from scratch

The default settings of GnuPG are quite reasonable, but often you will want to make changes to further improve security and usability. One good reason to do this is to ensure your keys are stored in the newer pubring.kbx format rather than pubring.gpg; the old format is still supported, but using the new one is recommended.

First, export all your keys, including the secret keys, by running the following commands:

gpg --export --armor > pubkeys
gpg --export-secret-keys --armor > seckeys

Once this is done, delete the contents of your GnuPG home folder, typically ~/.gnupg/ or whatever GNUPGHOME points to. You may want to keep your trustdb.gpg and configuration files.

Ensure the private-keys-v1.d folder exists, otherwise you will get an error when importing the secret keys; then import all your keys into the new keyring.
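A quick way to recreate the expected layout before importing; this is a sketch using a throwaway path for illustration, so substitute your real ~/.gnupg or $GNUPGHOME:

```shell
# Recreate a minimal GnuPG home directory with the permissions gpg expects.
# mktemp is used here only so the example is safe to run as-is.
GNUPGHOME=$(mktemp -d)
mkdir -p "$GNUPGHOME/private-keys-v1.d"
chmod 700 "$GNUPGHOME" "$GNUPGHOME/private-keys-v1.d"
ls -ld "$GNUPGHOME/private-keys-v1.d"
```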

gpg --import pubkeys
gpg --import seckeys

If you did not keep your trustdb.gpg, make sure you run the following:

gpg --update-trustdb

Recommended GnuPG Settings

GnuPG has a huge number of settings you can adjust; these are the most important ones I recommend adding to your gpg.conf:

expert
- Expert mode gives you much more control; a must if you want to generate ECC keys

s2k-digest-algo SHA256
- Sets the digest algorithm to SHA256 rather than the less secure SHA1

default-key (your key id)
- Sets the default key, which is a good idea if you have more than one

ask-cert-level
- Asks for the certification level when signing a key (recommended)

Upgrading from Debian Jessie (8) to Unstable

As we all know, Debian is a very stable Linux distribution, which is partly what makes it great. Sometimes, however, you want access to newer packages. While it's possible to mix stable and unstable packages, it often leads to quite a mess; some packages have so many dependencies that it's often easier to move entirely to unstable.

Unstable, despite its name, is actually fairly stable for the most part, so upgrading to it isn't usually a big issue. The upgrade is best done as a two-step process: first from stable to testing, then from testing to unstable. Trying to go directly generally will not work except on a freshly installed base system. For your average user I'd recommend stopping at testing and then installing what you need from unstable, since package problems can and do occur.

Upgrade Process

First you need to edit /etc/apt/sources.list and change jessie to testing like so (use a mirror close to you for best performance):

deb testing main
deb-src testing main
deb testing contrib
deb-src testing contrib
deb testing non-free
deb-src testing non-free
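Each line also needs a mirror URL between deb/deb-src and the release name; as an illustration, using the deb.debian.org redirector (an assumption, substitute your preferred mirror) the file would look like:

```
deb http://deb.debian.org/debian testing main contrib non-free
deb-src http://deb.debian.org/debian testing main contrib non-free
```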

Once that is done run the following:

sudo apt-get clean
sudo apt-get update
sudo apt-get dist-upgrade

All being well, there should be no package errors here; go ahead and let it upgrade to testing. Once it's done it's best to reboot and make sure everything is working; in some cases you may have to reinstall your GPU driver.

Once you're happy everything is working you have two options: stay on testing and add the unstable repositories, or dist-upgrade to unstable. To get the latest packages you want, you can use the -t switch with apt-get, aptitude and synaptic to select the target release, for example:

sudo apt-get -t testing install some-package
sudo apt-get -t unstable install some-package
sudo synaptic -t unstable

There is also an option in synaptic to set your preferred release.
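If you stay on testing with the unstable repositories added, an apt preferences file keeps testing as the default; a sketch of /etc/apt/preferences (the priority values are one reasonable choice, not the only one):

```
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

With unstable pinned below 500, packages are only pulled from it when you ask explicitly, such as with -t unstable.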

If you do dist-upgrade to unstable, be aware that things can break from time to time, mostly packages that are being actively worked on. Fixing this is usually a simple matter of switching whatever is broken back to the testing version; if you're not comfortable doing this, stay on testing.

Building GNU GCC 6.2.0

This guide will show you how to build GCC 6.2.0. As always with this kind of thing, results may vary; I tested this on Debian 8 x64 and it worked fine, although I did do a fresh build of binutils.


Requirements

  • gcc, g++, binutils and make, as well as your Linux kernel headers
  • Additional requirements are gzip, bzip2, tar, perl, awk, GNAT (for Ada), DejaGNU, TCL and Expect; most of these should be present on most distributions
  • A good amount of free disk space, 20GB suggested
  • gcc multilib if you want to build a multilib version


Download GCC 6.2.0 here and unpack it. I recommend the following directory structure, with an empty build directory alongside the unpacked source:

gcc-6.2.0/   (unpacked source tree)
build/       (empty build directory)
Before you build you need to install the prerequisite libraries GMP, MPC, MPFR and ISL into the source tree. This can be done manually, but it's usually easier to use the included script:

cd gcc-6.2.0
./contrib/download_prerequisites
One thing I always suggest adding is --disable-nls, which disables the native-language error messages; they are not really needed.

cd build
../gcc-6.2.0/configure --prefix=/usr/local --enable-languages=c,c++ \
    --disable-nls --host=x86_64-linux-gnu --build=x86_64-linux-gnu \
    --target=x86_64-linux-gnu --with-tune=generic

I strongly recommend reading the documentation to ensure you have it configured how you want. The host, build and target options are probably not needed unless configure gets confused about what your system is, as it did for me.


The build process is quite painless, although it will take some time since it does essentially three complete builds. This bootstrap can be disabled with --disable-bootstrap, but that is not at all recommended except for testing. On a fairly modern machine expect around 30 minutes; an older machine may need several hours.

make -j9
make check -j9
make install

The number adjusts the number of processor threads used; your total number of logical cores plus 1 is usually good and will give a significant speed boost.
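The jobs value can be computed rather than hard-coded; a small sketch using nproc from coreutils:

```shell
# Use one job more than the number of logical CPUs.
JOBS=$(( $(nproc) + 1 ))
echo "building with -j$JOBS"
```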
If you're really short on space, try make bootstrap-lean -j9 instead. Running the check is extremely important; if you get more than a few errors it's strongly recommended that you not use the resulting compiler.


Once everything is installed, verify the new compiler with:

gcc -v

You should get something like this:

Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../gcc-6.2.0/configure --prefix=/usr/local --enable-languages=c,c++ --disable-nls --host=x86_64-linux-gnu --build=x86_64-linux-gnu --target=x86_64-linux-gnu --with-tune=generic
Thread model: posix
gcc version 6.2.0 (GCC)

Since /usr/local/bin is typically at the front of your PATH variable, the new gcc will shadow your distribution's gcc. To prevent this I suggest using --program-prefix=prefix or --program-suffix=suffix, or installing outside of /usr/local.
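For example, a suffix keeps both compilers usable side by side; a hedged sketch of the configure invocation (the suffix value -6.2 is an arbitrary choice of mine):

```shell
# Configure with a program suffix so the result does not shadow the system gcc.
../gcc-6.2.0/configure --prefix=/usr/local --enable-languages=c,c++ \
    --disable-nls --program-suffix=-6.2
# After make install, the new compiler is invoked as gcc-6.2, g++-6.2, etc.
```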

A final test would be to compile something and check if it works as expected.

KeePass Password Manager

These days it's more important than ever to have a good strong password that's different for each account you have. Keeping track of all these can be quite complicated, so having a password manager makes life much easier. You could of course write them down, but that has its own risks, such as someone reading them or, more likely, you losing them.

Many people rely on the password managers built into web browsers; however, this is a very bad idea, since there is often no encryption and it can be quite easy to fool the web browser into giving up the stored passwords, which is why I strongly advise people to stop using them.



Main window

KeePass is a very popular password manager which has been around since 2006. It has many of the features you would expect from such a mature application, such as:

  • Encrypted password database
  • Password categories & search
  • Autotype system that doesn’t need browser plugin
  • Password generator
  • Plugins to add more options
  • Free and open source

KeePass was originally written for Windows, but many ports are now available for different operating systems; KeePassX in particular supports many platforms.


Password generator

There are of course other applications available that do more or less the same; however, most are not open source and many depend on cloud-based storage, which in my opinion could be a security risk since the database is out of your control.

After using KeePass for quite a few years I could never go back to the old way of managing passwords. With it I can use passwords far longer than I could ever be bothered to type, which greatly improves security. Unfortunately there are still some websites out there with arbitrary limits on password length; PayPal in particular is a good example of this stupidity.

How not to lose your password database

With KeePass not being cloud based, there is always a risk you could lose the password database. One way around this is to have a cloud hosting account with a simple password you can easily remember; it isn't vital to protect the database heavily since it's already encrypted. You can then synchronize the database whenever you make changes.

Like anything it’s still a good idea to make periodic offline backups.

Cataclysm: Dark Days Ahead


Cataclysm: Dark Days Ahead is a very fun open source roguelike game currently under active development. Unlike a traditional fantasy roguelike, this is set in a post-apocalyptic world that has been overrun with undead zombies and various other monsters. The only goal is to survive, which, like most roguelikes, is quite a challenge.

The World

As is typical, there is quite heavy use of procedural generation to produce the game world. This consists mainly of random cities connected by roads; in each city there are various buildings such as houses, hardware shops, grocery stores, etc., which can all be looted, provided you can get past the monsters, that is.


World map showing explored areas

In addition to the standard buildings there are a variety of larger buildings such as shopping malls, hospitals and apartment buildings along with much more dangerous but highly rewarding locations.

Despite the world being essentially unlimited, it doesn't really feel empty; there is always a zombie or human (friendly or not) to keep you company. You will also come across various situations, such as a drug deal gone bad for some easy loot, as well as situations that could get you brutally killed.

The Character

Once a world has been generated you make a character by choosing their starting scenario, profession, traits, attributes and skills. Each of these costs points, which you can gain by picking something negative, such as a harder starting scenario.

Like a real person, your character needs to eat, drink and sleep. For new players this can be a fairly daunting task, but you'll eventually get the hang of it. You will likely die quite often, and like most roguelikes this deletes your save; however, your character will remain in the world, so you can potentially recover your equipment, assuming your corpse has not walked off yet.

Performing various actions increases your skills, which makes you more effective in combat as well as unlocking new crafting options that are vital for your survival. You can also read books you find to rapidly increase your skills.


Game interface with tileset graphics

One particularly interesting aspect is that each item has a volume as well as a weight; for example, empty plastic bottles weigh little but take up significant volume. Good choice of clothing and other accessories can increase how much you can carry at the expense of greater encumbrance. Clothing is also vital to keep you warm, particularly in winter. This may sound inconvenient, but there are plenty of ways to move a large number of items.

The Crafting

The crafting and construction system is one of the highlights of Cataclysm. There is a huge number of useful, and not so useful, items you can produce, from high quality food to improvised firearms; the construction system also allows you to build your own shelter, among other things.


Crafting screen

Vehicles play an important role: since the world is quite large and resources are limited, you may find you eventually need to move. Vehicles are essentially mobile bases and you can outfit them as you wish (provided you have the skill and parts); in a way this gives the game a very Mad Max vibe as you mow down zombies with your giant death fortress.


Cataclysm is a very difficult game to master but also very rewarding. New content is constantly being added, and there are a variety of mods that add even more; don't let the simple graphics and controls put you off giving this game a go.

I suggest downloading the experimental build as the stable one is extremely out of date.

Blocking Advertising

Internet advertising is one of the biggest risks to your security and privacy, so it's important to block it if you value these things. Some might argue it's wrong to block advertising, but when it puts you and your computer at risk there is no other option.

Internet advertising can be blocked by three main methods:

  • Hosts file
  • DNS filter
  • Browser plugins

Most people these days use browser plugins such as AdBlock, Adblock Plus or, my personal preference and recommendation, uBlock Origin. These generally do a very good job of blocking advertising but don't work outside the web browser. They can also be used to remove unwanted elements from a web page, giving you a cleaner browsing experience.

Hosts File

The hosts file is a little more complicated to explain. When you go to a website, your computer needs to look up the domain name to obtain the corresponding internet (IP) address; this is done by contacting a DNS server, typically provided by your ISP. In the early days of the internet, however, there were no DNS servers; instead the computer looked in a hosts file, which manually maps domain names to IP addresses, one per line.

The hosts file typically has priority over the DNS server, so you can use it to override domain name resolution. This is usually done by redirecting the domain to the local loopback address 127.0.0.1, or to the unroutable 0.0.0.0, which effectively blocks the domain.
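For instance, entries like these (the domains are made-up placeholders) would black-hole two ad servers:

```
0.0.0.0    ads.example.com
0.0.0.0    tracker.example.net
```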

On Windows the hosts file can be found at C:\Windows\System32\drivers\etc\hosts
On Linux and most other UNIX based systems it can be found at /etc/hosts

To make things easier, you can find hosts files online that already block the majority of advertising providers and other unsafe domains. I'm currently using Steven Black's hosts file, which is compiled from several different reliable sources.

This applies to all applications on your system but I still recommend it be combined with a browser plugin for maximum coverage.

DNS Filter

Rather than in the hosts file, the blocking can also be done at the DNS server level, either by setting up your own DNS server or by using a public DNS service such as OpenDNS.

Personally I don't use these public services out of privacy concerns, but if you want a very simple method that needs no maintenance, this might be for you. One big advantage is that it works on devices where you cannot typically access the hosts file.

If you have a Raspberry Pi lying around, consider installing Pi-hole on it for a super simple hardware DNS filter.

Advanced Blocking

Sometimes you may run into an advert that is not blocked by any of your installed methods. In a web browser it's easy to add new blocking rules, but outside the browser you may need to find which domain the advert is coming from.

This is easily done with tools such as Process Explorer or Wireshark, which can show all HTTP connections; with a little effort you can usually locate the offending domain. For cases where the connection is made directly by IP address, you can block it using a firewall such as TinyWall or Windows Firewall.

Foreground Reference Utility

This is a very handy tool I found some time ago that lets you overlay an image on the screen. You can adjust the opacity as well as freeze it so you can manipulate the window below it, similar to a layer in Photoshop or any other image editing program.

This is incredibly useful in situations where a method to overlay an image is not otherwise available; I often use it in PCB design to verify my dimensions are correct when drawing a component footprint.


Once you have the reference image where you want it, you can lock it by clicking the 'Overlay' button; to unlock it again just hit F1.

SHA-1 Checksum: 8aca82bdc28e02493e4364688f6c569cd6600f5b
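You can check your download against the hash above with sha1sum from coreutils. The snippet below demonstrates the mechanics on a generated stand-in file; with the real download you would paste the published hash instead of computing it:

```shell
# Verify a file against a published SHA-1 checksum.
# download.bin is a stand-in; we check it against its own hash purely
# to show the sha1sum -c workflow.
printf 'stand-in download' > download.bin
published=$(sha1sum download.bin | awk '{print $1}')
printf '%s  download.bin\n' "$published" | sha1sum -c -
```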

File Compression Guide

When sending files online, good compression is essential for saving bandwidth. While many people these days have quite fast download speeds, there are even more whose speeds are below 8Mbps.

File compression consists of two parts: the archive format and the compression algorithm. Many archive formats support various compression algorithms; the most notable example is the .tar archive, where it's common practice to add the type of compression as a suffix, for example .tar.gz or .tar.bz2. Other formats like .zip, .rar and .7z specify a preferred compression method.
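To see the archive/algorithm split in action, the same tar archive can be compressed with different tools; a sketch assuming gzip and xz are installed:

```shell
# Build a small repetitive sample file, then compress the same tar archive
# with gzip and with xz and compare the resulting sizes.
mkdir -p demo
for i in $(seq 1 1000); do
    printf 'the quick brown fox jumps over the lazy dog\n'
done > demo/sample.txt
tar -cf demo.tar demo
gzip -k demo.tar          # produces demo.tar.gz, keeps demo.tar
xz -k demo.tar            # produces demo.tar.xz, keeps demo.tar
ls -l demo.tar demo.tar.gz demo.tar.xz
```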

For this article I’m going to be using 7-zip which offers a variety of compression algorithms and archive types, it’s also completely free and open source.


This test will be done on three different types of files: the first is the NVIDIA driver installer (361.91-desktop-win10-64bit-international-whql.exe), the second a PDF book, and the third a large plain text file. This matters because the compression ratio depends heavily on the file type; installers, for instance, are typically already compressed, so I expect minimal compression there.

File          Uncompressed Size
Installer     321 MB (337,507,360 bytes)
PDF Book      114 MB (120,225,893 bytes)
Text File     9.13 MB (9,584,473 bytes)

For the first benchmark I will be compressing each with LZMA2 using the 7z archive format, which is the default and recommended for 7-zip. Other options are at their defaults: compression level normal, dictionary size 16MB, word size 32, solid block size 2GB, CPU threads 2.

File        Compressed Size   Compression Ratio   Compression Time
Installer   321 MB            100%                ~43 seconds
PDF         109 MB            95.6%               ~17 seconds
Text        1.40 MB           15.3%               ~4 seconds

As we can see from these results, plain text has by far the best compression ratio, while the installer did not benefit at all; in some cases compression may actually increase the size. The PDF had a reasonable improvement, but this depends on how the PDF's contents are already compressed.

Now let’s try again but with the compression level set to ultra.

File   Compressed Size   Compression Ratio   Compression Time
PDF    107 MB            93.8%               ~26 seconds
Text   1.39 MB           15.2%               ~4 seconds

The results here are rather interesting. The installer caused 7-zip to freeze on ultra, so I was unable to see whether there was any gain. The PDF shows a reasonable improvement at the cost of compression time, while the text file remains mostly the same.

Compression level isn't the only thing you can tweak. Dictionary size can have a major effect on the compression ratio, but it also enormously increases the memory requirement for compression and decompression. The default 16MB is rather conservative; ultra defaults to 64MB, which is much better, and you can gain a little more by increasing it further, although above 128MB the gains are generally minimal.
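On the command line these knobs map to 7-Zip's -m switches; a hedged sketch assuming the p7zip command-line tool is installed (the archive and input names are placeholders):

```shell
# Ultra preset (-mx=9) with LZMA2 and a 128MB dictionary.
7z a -t7z -m0=lzma2 -mx=9 -md=128m archive.7z files/
```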

This test is a little unrealistic, as you will often be compressing many files at once, so let's try a mix of different file types with an uncompressed size of 132MB.

Compression          Compressed Size   Compression Ratio   Compression Time
Default              117 MB            88.6%               ~9 seconds
Ultra                90 MB             68.18%              ~24 seconds
Ultra + 128MB Dict   89.7 MB           67.95%              ~22 seconds

I was a little surprised that the larger dictionary size actually took less time. It really goes to show that the types of files being compressed determine how far you can compress more than anything else.


I was expecting more definitive results as to what is better, but as these tests show, it varies on a case by case basis. I would certainly recommend you stick to LZMA2, as various benchmarks by many people have shown it to be the best in terms of compression ratio, memory use and, for the most part, compression time; things like .zip with deflate (i.e. WinZip) should be avoided these days.

If you really need good compression then the only true way to do it is to test various settings for what you are trying to compress.

For things like video, audio and images, general-purpose compression isn't really the answer; using a different format or codec is the way to go, since lossless compression can only go so far.

Installing Gentoo Linux Tips


Gentoo is a very popular source-based distribution primarily intended for more experienced Linux users, although it's really not as hard as people make it seem. Certainly you should have experience using Linux and be comfortable in the terminal.

Since it would be a waste of time to repeat the excellent Gentoo handbook, I'm just going to cover the bits that may cause a first-time user trouble. It's recommended you do your first install in a virtual machine rather than on a physical machine so you can get a feel for the process.


Requirements

  • Around 40GB of free disk space for a decent install
  • A reasonably fast CPU
  • Access to the Gentoo handbook throughout the install
  • 1GB of RAM or more

The fast CPU is so you can get it installed in a reasonable amount of time; compiling is an intensive task that can take days on a slower machine. For comparison, a 1.8GHz Intel Celeron took me around 40 hours for a full desktop install, so I recommend at least a dual core processor. Of course, if you're not in a hurry, that's fine.

It's important you do your research before proceeding with the install. Some things you really need to know are:

  • What hardware do you have, use lspci and lsusb or other tools
  • Is your hardware supported in the kernel

The easiest way to check this is to run a Linux distribution; the Gentoo desktop live CD will work fine for this. If all your hardware works then you're good to go, although you should note down the loaded modules so you can optimize your kernel to just what you need.

Finally it’s a good idea to note down your network configuration, particularly if you’re not used to setting up your network from the terminal.

Base System

For this install I’m going to assume you’re installing x86_64 (64 bit), most of this will apply to x86 (32 bit) as well.

First get the Gentoo minimal install CD from here. Once it's downloaded you can burn it to a CD or, as I'd recommend, write it to a flash memory stick with unetbootin, since the image is updated very frequently.

When booting you should be asked to select your keyboard map; if for some reason you can't select it, or need to change it later, use loadkeys.


Wireless Network

Wireless in general is a pain in the ass when it comes to Linux, in my opinion, mainly due to highly variable support; it's often easier to buy a well supported adapter than to try to get a poorly supported one working.

To connect to a WPA-PSK secured network, as most are these days, you need wpa_supplicant, which is included on the install CD. You first need to make a configuration file for your network:

wpa_passphrase [ssid] [passphrase] > /etc/wpa_supplicant.conf

The ssid is the network ID you wish to connect to; the output is stored for later use. If you don't know which network is yours, you can use either of the following commands to scan:

iwlist [interface] scan
iw dev [interface] scan

The interface is the name of your wireless interface, which should be displayed if you type iwconfig; if you don't see anything it usually means the driver is not loaded or not available.

Once you have your configuration file you can connect to the network with:

wpa_supplicant -i [interface] -c /etc/wpa_supplicant.conf -B -D [driver]

The -B option runs the wpa_supplicant daemon in the background, so you may want to omit it the first time you run it to check for errors. For the driver, wext and nl80211 are the most common; nl80211 is preferable if supported.

Once it's connected, run the DHCP daemon to auto-configure the network:

dhcpcd [interface]
If all goes well your wireless network should now be working.


Wired Network

A wired ethernet connection will usually work right away without any configuration. If your network adapter appears when you type ifconfig you're generally good to go; run dhcpcd if needed or perform a manual configuration. Check the ifconfig man pages for more info.

Setting up disks

The most basic partition scheme you can really go with is:

Partition    Usage       Size    Filesystem
/dev/sda1    BIOS Boot   2MB     none
/dev/sda2    Swap        4GB     swap
/dev/sda3    /boot       128MB   vfat
/dev/sda4    /           ~       ext4

I strongly recommend you carefully read up on the difference between MBR and GPT; if in doubt, go for GPT. The above partition scheme should work in either case.

I generally recommend that if you're dual booting with Windows you put your Linux install on a different disk; this helps avoid any problems with the Windows bootloader.

Setting up compile and USE flags

This is one of the more important bits to get right. For compile options you should not go over the top; -O2 -pipe -mtune=native is good enough 99.9% of the time (for 1GB of memory or less, do not use -pipe). For the USE flags you really need to think ahead about what you want your system to do; in particular, if you ever want to run 32 bit applications, put in the multilib USE flag right away. Also make sure you set MAKEOPTS="-j9", since parallel jobs speed up compilation a huge amount; the number you use should be your total number of logical CPU cores plus 1.
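Putting that together, a sketch of the relevant /etc/portage/make.conf lines; the -j9 assumes an 8-thread CPU, and multilib stands in for whatever USE flags you settled on:

```
CFLAGS="-O2 -pipe -mtune=native"
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j9"
USE="multilib"
```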

If you run into segmentation faults when compiling, as can happen if you don't have enough memory, put the following in /etc/portage/make.conf:


Remove it when no longer needed.

When it comes to setting your profile, you should generally go for Desktop, otherwise you'll need to add a whole bunch of USE flags. Finally, when you emerge a package, always use --ask and look at the flags in blue; consider whether you may need the features those flags provide now or in the future.

Kernel Configuration

This is a really critical step; getting it wrong can in certain cases cause serious issues (like forgetting wifi support), though most minor problems can be fixed later by reconfiguring the kernel.

Take a good amount of time to read through all the configuration options. Some may not make any sense, but in general you don't have to worry too much, as the defaults are mostly sensible; if you have any doubts use genkernel, and you can always tweak things later.

The one thing I would always change is to increase the scrollback buffer, as the default is tiny in my opinion.

For driver support, save the configuration and open the .config file in nano; search through it and you should find your needed drivers, if they're available in the kernel.

Network Configuration

This is one place where the handbook failed me; in the end I had to put the commands needed to launch wpa_supplicant in a script in /etc/init.d. In any case this isn't difficult to do, but keep it in mind if you run into the same problem as me.
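For illustration, a minimal OpenRC-style service along those lines; everything here (the script name, wlan0 and nl80211) is an assumption you would adapt to your setup:

```
#!/sbin/openrc-run
# Hypothetical /etc/init.d/wpa-wlan0: start wpa_supplicant at boot.

command="/usr/sbin/wpa_supplicant"
command_args="-i wlan0 -c /etc/wpa_supplicant.conf -B -D nl80211"

depend() {
    need localmount
}
```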

If you're using wifi, make sure you emerge these packages before you reboot, otherwise you are screwed:

  • net-wireless/wpa_supplicant
  • net-wireless/iw

After Installation

Once you've rebooted into your new Gentoo installation you can start installing more packages. For a source distribution Gentoo is very easy to use; during the months that I've been using it I've only had one minor package issue.

If you made a serious mistake during the installation, all is not lost: you can boot the install disk again, and once you've mounted the partitions and chrooted you can fix whatever the problem is without doing a full reinstall.

If you do run into trouble, make sure you visit the Gentoo IRC channel, which has a lot of helpful people; or, if the problem is with a specific package, post on the forum.

Useful Free Windows Tools

There are lots of tools that greatly improve the usability of Windows but for one reason or another are often quite obscure.

Process Monitor


This is an extremely useful tool that allows you to monitor file, process, registry and network activity and more in real time, so you can find out exactly what applications are up to.

You can get it here.

Process Explorer


This is a much more useful process manager than the one built into Windows. Aside from a wide range of resource monitors, it can show all the DLLs and other files loaded by a process, which is very handy if you run into the common problem of being unable to delete a file because it's open in another process. A lot more information is available as well, such as active network connections, threads and GPU usage.

You can get it here.

Visual Subst


This is a graphical interface for the subst command, which allows you to map folders to virtual drives. You can of course do it with the command itself, but this is easier, particularly if you make regular changes.

You can get it here.



SuperF4

One of the most annoying aspects of Windows is how hard it is to kill some full screen applications. With Linux you can almost always switch to a virtual terminal and kill it there, but on Windows, unless you have a second monitor, you're stuck.

This handy little tool runs taskkill /f on the active full screen application when you hit ctrl + alt + F4, which is far more likely to work than the regular alt + F4 that programs can ignore. It also has a feature like xkill which lets you click on the window you wish to kill.

You can get it here.



f.lux

Pretty much one of my favourite applications for Windows, this adjusts the color temperature of the monitor at night to reduce eye strain and improve sleep. After using it for some time I can definitely say it helps; it may seem strange at first, but your eyes quickly get used to the more orange color to the point where you don't even notice it.

You can get it here, it’s also available for Linux and more.