Upgrading from Debian 8 Jessie to Unstable

As we all know, Debian is a very stable Linux distribution, which is partly what makes it great. Sometimes, however, you want access to newer packages. While it's possible to mix stable and unstable packages, it often leads to quite a mess; some packages have so many dependencies that it's often easier to move entirely to unstable.

Unstable, despite its name, is actually fairly stable for the most part, so upgrading to it isn't usually a big issue. Upgrading is best done as a two-step process: first from stable to testing, then from testing to unstable. Trying to go directly generally will not work except on a freshly installed base system. For the average user I'd recommend stopping at testing and just installing what you need from unstable, since package problems can and do occur.

Upgrade Process

First you need to edit /etc/apt/sources.list and change jessie to testing, like so. Use a mirror close to you for best performance:

deb ftp://ftp.uk.debian.org/debian/ testing main
deb-src ftp://ftp.uk.debian.org/debian/ testing main
deb ftp://ftp.uk.debian.org/debian/ testing contrib
deb-src ftp://ftp.uk.debian.org/debian/ testing contrib
deb ftp://ftp.uk.debian.org/debian/ testing non-free
deb-src ftp://ftp.uk.debian.org/debian/ testing non-free

Once that is done run the following:

sudo apt-get clean
sudo apt-get update
sudo apt-get dist-upgrade

All being well there should be no package errors here, so go ahead and let it upgrade to testing. Once it's done it's best to reboot and make sure everything is working; in some cases you may have to reinstall your GPU driver.

Once you're happy everything is working you have two options: stay on testing and add the unstable repositories, or dist-upgrade to unstable. To get the latest packages you want, you can use the -t switch with apt-get, aptitude and synaptic to select the target release, for example:

sudo apt-get -t testing install some-package
sudo apt-get -t unstable install some-package
sudo synaptic -t unstable

There is also an option in synaptic to set your preferred release.
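
If you stay on testing with the unstable sources added, apt pinning keeps unstable from being pulled in by default. A minimal sketch of /etc/apt/preferences (the priorities are illustrative, adjust to taste):

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 400

With priorities like these apt prefers testing, and unstable packages are only installed when you ask for them with -t unstable.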

If you do dist-upgrade to unstable, be aware things can break from time to time, mostly in packages that are being actively worked on. Fixing this is a simple matter of switching whatever is broken back to the testing version; if you're not comfortable doing this, stay on testing.

Blocking Advertising

Internet advertising is one of the biggest risks to your security and privacy, so it's important to block it if you value these things. Some might argue it's wrong to block advertising, but when it puts you and your computer at risk there is no other option.

Internet advertising can be blocked by three main methods:

  • Hosts file
  • DNS filter
  • Browser plugins

Most people these days use browser plugins such as AdBlock, Adblock Plus or, my personal preference and recommendation, uBlock Origin. These generally do a very good job at blocking advertising but don't work outside the web browser. They can also be used to remove unwanted elements from a web page, giving you a cleaner browsing experience.

Hosts File

The hosts file is a little more complicated to explain. When you go to a website such as www.google.com, your computer needs to look up the domain name to obtain the internet address, such as 216.58.198.110. This is done by contacting a DNS server, typically provided by your ISP. In the early days of the internet, however, there were no DNS servers; instead the computer looked in a hosts file which manually maps domain names to IP addresses, for example:

216.58.198.110 www.google.com

The hosts file typically has priority over the DNS server, so you can use it to override domain name resolution. This is usually done by redirecting the domain to the local loopback address, 127.0.0.1, or to 0.0.0.0, which effectively blocks the domain.

On Windows the hosts file can be found at C:\Windows\System32\drivers\etc\hosts
On Linux and most other UNIX-based systems it can be found at /etc/hosts
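
As a sketch, blocking entries in the hosts file look like this (the domains here are placeholders rather than a real blocklist):

0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
0.0.0.0 banners.example.org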

To make things easier you can find hosts files online that already block the majority of advertising providers and other unsafe domains. I'm currently using Steven Black's hosts file, which is compiled from several different reliable sources.

This applies to all applications on your system, but I still recommend combining it with a browser plugin for maximum coverage.

DNS Filter

Rather than in the hosts file, blocking can also be done at the DNS server level, either by setting up your own DNS server or by using a public DNS service such as OpenDNS.

Personally I don't use these public services out of privacy concerns, but if you want a very simple method that needs no maintenance this might be for you. One big advantage is that it works on devices where you cannot typically access the hosts file.

If you have a Raspberry Pi lying around, consider installing Pi-hole on it for a super simple hardware DNS server.
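
Pi-hole's documented one-step install is a single command; as always, review any script before piping it into a shell:

curl -sSL https://install.pi-hole.net | bash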

Advanced Blocking

Sometimes you may run into an advert that is not blocked by any of your installed methods. In a web browser it's easy to add new blocking rules, but outside the browser you may need to find which domain it's coming from.

This is easily done with tools such as Process Explorer or Wireshark, which can show all HTTP connections; with a little effort you can usually locate the offending domain. For cases where the connection is made directly by IP address, you can block it using a firewall such as TinyWall or Windows Firewall.

File Compression Guide

When sending files online, good compression is essential for saving bandwidth. While many people these days have quite fast download speeds, there are even more with speeds below 8Mbps.

File compression consists of two parts: the archive format and the compression algorithm. Many archive formats support various compression algorithms; the most noticeable example is the .tar archive, where it's common practice to add the type of compression as a suffix, for example .tar.gz or .tar.bz2. Other formats like .zip, .rar and .7z specify a preferred compression method.

For this article I'm going to be using 7-Zip, which offers a variety of compression algorithms and archive types; it's also completely free and open source.

Testing

This test will be done on three different types of file: the first is the NVIDIA driver installer (361.91-desktop-win10-64bit-international-whql.exe), the second a PDF book and the third a large plain text file. This matters because the compression ratio depends on the file type; installers, for instance, are typically already compressed, so I expect minimal compression there.

File        Uncompressed Size
Installer   321 MB (337,507,360 bytes)
PDF Book    114 MB (120,225,893 bytes)
Text File   9.13 MB (9,584,473 bytes)

For the first benchmark I will be compressing each with LZMA2 using the 7z archive format, which is the default and recommended for 7-Zip. Other options are at their defaults: compression level normal, dictionary size 16MB, word size 32, solid block size 2GB, CPU threads 2.
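
For reference, roughly the same settings can be reproduced with the 7-Zip command-line tool; the archive and file names here are just placeholders:

7z a -t7z -m0=lzma2 -mx=5 -md=16m archive.7z file-to-compress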

File        Compressed Size   Compression Ratio   Compression Time
Installer   321 MB            100%                ~43 seconds
PDF         109 MB            95.6%               ~17 seconds
Text        1.40 MB           15.3%               ~4 seconds

As we can see from these results, plain text has by far the best compression ratio, while the installer did not benefit at all; in some cases compression may actually increase the size. The PDF had a reasonable improvement, but this depends on how the PDF is internally compressed.

Now let’s try again but with the compression level set to ultra.

File        Compressed Size   Compression Ratio   Compression Time
Installer   N/A               N/A                 N/A (7-Zip froze, see below)
PDF         107 MB            93.8%               ~26 seconds
Text        1.39 MB           15.2%               ~4 seconds

The results here are rather interesting. The installer caused 7-Zip to freeze on ultra, so I was unable to see if there is any compression; the PDF shows a reasonable gain at the cost of compression time, while the text file remains mostly the same.

Compression level isn't the only thing you can tweak. Dictionary size can have a major effect on the compression ratio but also enormously increases the memory requirement for compression and decompression. The default 16MB is rather conservative; ultra defaults to 64MB, which is much better, and you can gain a little more by increasing it, though above 128MB the gains are generally minimal.
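
As a sketch, an ultra-level run with a 128MB dictionary looks something like this on the command line (file names are again placeholders):

7z a -t7z -m0=lzma2 -mx=9 -md=128m archive.7z file-to-compress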

This test is a little unrealistic, as you will often be compressing many files, so let's try a mix of different file types with an uncompressed size of 132MB.

Compression          Compressed Size   Compression Ratio   Compression Time
Default              117 MB            88.6%               ~9 seconds
Ultra                90 MB             68.18%              ~24 seconds
Ultra + 128MB Dict   89.7 MB           67.95%              ~22 seconds

I was a little surprised that the larger dictionary size actually took less time. It really goes to show that the types of files determine how far you can compress more than anything else.

Conclusion

I was expecting more definitive results as to what is better, but as these tests show, it varies on a case-by-case basis. I would certainly recommend you stick to LZMA2, as benchmarks by many people have shown it to be the best in terms of compression ratio, memory use and, for the most part, compression time. Things like .zip with deflate (i.e. WinZip) should be avoided these days.

If you really need good compression then the only true way is to test various settings against what you are actually trying to compress.

For things like video, audio and images, general-purpose compression isn't really the answer; using a different format or codec is the way to go, since compression can only go so far.

Updating GNU GCC on Linux

GCC is a major part of any Linux system. It contains C, C++, Fortran and Java compilers (plus some extras if enabled at build time), and it's the only compiler recommended for building the GNU C library (glibc), which is required to make a Linux system work. Alternatives are available but not as common.

Most systems use an older compiler for stability reasons, since it has been significantly tested. However, it's sometimes desirable to use a cutting-edge compiler for maximum performance. Generally, though, you should not replace your system compiler unless you're happy to deal with any bugs that may appear; this is mainly a concern on a source-based system like Gentoo or the BSDs.

For this post I'm going to be installing GCC 5.3.0 on Xubuntu 15.10 x64 with the following libraries:

  • GMP 6.1.0
  • ISL 0.16
  • MPC 1.0.3
  • MPFR 3.1.3

You can use your system's versions of these, or build them along with GCC, which is what I chose to do.

Prerequisites

The following tools must be on your system if you intend to follow this post:

  • GNU Bash (Tested 4.3.42)
  • GNU Make > 3.80 (Tested 4.0)
  • GNU GCC > 4.9.x (Tested 5.2.1)
  • GNU G++ > 4.9.x (Tested 5.2.1)
  • binutils > 2.25 (Tested 2.25.1)
  • awk (Tested mawk 1.3.3)
  • tar, gzip and bzip2 (for unpacking sources)
  • InfoZIP (Tested 3.0)
  • DejaGnu
  • TCL
  • Expect

On Xubuntu 15.10 x64 I only had to run the following:

sudo apt-get install g++ dejagnu

Setting up your build environment

For this I decided to use a separate partition mounted at /media/dev.
Make a folder for your sources and another for the build; you need around 15GB of free space. This is what I ended up with:

/media/dev
    sources/
    build/gcc-build/
    tools/
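
A sketch of the commands to create this layout, assuming the partition is already mounted at /media/dev:

mkdir -p /media/dev/sources /media/dev/build/gcc-build /media/dev/tools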

In the sources folder I downloaded and unpacked GCC 5.3.0 and the support libraries listed above. The libraries are optional, but I recommend building them rather than using the system ones.

If you decide to build the support libraries, you need to move them, or create symbolic links to them, inside the GCC source directory so it ends up like this:

/media/dev/sources/gcc-5.3.0
            gmp/
            isl/
            mpc/
            mpfr/
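
Hypothetical commands for the unpack-and-symlink step; the tarball names assume the versions listed above:

cd /media/dev/sources
tar xf gcc-5.3.0.tar.bz2
tar xf gmp-6.1.0.tar.bz2 && ln -s ../gmp-6.1.0 gcc-5.3.0/gmp
tar xf isl-0.16.tar.bz2 && ln -s ../isl-0.16 gcc-5.3.0/isl
tar xf mpc-1.0.3.tar.gz && ln -s ../mpc-1.0.3 gcc-5.3.0/mpc
tar xf mpfr-3.1.3.tar.bz2 && ln -s ../mpfr-3.1.3 gcc-5.3.0/mpfr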

Once that is done, go to your build directory /media/dev/build/gcc-build; unlike many applications, GCC keeps the build directory and the source completely separate.

Configuration

There are a lot of configuration options available, so I highly recommend you check out the official documentation and other sources (the official documentation is surprisingly sparse). This is what I used to configure:

../../sources/gcc-5.3.0/configure --prefix=/media/dev/tools --disable-nls --disable-multilib

Let’s take a look at each bit:

../../sources/gcc-5.3.0/configure
This is the path to the configure script; you could use an absolute path here instead.

--prefix=/media/dev/tools
The prefix is where the fully built compiler and related files will be installed when you run make install. Other good locations are /usr/local and /usr/opt. Do not put it in /usr unless you are completely sure what you're doing, and do not leave this unset (the default is /usr/local).

--disable-nls
This disables the native language system, which provides error messages in your native language. Unless you have trouble with English or are building for distribution, you should turn NLS off.

--disable-multilib
Without this GCC will be built for both x86 and x64. For my system I have no interest in 32-bit, so it's disabled. Keep in mind you will need 32-bit versions of GCC and glibc installed in order to build multilib.

One thing you might want to add is --disable-werror, as the build may otherwise stop on warnings that are treated as errors; this is nothing to worry about, since you will be checking the compiler later.

Once the configuration is complete we can proceed with the build.

Compiling

The next steps are really simple but rather time-consuming. For a complete build of GCC you should allow at least 4 hours on a fast system (I built on an Intel Core i7-4790 and it took quite a while); on a very slow system you might want to run it overnight.

As for why this takes so long, there are two reasons: first, GNU GCC is a complicated piece of software, and second, it has to be built a bare minimum of three times.

The first build, known as stage 1, uses your system compiler to build the new version of GCC; this is called bootstrapping. If all is well it goes on to stage 2, where the compiler is built again with the version you just built, ensuring the compiler is properly optimized. A final stage 3 build is then done and a comparison is made between stages 2 and 3, which verifies that the compiler is stable.

One important warning though: if you're going from a rather old compiler straight to the latest version, there is a very good chance the compile will fail or other strange errors will appear in the resulting compiler. If this occurs you must use a version closer to the system compiler and build your way up to the latest; from a very old compiler this can take several steps, so always start with the latest compiler available for your system.

The binutils version isn't as important; if you want, you can put binutils in the source tree and it will be built along with GCC.

To compile run the following:

make -j9 bootstrap

The -j9 option tells make to use 9 parallel jobs, which speeds up the process many times, so make sure you include it. The number should be the total number of logical cores plus 1 (as a general rule); for an i7 with 4 physical cores and hyper-threading this comes to 9.

If you don't have much disk space, try the bootstrap-lean target instead; this will take longer but should use less disk space (I have no idea how much less).

BOOT_CFLAGS can be set to adjust how the compiler itself is built; the default is -g -O2. Feel free to adjust it, but be aware that if you go too far the build may break.
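
For example, a sketch passing custom flags on the make command line (the extra -pipe flag is just an illustration):

make -j9 BOOT_CFLAGS="-g -O2 -pipe" bootstrap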

Do not run make without the bootstrap target; plain make is only for when the system compiler and the version being built are the same.

Testing

Once bootstrapping is complete, hopefully without errors, you can move on to the checking process. This takes a long time as well, but it's really a bad idea to skip it unless you are repeating a build; let me say this again, it's really a bad idea to skip.

To run the tests:

make -j9 -k check

The -k option tells make to keep going rather than stop at the first error. At the end you will get a report consisting of:

  • Tests performed
  • Tests that passed
  • Tests that passed unexpectedly
  • Tests that failed
  • Tests that failed unexpectedly

It's really only the last one you need to worry about. If the number is low (preferably zero) you should be okay; if it's more than five you really need to stop and check them. It's a good idea to report any failures to the gcc-testresults mailing list.

Installing

To install simply run:

sudo make install

Assuming you remembered to set your prefix, it should all end up in the right place. If you really want to install into /usr, it's a good idea to set either --program-prefix or --program-suffix so the new compiler gets a different name to your system compiler.
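
A hypothetical configure line using a suffix, so the new compiler installs as gcc-5.3 alongside the system one:

../../sources/gcc-5.3.0/configure --prefix=/usr --program-suffix=-5.3 --disable-nls --disable-multilib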

Go to the bin folder under your prefix and run ./gcc -v; you should see something similar to this:

Using built-in specs.
COLLECT_GCC=./gcc
COLLECT_LTO_WRAPPER=/media/dev/tools/libexec/gcc/x86_64-unknown-linux-gnu/5.3.0/lto-wrapper
Target: x86_64-unknown-linux-gnu
Configured with: ../../sources/gcc-5.3.0/configure --prefix=/media/dev/tools --disable-nls --disable-multilib : (reconfigured) ../../sources/gcc-5.3.0/configure --prefix=/media/dev/tools --disable-nls --disable-multilib
Thread model: posix
gcc version 5.3.0 (GCC)

Conclusion

It really isn't that hard to update GCC, and it's well worth it if you're a programmer or just want to optimize your system as much as possible.

Installing Minecraft on Linux

Installing everyone's favorite game (well, okay, not everyone's) is fairly simple on Linux. As for why you'd want to install it on Linux, a lot of players find they get much better performance, especially when you start adding lots of mods.

Java

The first thing you need to do is make sure you have Java installed; you can check by typing:

java -version

You should get something like this:

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

Mojang recommend that you use the Oracle JVM rather than IcedTea, but I've not had any real trouble with the latter on the latest 1.8.9 version of Minecraft, perhaps slightly slower performance but that's all. If Java is not installed, search for how to install it on your particular Linux distribution; on Gentoo, simply emerge jre and then oracle-jre-bin.

Installing Minecraft

Go to minecraft.net and download the launcher, then once the download is done, type:

java -jar Minecraft.jar

Log in to your Minecraft account and install the latest, or your preferred, version of Minecraft. Once that's done, launch it; if all is well it should run without problems. If you notice performance problems, try the Oracle JVM instead.
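
To save typing you could wrap the command in a small launch script; the install path and the -Xmx heap limit here are assumptions, adjust them to your setup:

#!/bin/sh
# Launch the Minecraft launcher with a 2GB Java heap limit
cd "$HOME/minecraft" || exit 1
exec java -Xmx2G -jar Minecraft.jar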

Some older Minecraft versions may benefit from updating the included LWJGL library, particularly if you get input issues.

Browsing the Web Securely

Browsing the web is one of the most dangerous activities when it comes to keeping your computer secure; the vast majority of malware and worse infections come through web exploits. This article covers some of the best ways to improve your security.

The Web Browser

Using a modern web browser that is regularly updated is one of the most important things you can do. Mozilla Firefox and Google Chrome are two of the most popular, but there are plenty of others out there that are just as good, such as Opera, Vivaldi, Chromium and SeaMonkey.

Most of these have versions for mobile devices, although in my opinion it’s best to avoid doing anything important on a mobile device, particularly with Android.

Block Advertising

A large percentage of malware is delivered through online advertising, so it's absolutely critical that you block it. Whitelisting certain websites is also a bad idea, since malicious adverts can appear even on major websites like YouTube.

There are a variety of ad blockers available, some of the most common being Adblock Plus and uBlock Origin. Personally I recommend the latter, as it uses fewer resources and allows no ads by default.

Another form of ad blocking (also used for other purposes) is a custom hosts file, which stops the computer from connecting to the listed websites. This is best used in combination with an ad blocker; one good hosts file is Steven Black's, mentioned earlier, which comes with usage instructions.

Browser Plugins

Plugins like Flash and Java are a big no if you're looking for security; flaws in these can easily expose your system to serious infections. If you need to use them, make sure you always have the latest version and keep them disabled until needed.

JavaScript

The majority of serious malware makes use of JavaScript in combination with known web browser flaws to gain unrestricted access to the system or perform some other kind of attack. Disabling JavaScript when visiting unknown sites is the best thing you can do; unfortunately, JavaScript is also used by almost all websites for interactive content.

One way to make this simpler is to use a browser extension such as NoScript, which blocks all scripts by default so you have to manually accept them. This is a little time-consuming, but it only needs to be done on your first visit to each website, and in addition it gives you more control over what the website can do.

Cross-site Scripting (XSS)

A cross-site script is a script that reads or sends content to another website; one simple example is loading an image hosted on another website. The problem is that without proper care and design it's possible to exploit this to read private data or inject malicious code.

The risk of this cannot be emphasised enough; many major websites such as YouTube, Twitter and Facebook have been attacked using XSS. The best way to protect yourself is to use a browser extension that blocks all cross-site requests by default, such as RequestPolicy.

Virtual Machine

Perhaps the only true way to ensure the security of your computer is to browse the web in a virtual machine. This is often time-consuming to set up but well worth the effort; this way you can be reasonably sure that even if you are infected, the infection will be contained to the virtual machine. A lesser kind of virtual machine is a sandbox, which basically creates an isolated container; this isn't nearly as secure as a virtual machine but is much quicker to set up.

Secure Operating System

If you're using Windows then you're at significantly higher risk of infection, simply due to the number of users. The quickest way to boost security is to switch to Linux, BSD or Mac OS X (if you can afford it). This is not for everyone, but it's well worth giving a try, and these can also be used in a virtual machine.

Use Anti-virus Software

Having anti-virus software installed is very important; it's usually the final barrier stopping an infection, particularly as most now scan changes as they are made, so malware and other nasty stuff is caught before it can actually cause any problems. This comes at a small cost to system performance, but the loss is well worth it.

Anti-virus software should not be confused with anti-malware software. Most anti-malware software deals with minor things such as adware and tracking cookies, which anti-virus software will often ignore, so it's good to have both.

Password Security

In the event that a website you use is compromised (all too common these days), it's important that you have a unique password for each website. These can be hard to remember, so a program like KeePass is extremely useful; it also allows the use of much longer passwords, helping to prevent dictionary and brute-force attacks.

Suspicious Sites

Always look at the URL before you click a link; unusual domains like .tk, and domains in countries like Russia and China (assuming you don't live there), should be avoided.

If your browser has the option, or there is an extension available, you should disable automatic redirects; there have been many cases where a normal site has been hacked and changed to redirect you to an attack site.

Downloads

Another common technique to catch people is the drive-by download, where a download will randomly pop up when you reach an infected page (usually triggered by a script). Always check the download name, size and file extension; if you're even the slightest bit concerned, scan it before opening. A final failsafe is to open any download in a virtual machine.

Another way to verify a download is to check its checksum, if one is available; any change or corruption of the download will alter the resulting checksum. IgorWare Hasher is a free Windows tool you can use, while Linux and BSD usually have something installed already.
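
On Linux, for example, checking a SHA-256 sum is a one-liner (the file name is a placeholder):

sha256sum some-download.iso
# compare the printed hash against the one published on the download page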

HTTPS

HTTPS encrypts the data being sent and received by your web browser with SSL/TLS. Most websites support encryption, but not all have it enabled by default. Always make sure the website is using HTTPS when sending sensitive data; this is usually indicated by a padlock icon near the address bar and the URL starting with https://

A nice little browser extension is HTTPS Everywhere, which forces the use of HTTPS where available, among other features.

Autofill

Most web browsers can remember your passwords to make things easier and quicker. However, this is a big security risk that is often targeted by malicious scripts and software, so it's strongly recommended that you disable it.

Advanced Authentication

Many websites now offer more advanced authentication, verifying your email address or sending you an SMS message rather than relying on a password alone. This can be a bit annoying, but for important accounts you should always enable it.

Conclusion

Good browsing security isn't difficult; most of it comes down to common sense, but hopefully you have learned something of use from this article.

Secure Communication With GnuPG

Nowadays, with the increase in spying by organizations such as the NSA and regular attacks against websites, it's more important than ever to secure your data and communication.

GnuPG is a free and open source OpenPGP implementation, available on every major operating system, that provides strong encryption. Before making use of GnuPG it's important to understand some concepts.

Basic Concepts

Public Key

The public key is an encryption key that is intended to be publicly available; at the very least, the sender will need your public key to encrypt a message to you. Public keys are also used to verify signatures so that any unauthorized changes can be detected.

Secret Key / Private Key

The secret key is used to decrypt a message or file that has been encrypted with the matching public key. The secret key should, as the name suggests, be kept secret; it provides additional security on top of the passphrase. Secret keys are also used to sign files, so that anyone with your public key can verify the file has not been changed.

Passphrase

A passphrase is used when decrypting or signing a file. It generally follows the same rules as a good password except that it should be much longer; around 40-60 characters is considered typical, which is for all intents and purposes unbreakable.

Installing GnuPG

There are two main versions usually installed, stable and classic, run with the commands 'gpg2' and 'gpg' respectively. In most cases you should use the stable version of GnuPG; on Linux most distributions come with it already installed.

For Windows users the easiest way is to install Gpg4win, which can be downloaded with a graphical front end for easier use.

Using GnuPG

For this I'm going to assume you're using the traditional command line and not a graphical front end; if you are using one, check the documentation for your particular front end.

Generating a key pair

Generating a key pair (the key pair being the public and secret keys) is really simple; just run the following command:

gpg2 --gen-key

You will then be asked the following:

Please select what kind of key you want:
 (1) RSA and RSA (default)
 (2) DSA and Elgamal
 (3) DSA (sign only)
 (4) RSA (sign only)
Your selection?

In almost all cases you want option 1, which is considered by many to be the most secure. Next you will be asked to specify the key size.

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 

2048 bits should be the absolute minimum for good protection, with 4096 being the optimal choice where speed or size isn't a major concern; keep in mind slower machines may have trouble dealing with large keys.

Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)

You will then be asked for an expiry date. It's a good idea to set one, since it provides a clear indication to users that they should update their key; a key that has expired can still be used normally, although many key servers will remove expired keys.

GnuPG needs to construct a user ID to identify your key.

Real name: TestKey
Email address: Test@Test.com
Comment: Test Key
You selected this USER-ID:
    "TestKey (Test Key) "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? 

You will be asked for your real name, email address and a comment to help identify the key, both for yourself and for others.

Once this is done you will be asked for the passphrase. It's a good idea to practice typing your chosen passphrase before making the key, since writing it down would defeat the purpose and there is no way to recover a lost passphrase.

The key will then be generated; as GnuPG suggests, you should move your mouse around randomly to help generate a truly random key.

gpg: key EA40ACC3 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   5  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 5u
pub   4096R/EA40ACC3 2015-11-08
      Key fingerprint = CA97 B246 2A38 A2EA 6A0D  0228 2CBE 7277 EA40 ACC3
uid       [ultimate] TestKey (Test Key) <Test@Test.com>
sub   4096R/C0523162 2015-11-08

This shows the keys were successfully generated; the key ID is EA40ACC3. It also shows the public key fingerprint, which the recipient uses to verify with you that the key they received is correct, along with the key name, comment and email address and the trust level, which for a newly generated key is ultimate (it's your key, you trust it fully). A subkey is also shown; this should not be confused with the secret key, which is not shown here. GnuPG by default uses a separate subkey just for encryption, here with the ID C0523162.

Viewing your key pair

To view the public key again you can use:

gpg2 -k Test
pub   4096R/EA40ACC3 2015-11-08
uid       [ultimate] TestKey (Test Key) <Test@Test.com>
sub   4096R/C0523162 2015-11-08

To view the secret key:

gpg2 -K Test
sec   4096R/EA40ACC3 2015-11-08
uid                  TestKey (Test Key) <Test@Test.com>
ssb   4096R/C0523162 2015-11-08

In this case ssb is the secret subkey matching the public subkey C0523162. If you run these commands without Test they will show all keys in your keyring, which is created automatically the first time you run GnuPG.

Exporting and using a key server

It's very, very important that you always have a secure backup of both your public and secret keys, which you can make by exporting them. To export the public key (the public subkey is automatically included):

gpg2 --armor --export -o pub.asc Test

This exports the public Test key to the file pub.asc; the --armor option makes it output the key in plain text rather than binary. The result looks something like this:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2

IZKKutUkIBxjUTv7NJUOewfJ0HtD/O/vDTMdGsbyV8t2Fdc0GLnmrpuHaiwDlNHt
vYgFz3WVTaYKxMSES/DfJFf492Hfk4PFGn5xn/4MAcbCQ5LAQkYQViU5LrBJahDO
4tBb4ykayAIqDgqEOIGRBeaU65Zp7LSbxK1noHnHskZzI41uRFR0DbeqNXOTu6Dd
NuvdojaVnzfbnLoh2N58rkc9rAIJcQ3IJN3ZABEBAAGJAh8EGAEIAAkFAlY/j4AC
GwwACgkQLL5yd+pArMO+hQ/7Bgz+vZA2gXP5ceGh32MMIhCWgFPyehzHOqK50iOQ
JwrxWl+2qqQB/+xtQFgxXGfSDJTYhPAaVhpPObX09zeEgz1UH3Va2Rih/SH9C2Dw
swtNh/aqJ6OPGa//YNyk++rva/nQR5w5NTAx6iNBcFdneKnM5G++au0vfO5JjXtG
sAAN0h3v6U3W7lEKB02jhAQM/6xwVK4mvs8ff0pY+4Iem3UrrD0UzqpGnI2GEz5a
lZ+9NjSAJIeji2xSrWCoOV4Ru9a5Y1JX0DL8WKWTgJJM2Ga4Y5itVQj6pTueuyOK
eBfjVkejgmYvr4DbVk8OaJ2oopUJTIhPN7fanmMaW9Cp0lUW+Ml2fpBfPZEq4/XV
1J8eyIdw7OTl/X5GUg91WH2xZD2SCGSVhUFEzts+B1CUxfQNFViFn4ew77VWX+za
NRY8mQxjyVU2wNgdKBivu8+cmRtk+iLG5sXDpqaI1QlUpxwvkmm6I0PGV+EnOnvW
3yboQpWXuD4SZDl386hQaF09kGodnUp6Ct77l3Cex2Lo29pdnE5Mwu0fK3sJkcHI
oMffoKRD8zy6Jp7t0RbpiHlkt3a6CaSjZ8vT14djPb1sRn/jO+KDOKZVeFiE+JPD
F4WEMhk74QOqlQH1IvWjcCxCZWKvjNB4eXgdzLlbnZnI26AFwkEoxNqBPjmtUaQC
uvg=
=BGpH
-----END PGP PUBLIC KEY BLOCK-----

It's very important you do not modify this in any way; it can now be stored or sent to whoever you want.

To export the secret key:

gpg2 --armor --export-secret-key -o sec.key Test

This should never be given to anyone, but it's also of critical importance that you have it safely stored; if you lose it you can never decrypt or sign with the key again.

Public keys are often stored on a public keyserver for convenience. To send your key to a keyserver (in this example I'm sending it to pgp.mit.edu):

gpg2 --keyserver pgp.mit.edu --send-keys EA40ACC3

To get a key from the keyserver use:

gpg2 --keyserver pgp.mit.edu --recv-keys EA40ACC3

It should be noted there is no way for you to delete a key from the server; some servers may delete keys that have expired or that have received a revocation certificate. Generating a revocation certificate requires the secret key, so it's best done right after key creation:

gpg2 --output pub-revoke.gpg --gen-revoke EA40ACC3

To use it, import the revocation certificate and then send the key to the keyserver again:

gpg2 --import pub-revoke.gpg
gpg2 --keyserver pgp.mit.edu --send-keys EA40ACC3

Importing Keys

Importing a key is very easy for both public and secret keys:

gpg2 --import pub.asc
gpg2 --import sec.key

A key that is imported is untrusted by default; to change this you need to enter key editing mode:

gpg2 --edit-key test

This will show the usage of each key and subkey: S for signing, C for certifying (creating certificates), A for authentication and E for encryption.

You can get the fingerprint by typing 'fpr'; you should always use this to verify any key sent to you.

You can switch between the public and secret keys by typing 'toggle'.
To change the trust, type 'trust' and answer the question that follows; ultimate trust should only be assigned to keys you are 100% sure of.
When you are done, make sure you type 'save'.
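
Put together, a trust-setting session looks roughly like this, using the test key from earlier:

gpg2 --edit-key test
gpg> fpr      # show the fingerprint so you can verify it
gpg> trust    # answer the trust level question
gpg> save     # write the changes and exit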

Encryption and Decryption

Encryption is the main purpose of GnuPG and can be done in two ways: encrypting a message or encrypting a file.
To encrypt a message use:

gpg2 --armor -r Test -e

You can now type your message; when finished, hit Ctrl+D and the encrypted message will be printed:

-----BEGIN PGP MESSAGE-----
Version: GnuPG v2

hQEMAxzJ34dtBd7eAQgAjf/jdG58LvYt8//szu25fw5GGc4GoAosaWuGT6RmciD5
Pqj7SQlYFs0qodT42OYpSs1cSSdyR6Xz5XCJj/H+Z3FeOmHmQqdPbI5igsq7inZe
KU4to8vrMPifTavumhuXOoY/sBEWnoSwb95DeLfZs46rziFWIUrajzqsl5uXmO5U
2culeNUfga1ppad/P73WFr7m5sdQcnizVbs1w+ODxOKc082P1zzxfJ9x49uZQmTZ
dRRUp/ajPIPiYfckqlV68S9BYh4IjNtXL3dbf4BpxS98B07PQsmAgni39+J+Sie6
2nvLq0sDxdcv9Dxsd3eBYCUhCuUzpQHwx48tjXh5LNJGAa39Ctz5NHrt7he5vy3D
WXLSHOpv3eAQXr/7zpN1m325D+4P45XMZTcPDBjXO6CJVrHjRTDLE2rsT9EjksFN
VNi/MlRDVA==
=/qRd
-----END PGP MESSAGE-----

To decrypt a message use:

gpg2 -d

Paste in the message and it should prompt for the passphrase to unlock your secret key; if not, try hitting Ctrl+D.
If the passphrase is correct you will see the original message.

The Windows console does not recognize Ctrl+D as end-of-file (it uses Ctrl+Z instead), so as far as I can tell this method will not work there.

Encrypting a file can be done like so:

gpg2 --armor -r Test -o test.txt.gpg -e test.txt

Since --armor outputs ASCII, you can send the encrypted text in test.txt.gpg in the body of an email (as long as the recipient knows it's a file and the intended file extension) or as an attachment.
To decrypt:

gpg2 -o test.txt -d test.txt.gpg

File Signing

Signing a file is a great way to ensure the recipient can verify that they have the correct, unmodified file; it's similar in many ways to a file hash.
To sign a file do:

gpg2 --armor -u Test -b test.txt

This will create a test.txt.asc file which contains the file signature. This is known as a detached signature and is most often what you want; include it along with the file you are sending.

If you are sending an encrypted file you should generate your signature from the original file. On UNIX systems it's common to sign a .tar archive before it gets compressed, for example with bzip2, as sketched below.
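
A sketch of that convention, with hypothetical file names:

gpg2 --armor -u Test -b backup.tar   # sign the uncompressed archive
bzip2 backup.tar                     # then compress; send backup.tar.bz2 and backup.tar.asc
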
To verify a detached signature:

gpg2 --verify test.txt.asc

Conclusion

This may look like a lot of work, but it's really very simple once you've done it a few times. There are a number of friendly user interfaces available, such as GPA, so you don't have to remember the commands; for quick reference you can type 'gpg2 --help'. This guide doesn't cover everything, but it's enough to get you started.

Online security is very important so it’s well worth taking the time to learn how to use encryption effectively.