How to retrieve the SHA1 fingerprint of a remote server certificate using OpenSSL

I needed to retrieve the SHA1 fingerprint of a remote IMAPS server I was working with. This command did the trick:

$ openssl s_client -showcerts -connect imap.example.com:993 </dev/null \
    | openssl x509 -noout -fingerprint

By changing the server address and port, openssl can also query other services which use TLS. For example, an HTTPS server:

$ openssl s_client -showcerts -connect www.example.com:443 </dev/null \
    | openssl x509 -noout -fingerprint

To get the SHA-256 fingerprint, I had to first download the certificate and then use -fingerprint -sha256:

$ openssl s_client -showcerts -connect imap.example.com:993 </dev/null \
    | openssl x509 > cert.pem
$ openssl x509 -noout -fingerprint -sha256 -inform pem -in cert.pem
SHA256 Fingerprint=F2:81:54:13:73:3A:7B:18:33:B1:49:C5:D8:A8:14:68:5B:A1:E1:24:C6:CC:0D:45:EA:0E:A6:A7:AA:7A:C1:08

And then MD5:

$ openssl x509 -noout -fingerprint -md5 -inform pem -in cert.pem
MD5 Fingerprint=05:97:71:33:F6:18:3F:C1:50:8F:1E:79:47:F9:B6:57
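On reasonably recent OpenSSL versions the intermediate file can be skipped, since openssl x509 accepts a digest flag directly in the pipeline. A sketch, reusing the same placeholder host:

```shell
# Print the SHA-256 fingerprint in one pipeline. imap.example.com:993
# is a placeholder for the real server; </dev/null closes stdin so
# s_client exits once the handshake completes.
openssl s_client -showcerts -connect imap.example.com:993 </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256
```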

Run salt in masterless mode

salt is an exceptional tool that helps system administrators manage servers and perform configuration management. salt can be configured to run in masterless mode where the salt states, grains and pillars can be distributed among many servers using git.

I use this pattern to manage personal servers spread across multiple regions which aren't connected to a single network where I could securely run a salt master. Instead of using the distribution's package management tool, I use Python's virtualenv to install the salt package and its libraries.

Let's get started by first installing the salt package within a virtualenv:

$ virtualenv /opt/virtualenv/salt
$ /opt/virtualenv/salt/bin/pip install salt

Create the base dir where the pillar, states, configuration, pki files and a few helper scripts live:

$ mkdir -p /srv/salt/states \
    /srv/salt/conf \
    /srv/salt/pillar \
    /srv/salt/pki

Create a /srv/salt/salt-call.sh script (the name here is our choice) which runs salt-call with a few extra options that configure it for a masterless setup:

#!/bin/bash
sudo /opt/virtualenv/salt/bin/salt-call --local \
    --config-dir=/srv/salt/conf \
    --id=${HOSTNAME} \
    "${@}"

Create the salt minion file which lives in /srv/salt/conf/minion:

file_client: local
file_roots:
  base:
    - /srv/salt/states
pillar_roots:
  base:
    - /srv/salt/pillar
include:
  - /srv/salt/conf/grains.d/*
pki_dir: /srv/salt/conf/pki

Create the salt master file which lives in /srv/salt/conf/master:

open_mode: True
file_roots:
  base:
    - /srv/salt/states
pillar_roots:
  base:
    - /srv/salt/pillar

And let's create a test salt state which simply installs git. This state lives in /srv/salt/states/git/init.sls:

git:
  pkg.installed

You can now call the script and pass it any args that salt-call accepts.

For example, let's call state.sls on our new git state:

$ bash /srv/salt/salt-call.sh state.sls git

This can also list grains:

$ bash /srv/salt/salt-call.sh grains.ls

Extending on this, we can set up /srv/salt/states/top.sls:

base:
  '*':
    - git

So now state.highstate works as expected:

$ bash /srv/salt/salt-call.sh state.highstate

And set up a test pillar in /srv/salt/pillar/secrets.sls:

admin: secret

And /srv/salt/pillar/top.sls:

base:
  '*':
    - secrets

And pillars should now work:

$ bash /srv/salt/salt-call.sh pillar.items

To start sharing your salt states among other machines, commit /srv/salt to git. As long as the salt virtualenv is set up, you now have a way to manage multiple machines using salt without the need to run a salt master.


salt provides a portable way to manage servers using an easy-to-understand syntax. By using a masterless setup, system administrators can manage servers that span regions and networks where it may not be feasible to set up and manage a salt master server.

Editor's Note

I've been using this salt masterless pattern since 2013 to manage personal servers which run a wide range of distributions. The setup is working well for me so far and provides me with a way to manage services and packages using a single codebase on multiple machines which sit on different networks.

Configure the I2p i2ptunnel HTTP proxy to listen on all interfaces

By default, the i2p HTTP Proxy client tunnel will listen on localhost only:

# lsof -n -i:4444
java    2078 i2psvc  117u  IPv6  75875      0t0  TCP [::1]:4444 (LISTEN)

To configure the i2p HTTP Proxy client tunnel to listen on all interfaces, navigate to the i2ptunnel configuration section by visiting the I2P Router Console (listening on port 7657) > Web Apps > i2ptunnel > I2P HTTP Proxy. Then change the Reachable by address in the dropdown menu from (localhost) to (all interfaces).

Click Save then Restart All.

Configure the I2P web console to listen on all interfaces

By default, the i2p web console application will listen on localhost only:

# lsof -n -i:7657
java    1150 i2psvc   86u  IPv6  17751      0t0  TCP [::1]:7657 (LISTEN)
java    1150 i2psvc   90u  IPv6  17753      0t0  TCP (LISTEN)

To configure i2p to listen on all interfaces, edit /var/lib/i2p/i2p-config/clients.config and replace:

clientApp.0.args=7657 ::1, ./webapps/

with:

clientApp.0.args=7657 ./webapps/

Then restart the i2p service:

# systemctl restart i2p.service

lsof will show i2p listening on all interfaces:

# lsof -n -i:7657
java    1604 i2psvc   86u  IPv6  19974      0t0  TCP *:7657 (LISTEN)

Delete lease from dnsmasq

dnsmasq keeps track of the DHCP leases it has handed out in a file defined by the dhcp-leasefile config option.

To delete a lease from dnsmasq, first stop dnsmasq.

$ sudo systemctl stop dnsmasq

Remove the lease from the file. On Ubuntu this lease file defaults to /var/lib/misc/dnsmasq.leases.
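Each lease in the file is a single line: expiry timestamp, MAC address, IP address, hostname and client-id. A sketch of removing one lease with sed, using made-up addresses against a scratch copy of the file:

```shell
# A scratch lease file with two made-up leases; on a real system you
# would edit /var/lib/misc/dnsmasq.leases (with dnsmasq stopped).
cat > /tmp/dnsmasq.leases <<'EOF'
1700000000 aa:bb:cc:dd:ee:ff myhost *
1700000000 11:22:33:44:55:66 otherhost *
EOF

# Delete the lease for MAC aa:bb:cc:dd:ee:ff in place.
sed -i '/aa:bb:cc:dd:ee:ff/d' /tmp/dnsmasq.leases

cat /tmp/dnsmasq.leases
```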

Once the lease is removed, start dnsmasq.

$ sudo systemctl start dnsmasq

A quick intro to background and foreground command control in bash.

Appending an & (ampersand) to any command run within bash will background the process.

For example:

for count in $(seq 1 10); do
    sleep 10m &
done
[11] 12775
[12] 12776
[13] 12777
[14] 12778
[15] 12779
[16] 12780
[17] 12781
[18] 12782
[19] 12783
[20] 12784

This backgrounds 10 processes which simply sleep for 10 minutes. Bash prints a table of job ids and their PIDs.

To list a table which shows all background processes, use the built-in jobs command.

$ jobs -l

You can also use ps to list the background processes by requesting the processes whose parent pid is your current bash shell.

$ ps -O user --ppid $$

Using ps this way also shows details such as each process's owner and state.

To connect to one of the background processes, use fg and the job id number.

$ fg 11
sleep 10m

To put job 11 back into the background, first suspend the process with ^Z (Control-Z), then run bg specifying the job id.

$ fg 11
sleep 10m
[11]+  Stopped                 sleep 10m
$ bg 11
[11]+ sleep 10m &

To put fg, bg and ^Z to practical use, let's say you were copying a large directory which was going to take a long time, but you wanted to regain control of your current shell.

Suspend the current command using ^Z.

$ cp -a /media/backups/rsnapshot/hourly.0 /opt/restore
[1]+  Stopped                 cp -a /media/backups/rsnapshot/hourly.0 /opt/restore

Background the task with bg referring to the job id in the table.

$ bg 1
[1]+ cp -a /media/backups/rsnapshot/hourly.0 /opt/restore &

Confirm the process is running with ps.

$ ps -O user --ppid $$
19496 root     D pts/11   00:00:06 cp -a /media/backups/rsnapshot/hourly.0 /opt/restore
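Backgrounding is also what lets shell scripts run work in parallel: start each task with &, then use the built-in wait to block until every background job has finished. A small sketch:

```shell
# Start three one-second sleeps in the background, then wait for all
# of them; total elapsed time is roughly one second rather than three.
start=$(date +%s)
for i in 1 2 3; do
    sleep 1 &
done
wait
end=$(date +%s)
echo "elapsed: $((end - start))s"
```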

Stop the file tool from reporting the mime-type of a Python script as text/x-java

When using the file tool to report the mime type of a file, your distribution's file-libs package, which ships the patterns used to detect mime types, may be slightly outdated.

For example, take the python script below, saved as hello.py.

#!/usr/bin/env python

import sys

def main():
    print("hello world")

if __name__ == '__main__':
    main()
Running file over this python script will report a mime-type of text/x-java on older distributions such as CentOS 6 (see bug report #9999).

$ file --mime-type hello.py
hello.py: text/x-java

file will accept local magic data found in /etc/magic, so by adding the pattern below to /etc/magic, we should be able to have file report the correct mime-type for our hello world script.

0       regex    \^(\ |\t)*def\ +[a-zA-Z]+
>&0     regex   \ *\(([a-zA-Z]|,|\ )*\):$ Python script text executable
!:mime text/x-python

file now reports the expected mime-type:

$ file --mime-type hello.py
hello.py: text/x-python
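The pattern can also be tried without touching /etc/magic system-wide, by pointing file at a standalone magic file with the -m flag. A sketch, assuming the script from above is saved as hello.py:

```shell
# Write the candidate pattern to a scratch magic file.
cat > /tmp/python.magic <<'EOF'
0       regex    \^(\ |\t)*def\ +[a-zA-Z]+
>&0     regex   \ *\(([a-zA-Z]|,|\ )*\):$ Python script text executable
!:mime text/x-python
EOF

# Run file against the script using only the scratch pattern set.
file -m /tmp/python.magic --mime-type hello.py
```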

Set up sudo with google-authenticator for 2 Factor authentication on CentOS 7.

By configuring the google-authenticator PAM module with sudo, you can force system users to authenticate with a one-time passcode as well as their system password in order to use sudo.


The Google Authenticator project includes implementations of one-time passcode generators for several mobile platforms, as well as a pluggable authentication module (PAM).

To set this up on CentOS 7, we'll install the google-authenticator PAM module and update the server's PAM configuration.

First, install the tools required to build the google-authenticator PAM module.

# yum install -y git autoconf automake make libtool pam-devel

Clone the google-authenticator git repo, build and install the plugin.

# git clone
# cd google-authenticator/libpam
# ./
# ./configure
# make
# make install

This will install the google-authenticator binary and the PAM module under /usr/local.

Before continuing, login as root and do not exit from this login whilst making changes to your system. A mistake could lock you out from your root account.

Add the following to /etc/pam.d/sudo.

auth       required     /usr/local/lib/security/ forward_pass nullok
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional revoke
session    required

It's important that the path used to define is correct, or else PAM will not be able to find and sudo will log an error.

Dec 21 09:25:34 server sudo: PAM unable to dlopen(/usr/lib64/security/ /usr/lib64/security/ cannot open shared object file: No such file or directory
Dec 21 09:25:34 server sudo: PAM adding faulty module: /usr/lib64/security/

It is also important that the line is placed before the 'auth include system-auth' line within /etc/pam.d/sudo.

Any user who needs to use sudo must now set up their secret key and google-authenticator settings, which live in ~/.google_authenticator, by simply running the google-authenticator binary on the server. They will be shown a QR code that can be scanned into a two-factor authentication mobile app such as Authy or Google Authenticator.

Next time the user uses sudo, they will be asked for their system password and one-time passcode.

Once all your users who use sudo have set up their google-authenticator secret key, you should remove nullok from /etc/pam.d/sudo.
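With nullok removed, the google-authenticator auth line reads:

```
auth       required     /usr/local/lib/security/ forward_pass
```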

Use overlayfs as the storage driver in docker on CentOS 7.1

Ensure you have docker installed.

# yum install -y docker

Set the docker storage driver to overlay by editing /etc/sysconfig/docker-storage and setting the DOCKER_STORAGE_OPTIONS variable.

DOCKER_STORAGE_OPTIONS="--storage-driver=overlay"

Restart docker.

# systemctl restart docker

Confirm that the storage driver is set to overlay and the backing filesystem is extfs.

# docker info

Storage Driver: overlay
 Backing Filesystem: extfs

Install and configure php5-fpm to run under a system user account using nginx as the web server on Ubuntu Trusty 14.04.

This tutorial will set up php5-fpm running under a system account and configure nginx to use the pool.

What's php-fpm? From

PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI
implementation with some additional features useful for sites of any
size, especially busier sites.

First, set up a user which will run php5-fpm.

# adduser --system --shell=/bin/false \
    --home=/var/run/exampleuser \
    --ingroup nogroup \
    exampleuser

Next, install the php5-fpm package.

# apt-get install --yes php5-fpm

Copy the default php-fpm www.conf pool file.

# cp /etc/php5/fpm/pool.d/www.conf \
    /etc/php5/fpm/pool.d/exampleuser.conf

Within the /etc/php5/fpm/pool.d/exampleuser.conf file, change the pool name to match the username.

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[exampleuser]

Change the user and group variables to match the new username and group it belongs to.

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = exampleuser
group = nogroup

Change the socket path to be within the new users home directory.

listen = /var/run/exampleuser/php5-fpm.sock

Restart the php5-fpm service.

# service php5-fpm restart

nginx will need to make use of the php5-fpm socket file. Within the nginx server block, add a location directive which points to the socket.

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    fastcgi_pass    unix:/var/run/exampleuser/php5-fpm.sock;
    fastcgi_index   index.php;
    include         /etc/nginx/fastcgi_params;
}

Restart the nginx service.

# service nginx restart
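For reference, the location block above sits inside a server block. A minimal sketch; the listen port, root path and index settings here are placeholders, not values from this setup:

```
server {
    listen 80;
    root /var/www/example;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass    unix:/var/run/exampleuser/php5-fpm.sock;
        fastcgi_index   index.php;
        include         /etc/nginx/fastcgi_params;
    }
}
```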