Configuring a Tor Hidden Service

Tor hidden services allow various types of services (web server, telnet server, chat server, etc.) to be operated within the Tor network. This allows both users and service operators to conceal their identities and locations. Just about anything that can be run on the clearnet can be run within the Tor darknet.

Setting up a hidden service on Tor is a simple process, and depending on the level of care taken, an operator can keep their service completely anonymous. Depending on your use case, you may not need to anonymize your service at all. For anonymous operation, it is recommended to bind the offered services to localhost and make sure that they do not leak information such as an IP address or hostname in any situation (such as in error messages).
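To verify what your machine actually exposes, you can list listening sockets; this sketch assumes a Linux system with iproute2's ss installed (use whatever socket tool you prefer):

```shell
# list listening TCP sockets; services bound to 127.0.0.1 are local-only,
# while anything on 0.0.0.0 or [::] is reachable from outside
ss -tln
```

Anything offered only through the hidden service should appear bound to 127.0.0.1.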

For this guide, we assume a Debian Stretch (or similar) Linux system with a non-root, sudo user. It is also assumed that the target machine has been set up with some standard security practices such as disallowing root logins over SSH, and basic firewall rules. This Tor hidden service will be masked on the darknet, but if the hosting server is deanonymized, a malicious party could uncover the machine’s actual clearnet IP address and attempt to penetrate it or otherwise disrupt service. Depending on the software running the services you are hiding, you may wish to install it into a virtual machine to limit the damage from code vulnerabilities.

Installing Tor

Before configuring a hidden service, the Tor package must be set up on the system. While Debian does have a Tor package in the standard repositories, we will want to add the official Tor repositories and install from there to get the latest software and be able to verify its authenticity.

First, we will edit the sources list so that Debian will know about the official Tor repositories.

$ sudo nano /etc/apt/sources.list

At the bottom of the file, paste the following two lines and save/exit.

deb http://deb.torproject.org/torproject.org stretch main
deb-src http://deb.torproject.org/torproject.org stretch main

Now back in the console, we will add the Tor Project’s GPG key used to sign the Tor packages. This will allow verification that the software we are installing has not been tampered with.

$ gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
$ gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
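As an optional sanity check, you can compare the fingerprint gpg reports against the published one. Note that gpg prints fingerprints grouped with spaces; the spaced string below is a stand-in for that output, so the sketch shows the normalization step:

```shell
# gpg --fingerprint prints the fingerprint grouped with spaces; strip them
# before comparing against the compact form used in the commands above
printed='A3C4 F0F9 79CA A22C DBA8  F512 EE8C BC9E 886D DD89'  # as gpg displays it
expected='A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89'
compact=$(printf '%s' "$printed" | tr -d ' ')
if [ "$compact" = "$expected" ]; then
  echo "fingerprint matches"
else
  echo "fingerprint MISMATCH - do not install"
fi
```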

Lastly, run an update and install Tor from the repositories we just added.

$ sudo apt-get update
$ sudo apt-get install tor deb.torproject.org-keyring

 

Configuring the Hidden Service

We will be editing the torrc file, so let’s bring it up in our text editor:

$ sudo nano /etc/tor/torrc

Going line by line in this file is tedious, so to minimize confusion, we will simply rewrite the whole file. We will enable logging to a file located at /var/log/tor/notices.log and assume the local machine has a web server running on port 80. Paste the following over the existing contents of your torrc file:

Log notice file /var/log/tor/notices.log

############### This section is just for location-hidden services ###

## Once you have configured a hidden service, you can look at the
## contents of the file ".../hidden_service/hostname" for the address
## to tell people.
##
## HiddenServicePort x y:z says to redirect requests on port x to the
## address y:z.

HiddenServiceDir /var/lib/tor/hs_name_of_my_service/
HiddenServicePort 80 127.0.0.1:80

#HiddenServiceDir /var/lib/tor/other_hidden_service/
#HiddenServicePort 80 127.0.0.1:80
#HiddenServicePort 22 127.0.0.1:22

After saving the file, create a log file and set its ownership, then we are ready to restart Tor:

$ sudo touch /var/log/tor/notices.log
$ sudo chown debian-tor:debian-tor /var/log/tor/notices.log
$ sudo service tor restart

If the restart was successful, the Tor hidden service is active. If not, be sure to check the log file for hints as to the failure:

$ sudo nano /var/log/tor/notices.log

Now that the hidden service is working, Tor has created the hidden service directory we defined in the torrc, /var/lib/tor/hs_name_of_my_service/. There are two files of importance within this directory.

There is a hostname file at /var/lib/tor/hs_name_of_my_service/hostname that contains the service’s .onion address, which is derived from the hidden service’s public key. Users on the Tor network use this address to access your service. Make a note of it after reading it from the file with cat:

$ sudo cat /var/lib/tor/hs_name_of_my_service/hostname
nb2tidpl4j4jnoxr.onion
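Onion addresses of this generation (v2) are 16 base32 characters; a quick format check of whatever you read from the hostname file might look like:

```shell
# a v2 onion address is exactly 16 base32 characters (a-z, 2-7) plus ".onion"
addr='nb2tidpl4j4jnoxr.onion'   # substitute the address from your hostname file
if printf '%s\n' "$addr" | grep -Eq '^[a-z2-7]{16}\.onion$'; then
  echo "address format looks valid"
fi
```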

There is also a private_key file that contains the hidden service’s private key. This private key pairs with the service’s public key. It should not be known or read by anyone or anything except Tor, otherwise someone else will be able to impersonate the hidden service. If you need to move your Tor hidden service for any reason, make sure to back up the hostname and private_key files before restoring them on a new machine.
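A backup can be as simple as a tarball of the two files. The sketch below uses a stand-in directory with dummy contents so it can be run unprivileged; on the real server, point HS_DIR at /var/lib/tor/hs_name_of_my_service and run the tar commands with sudo:

```shell
# demonstrate the backup on a stand-in directory (dummy contents);
# substitute the real hidden service directory on your server
HS_DIR="$(mktemp -d)/hs_name_of_my_service"
mkdir -p "$HS_DIR"
printf 'nb2tidpl4j4jnoxr.onion\n' > "$HS_DIR/hostname"   # stand-in contents
printf 'dummy key material\n' > "$HS_DIR/private_key"    # stand-in contents

# archive the whole directory, then list what was captured
BACKUP="$(mktemp -u).tar.gz"
tar czf "$BACKUP" -C "$(dirname "$HS_DIR")" "$(basename "$HS_DIR")"
tar tzf "$BACKUP"
```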

After starting the hidden service, it may not be available right away. It can take a few minutes before the .onion address resolves on a client machine.

 

Example – Configure A Web Server with Nginx

Let’s use this hidden service to host a website with Nginx.

First, we will install Nginx and create a directory for our HTML files:

$ sudo apt-get install nginx
$ sudo mkdir -p /var/www/hidden_service/

Now, we will create an HTML file to serve, so we need to bring one up in our editor:

$ sudo nano /var/www/hidden_service/index.html

Paste the following basic HTML and save it:

<html><head><title>Hidden Service</title></head><body><h1>It works!</h1></body></html>

Next, we will set the owner of the files we created to www-data for the web server and change the permissions on the /var/www directory.

$ sudo chown -R www-data:www-data /var/www/hidden_service/
$ sudo chmod -R 755 /var/www

We want to make some configuration changes for anonymity. First, let’s edit the default server block:

$ sudo nano /etc/nginx/sites-available/default

Find the block that starts with server { and locate its listen line (it may read #listen 80;). Replace that line with the following so Nginx explicitly listens only on localhost:

listen localhost:80 default_server;

Now find the server_name line in the same block and set the server name explicitly:

server_name _;

Next we need to edit the Nginx configuration file:

$ sudo nano /etc/nginx/nginx.conf

Find the block that starts with http { and set the following options:

server_name_in_redirect off;
server_tokens off;
port_in_redirect off;

The first option will make sure the server name isn’t used in any redirects. The second option removes server information in error pages and headers. The third option will make sure the port number Nginx listens on will not be included when generating a redirect.

Now we need to create a server block so Nginx knows which directory to serve content from when our hidden service is accessed. Using our text editor, we will create a new server block file:

$ sudo nano /etc/nginx/sites-available/hidden_service

In the empty file, paste the following configuration block. Make sure that the server_name field contains your own onion address which you read from the hostname file earlier, not my address, nb2tidpl4j4jnoxr.onion.

server {
    listen 127.0.0.1:80;
    server_name nb2tidpl4j4jnoxr.onion;

    error_log  /var/log/nginx/hidden_service.error.log;
    access_log off;

    location / {
        root  /var/www/hidden_service/;
        index index.html;
    }
}

After saving the file, we need to symlink it to the sites-enabled directory and then restart Nginx:

$ sudo ln -s /etc/nginx/sites-available/hidden_service /etc/nginx/sites-enabled/hidden_service
$ sudo service nginx restart

To test the hidden service, download and install the Tor Browser on any machine and load up your .onion address.

 

Example – Configure A Web Server with Apache

Let’s use this hidden service to host a website with Apache. Note: Apache is often criticized for leaking server information by default, and it takes more effort to secure.

First, we will install Apache and create a directory for our HTML files:

$ sudo apt-get install apache2
$ sudo mkdir -p /var/www/hidden_service/

Now, we will create an HTML file to serve, so we need to bring one up in our editor:

$ sudo nano /var/www/hidden_service/index.html

Paste the following basic HTML and save it:

<html><head><title>Hidden Service</title></head><body><h1>It works!</h1></body></html>

Next, we will set the owner of the files we created to www-data for the web server and change the permissions on the /var/www directory.

$ sudo chown -R www-data:www-data /var/www/hidden_service/
$ sudo chmod -R 755 /var/www

Now, we need to make a few changes to the Apache configuration. Let’s start by setting Apache up to listen only on 127.0.0.1, port 80:

$ sudo nano /etc/apache2/ports.conf

Change the line Listen 80 to Listen 127.0.0.1:80 and save the file.

Now we will access the security configuration file:

$ sudo nano /etc/apache2/conf-enabled/security.conf

Change the line for ServerSignature to ServerSignature Off and the line for ServerTokens to ServerTokens Prod to restrict the information Apache reports about the server.
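After those edits, the two directives should read:

```
ServerSignature Off
ServerTokens Prod
```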

Then, we will make an edit to the main Apache configuration file to override the server name Apache uses:

$ sudo nano /etc/apache2/apache2.conf

At the very bottom of the file, paste the following. Make sure that the ServerName field contains your onion address which you read from the hostname file earlier and not my address, nb2tidpl4j4jnoxr.onion.

ServerName nb2tidpl4j4jnoxr.onion

Next, we will disable Apache’s mod_status module to turn off status information:

$ sudo a2dismod status

Now we need to create a virtual host so Apache knows which directory to serve content from when our hidden service is accessed. Using our text editor, we will create a new virtual host file (the .conf suffix matters, as Debian’s Apache only includes files matching sites-enabled/*.conf):

$ sudo nano /etc/apache2/sites-available/hidden_service.conf

In the empty file, paste the following configuration block. Make sure that the ServerName field contains your own onion address which you read from the hostname file earlier, not my address, nb2tidpl4j4jnoxr.onion.

<VirtualHost *:80>
    ServerName nb2tidpl4j4jnoxr.onion

    DirectoryIndex index.html
    DocumentRoot /var/www/hidden_service/

    CustomLog /dev/null common
</VirtualHost>

After saving the file, we need to symlink it into the sites-enabled directory and then restart Apache:

$ sudo ln -s /etc/apache2/sites-available/hidden_service.conf /etc/apache2/sites-enabled/hidden_service.conf
$ sudo service apache2 restart

To test the hidden service, download and install the Tor Browser on any machine and load up your .onion address.

 

Conclusion

Your hidden service should be up and running, ready to serve Tor users. Now that your service is functioning, it may be a good idea to back up the hostname and private_key files mentioned earlier in the /var/lib/tor/hs_name_of_my_service/ directory.

I would strongly recommend taking a look at riseup.net’s Tor Hidden Services Best Practices guide to learn more about proper setup of your hidden service.

Additionally, subscribe to the tor-onions mailing list for operator news and support!


Configuring and Monitoring a Tor Middle Relay

The Tor network relies upon individuals and organizations to donate relays for user traffic. The more relays within the network, the stronger and faster the network is. Below, we will create a middle relay, which receives traffic and sends it off to another relay. Middle relays never serve as exit points for traffic back out to the clear Internet (a job for an exit relay). Because of this, many see running a middle relay as a safer way of contributing to the Tor network than running an exit relay, which could find an operator at fault if illegal activity or content exits their node.

For this guide, we assume a Debian Stretch (or similar) Linux system with a non-root, sudo user. It is also assumed that the target machine has been set up with some standard security practices such as disallowing root logins over SSH, and basic firewall rules. This Tor relay will be public, and should be secured like any public-facing server.

 

Installing Tor

Before configuring a relay, the Tor package must be set up on the system. While Debian does have a Tor package in the standard repositories, we will want to add the official Tor repositories and install from there to get the latest software and be able to verify its authenticity.

First, we will edit the sources list so that Debian will know about the official Tor repositories.

$ sudo nano /etc/apt/sources.list

At the bottom of the file, paste the following two lines and save/exit.

deb http://deb.torproject.org/torproject.org stretch main
deb-src http://deb.torproject.org/torproject.org stretch main

Now back in the console, we will add the Tor Project’s GPG key used to sign the Tor packages. This will allow verification that the software we are installing has not been tampered with.

$ gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
$ gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

Lastly, run an update and install Tor from the repositories we just added.

$ sudo apt-get update
$ sudo apt-get install tor deb.torproject.org-keyring

 

Keeping Time

It is important that a Tor relay keeps accurate time, so we will change the timezone and set up the ntp client.

First, list the available timezones and find the one that corresponds to the location of the machine:

$ timedatectl list-timezones

Next, we set the timezone to the one for the machine’s location. Amsterdam is used below as an example.

$ sudo timedatectl set-timezone Europe/Amsterdam

Finally, install ntp:

$ sudo apt-get install ntp

You can check your changes using the timedatectl command with no options:

$ timedatectl
      Local time: Sat 2017-12-30 21:49:25 CET
  Universal time: Sat 2017-12-30 20:49:25 UTC
        RTC time: Sat 2017-12-30 20:49:25
       Time zone: Europe/Amsterdam (CET, +0100)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2017-10-29 02:59:59 CEST
                  Sun 2017-10-29 02:00:00 CET
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2018-03-25 01:59:59 CET
                  Sun 2018-03-25 03:00:00 CEST

 

Configuring the Relay

By default, all new relays are set up to be exit nodes. Since we want to create a middle relay, there is some configuration that needs to be done.

We will be editing the torrc file, so let’s bring it up in our text editor:

$ sudo nano /etc/tor/torrc

Going line by line in this file is tedious, so to minimize confusion, I will outline the needed configuration changes and paste a sample torrc file below that you can use with minimal edits.

  • We don’t need a SOCKS proxy, so uncomment the line SOCKSPolicy reject *
  • We want to keep a separate log file, so uncomment the line Log notice file /var/log/tor/notices.log
  • We will be running as a daemon, so uncomment the line RunAsDaemon 1
  • We will be running monitoring via ARM, so uncomment the line ControlPort 9051
  • Relays need an ORPort for incoming connections, so uncomment the line ORPort 9001
  • It is recommended that a relay has an FQDN or at least a subdomain of one. If not, the machine’s IP address can be used. We will uncomment the line Address noname.example.com and use our own address in place of noname.example.com
  • The relay should also have a nickname, so uncomment the line Nickname ididnteditheconfig and use our own nickname in place of ididnteditheconfig
  • Contact information should also be provided, so uncomment the line #ContactInfo Random Person and use our own info in place of Random Person
  • We will be running a directory port, so uncomment the line DirPort 9030
  • The most important option, we don’t want to allow any exits, so uncomment the line ExitPolicy reject *:*
  • Optionally, we may want to limit the bandwidth that Tor uses. To do so, uncomment the lines RelayBandwidthRate 100 KBytes and RelayBandwidthBurst 200 KBytes. These values are defined for one way transport, so note that the actual bandwidth rate above could be 200KB/s total (100KB/s for each input and output). Burst defines a maximum rate, so a burst of 200 KBytes means bandwidth could reach 400KB/s total (combined input and output). Many will likely want their relay to be considered Fast by the network, meaning that the relay’s bandwidth is in the top 7/8ths of all relays. At the time of writing, a rate of 500 KBytes/s seems to be on the low end of achieving this according to a Tor team member.
  • Optionally, we may want to limit the total traffic Tor uses over a period. To do so, uncomment the lines AccountingMax 40 GBytes and AccountingStart month 3 15:00. The AccountingMax value is defined for one way transport, so note that setting 40 GBytes could use 80 GBytes total (40 GBytes for each input and output). If the machine is on a provider that limits monthly bandwidth, it is a good idea to adjust this value to align with the provider’s data cap and adjust the AccountingStart values to reset when the data cap does. NOTE: If you do set accounting, the relay will not advertise a directory port and you will not get directory connections. Relays with directories are expected to have a lot of bandwidth and limiting it will result in other nodes not making directory connections.
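The doubling described in the two notes above is worth making explicit; the figures below use the example values from the bullets:

```shell
# both limits are per-direction, so the combined worst case is double
rate=100            # KBytes/s, from "RelayBandwidthRate 100 KBytes"
echo "combined in+out ceiling: $((rate * 2)) KB/s"

accounting_max=40   # GBytes, from "AccountingMax 40 GBytes"
echo "worst-case traffic per period: $((accounting_max * 2)) GB"
```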

Now, here is a full sample torrc file for the tor middle relay:

## Configuration file for a typical Tor user

## Tor opens a SOCKS proxy on port 9050 by default -- even if you don't
## configure one below. Set "SOCKSPort 0" if you plan to run Tor only
## as a relay, and not make any local application connections yourself.
#SOCKSPort 9050 # Default: Bind to localhost:9050 for local connections.
#SOCKSPort 192.168.0.1:9100 # Bind to this address:port too.

## Entry policies to allow/deny SOCKS requests based on IP address.
## First entry that matches wins. If no SOCKSPolicy is set, we accept
## all (and only) requests that reach a SOCKSPort. Untrusted users who
## can access your SOCKSPort may be able to learn about the connections
## you make.
#SOCKSPolicy accept 192.168.0.0/16
#SOCKSPolicy accept6 FC00::/7
SOCKSPolicy reject *

## Logs go to stdout at level "notice" unless redirected by something
## else, like one of the below lines. You can have as many Log lines as
## you want.
##
## We advise using "notice" in most cases, since anything more verbose
## may provide sensitive information to an attacker who obtains the logs.
##
## Send all messages of level 'notice' or higher to /var/log/tor/notices.log
Log notice file /var/log/tor/notices.log
## Send every possible message to /var/log/tor/debug.log
#Log debug file /var/log/tor/debug.log
## Use the system log instead of Tor's logfiles
#Log notice syslog
## To send all messages to stderr:
#Log debug stderr

## Uncomment this to start the process in the background... or use
## --runasdaemon 1 on the command line. This is ignored on Windows;
## see the FAQ entry if you want Tor to run as an NT service.
RunAsDaemon 1

## The directory for keeping all the keys/etc. By default, we store
## things in $HOME/.tor on Unix, and in Application Data\tor on Windows.
#DataDirectory /var/lib/tor

## The port on which Tor will listen for local connections from Tor
## controller applications, as documented in control-spec.txt.
ControlPort 9051
## If you enable the controlport, be sure to enable one of these
## authentication methods, to prevent attackers from accessing it.
#HashedControlPassword 16:872860B76453A77D60CA2BB8C1A7042072093276A3D701AD684053EC4C
#CookieAuthentication 1

############### This section is just for location-hidden services ###

## Once you have configured a hidden service, you can look at the
## contents of the file ".../hidden_service/hostname" for the address
## to tell people.
##
## HiddenServicePort x y:z says to redirect requests on port x to the
## address y:z.

#HiddenServiceDir /var/lib/tor/hidden_service/
#HiddenServicePort 80 127.0.0.1:80

#HiddenServiceDir /var/lib/tor/other_hidden_service/
#HiddenServicePort 80 127.0.0.1:80
#HiddenServicePort 22 127.0.0.1:22

################ This section is just for relays #####################
#
## See https://www.torproject.org/docs/tor-doc-relay for details.

## Required: what port to advertise for incoming Tor connections.
ORPort 9001
## If you want to listen on a port other than the one advertised in
## ORPort (e.g. to advertise 443 but bind to 9090), you can do it as
## follows.  You'll need to do ipchains or other port forwarding
## yourself to make this work.
#ORPort 443 NoListen
#ORPort 127.0.0.1:9090 NoAdvertise

## The IP address or full DNS name for incoming connections to your
## relay. Leave commented out and Tor will guess.
Address tor.peer3.famicoman.com

## If you have multiple network interfaces, you can specify one for
## outgoing traffic to use.
# OutboundBindAddress 10.0.0.5

## A handle for your relay, so people don't have to refer to it by key.
Nickname peer3famicoman

## Define these to limit how much relayed traffic you will allow. Your
## own traffic is still unthrottled. Note that RelayBandwidthRate must
## be at least 20 kilobytes per second.
## Note that units for these config options are bytes (per second), not
## bits (per second), and that prefixes are binary prefixes, i.e. 2^10,
## 2^20, etc.
RelayBandwidthRate 2048 KBytes  # Throttle traffic to 2048KB/s (16384Kbps)
RelayBandwidthBurst 3072 KBytes # But allow bursts up to 3072KB/s (24576Kbps)

## Use these to restrict the maximum traffic per day, week, or month.
## Note that this threshold applies separately to sent and received bytes,
## not to their sum: setting "40 GB" may allow up to 80 GB total before
## hibernating.
##
## Set a maximum of 400 gigabytes each way per period.
#AccountingMax 400 GBytes
## Each period starts daily at midnight (AccountingMax is per day)
#AccountingStart day 00:00
## Each period starts on the 24th of the month at 15:00 (AccountingMax
## is per month)
#AccountingStart month 24 15:00

## Administrative contact information for this relay or bridge. This line
## can be used to contact you if your relay or bridge is misconfigured or
## something else goes wrong. Note that we archive and publish all
## descriptors containing these lines and that Google indexes them, so
## spammers might also collect them. You may want to obscure the fact that
## it's an email address and/or generate a new address for this purpose.
#ContactInfo Random Person 
## You might also include your PGP or GPG fingerprint if you have one:
#ContactInfo 0xFFFFFFFF Random Person 
ContactInfo famicoman[at]gmail[dot]com - 1DVLNHpcoAso6rvisCnVQbCFN8dRir1GVQ

## Uncomment this to mirror directory information for others. Please do
## if you have enough bandwidth.
DirPort 9030 # what port to advertise for directory connections
## If you want to listen on a port other than the one advertised in
## DirPort (e.g. to advertise 80 but bind to 9091), you can do it as
## follows.  below too. You'll need to do ipchains or other port
## forwarding yourself to make this work.
#DirPort 80 NoListen
#DirPort 127.0.0.1:9091 NoAdvertise
## Uncomment to return an arbitrary blob of html on your DirPort. Now you
## can explain what Tor is if anybody wonders why your IP address is
## contacting them. See contrib/tor-exit-notice.html in Tor's source
## distribution for a sample.
#DirPortFrontPage /etc/tor/tor-exit-notice.html

## Uncomment this if you run more than one Tor relay, and add the identity
## key fingerprint of each Tor relay you control, even if they're on
## different networks. You declare it here so Tor clients can avoid
## using more than one of your relays in a single circuit. See
## https://www.torproject.org/docs/faq#MultipleRelays
## However, you should never include a bridge's fingerprint here, as it would
## break its concealability and potentially reveal its IP/TCP address.
#MyFamily $keyid,$keyid,...

## A comma-separated list of exit policies. They're considered first
## to last, and the first match wins.
##
## If you want to allow the same ports on IPv4 and IPv6, write your rules
## using accept/reject *. If you want to allow different ports on IPv4 and
## IPv6, write your IPv6 rules using accept6/reject6 *6, and your IPv4 rules
## using accept/reject *4.
##
## If you want to _replace_ the default exit policy, end this with either a
## reject *:* or an accept *:*. Otherwise, you're _augmenting_ (prepending to)
## the default exit policy. Leave commented to just use the default, which is
## described in the man page or at
## https://www.torproject.org/documentation.html
##
## Look at https://www.torproject.org/faq-abuse.html#TypicalAbuses
## for issues you might encounter if you use the default exit policy.
##
## If certain IPs and ports are blocked externally, e.g. by your firewall,
## you should update your exit policy to reflect this -- otherwise Tor
## users will be told that those destinations are down.
##
## For security, by default Tor rejects connections to private (local)
## networks, including to the configured primary public IPv4 and IPv6 addresses,
## and any public IPv4 and IPv6 addresses on any interface on the relay.
## See the man page entry for ExitPolicyRejectPrivate if you want to allow
## "exit enclaving".
##
#ExitPolicy accept *:6660-6667,reject *:* # allow irc ports on IPv4 and IPv6 but no more
#ExitPolicy accept *:119 # accept nntp ports on IPv4 and IPv6 as well as default exit policy
#ExitPolicy accept *4:119 # accept nntp ports on IPv4 only as well as default exit policy
#ExitPolicy accept6 *6:119 # accept nntp ports on IPv6 only as well as default exit policy
ExitPolicy reject *:* # no exits allowed

## Bridge relays (or "bridges") are Tor relays that aren't listed in the
## main directory. Since there is no complete public list of them, even an
## ISP that filters connections to all the known Tor relays probably
## won't be able to block all the bridges. Also, websites won't treat you
## differently because they won't know you're running Tor. If you can
## be a real relay, please do; but if not, be a bridge!
#BridgeRelay 1
## By default, Tor will advertise your bridge to users through various
## mechanisms like https://bridges.torproject.org/. If you want to run
## a private bridge, for example because you'll give out your bridge
## address manually to your friends, uncomment this line:
#PublishServerDescriptor 0

After saving the file, we are ready to restart Tor:

$ sudo service tor restart

Now, we need to make sure everything worked properly and that the relay is functioning as expected. To do so, we will check the logs:

$ sudo nano /var/log/tor/notices.log

If everything worked as expected, the following lines should appear near the bottom of the log file:

[notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.
[notice] Tor has successfully opened a circuit. Looks like client functionality is working.
[notice] Self-testing indicates your DirPort is reachable from the outside. Excellent. Publishing server descriptor.

If you see anything different, make sure your torrc file is configured properly and that your firewall is set to allow connections to the ports you set for the ORPort and DirPort (by default, 9001 and 9030 respectively).
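Rather than trusting memory, you can pull the advertised ports straight from the config when writing firewall rules. A stand-in file is used below so the sketch is self-contained; point awk at /etc/tor/torrc on the real system:

```shell
# stand-in torrc with the two port lines from this guide's sample config
cat > /tmp/torrc.sample <<'EOF'
ORPort 9001
DirPort 9030
EOF

# print the directive name and port for each advertised port
awk '/^(ORPort|DirPort) / { print $1, $2 }' /tmp/torrc.sample
```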

 

Monitoring Your Relay with Nyx

To monitor Tor relays, many people use a popular tool called Nyx which provides graphical information about the activity and status of the node. Nyx utilizes the ControlPort we set earlier to connect to our relay. This port should not need to be opened in the firewall if Nyx will be running on the same machine, and should be password-protected otherwise.

First, Nyx needs to be installed:

$ sudo apt-get install python-setuptools
$ sudo easy_install pip
$ sudo pip install nyx

Then, Nyx can be run:

$ nyx

The result is a nice representation of the relay’s traffic, utilization, flags, and general information.

 

Monitoring Your Relay with ARM (Deprecated)

While ARM is no longer maintained, it does still function.

To monitor Tor relays, many people use a popular tool called ARM which provides graphical information about the activity and status of the node. ARM utilizes the ControlPort we set earlier to connect to our relay. This port should not need to be opened in the firewall if ARM will be running on the same machine, and should be password-protected otherwise.

First, ARM needs to be installed:

$ sudo apt-get install tor-arm

Then, ARM can be run:

$ arm

The result is a nice representation of the relay’s traffic, utilization, flags, and general information.

 

Conclusion

While my relay picked up traffic quickly, it took a long time to be able to fully utilize the bandwidth rates that I gave it. A new Tor middle relay goes through many stages before it can be deemed stable and reliable by the network. I highly recommend reading The lifecycle of a new relay to understand the whole process and know why you may not see traffic right away.

You may notice in my screenshots of Nyx and ARM above, my relay has procured several flags. If you are trying to obtain certain flags for your relay (which sort of act like markers of the relay’s capabilities), I recommend reading this StackExchange post on the subject.

If you want to see some statistics for your relay or share them with others, consider checking out the Atlas and Globe projects. These provide information on a relay by its fingerprint, though you can also search by the relay’s nickname. Check out my relay on Atlas and my relay on Globe for examples.

Now that your relay is functioning, you may wish to back up your torrc, back up your relay’s private key (/var/lib/tor/keys/secret_id_key), read and implement operational security practices, and join the tor-relays mailing list.

 


Running & Using A Finger Daemon

The finger application was written in the 1970s to allow users on a network to retrieve information about other users. Back before Twitter and other micro-blogging platforms, someone could use the finger command to retrieve public contact information, project notes, GPG keys, status reporting, etc. from a user on a local or remote machine.

Finger has mostly faded into obscurity due to many organizations viewing the availability of public contact information as a potential security hole. With great ease, attackers could learn a target’s full name, phone number, department, title, etc. Still, many embraced the reach that finger could provide. Notably, John Carmack of id Software maintained detailed notes outlining his work in game development.

These days, finger is usually found only on legacy systems or for novelty purposes due to much of its functionality being replaced with the more-usable HTTP.
 
Installing finger & fingerd

This guide assumes we are running a Debian-based operating system with a non-root, sudo user. To allow finger requests from other machines, make sure the server has port 79 open and available.

The first thing we will need to do is install the finger client, finger daemon, and inet daemon:

$ sudo apt-get install finger fingerd inetutils-inetd

The inet daemon is necessary to provide network access to the finger daemon. inetd will listen for requests from clients on port 79 (designated for finger) and spawn a process to run the finger daemon as needed. The finger daemon itself cannot listen for these connections and must instead rely on inetd to act as the translator between the sockets and standard input/output.
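That contract is easy to sketch with a stand-in for in.fingerd (a hypothetical shell function here, since the real daemon consults the user database): it reads one request line from stdin and answers on stdout, which is exactly what inetd wires to the network socket.

```shell
# stand-in for in.fingerd: read one request line from stdin, answer on stdout
fake_fingerd() {
  IFS= read -r user
  user=$(printf '%s' "$user" | tr -d '\r')  # finger requests end in CRLF
  printf 'Login: %s\n' "$user"
}

# simulate what inetd delivers when a client queries port 79
printf 'alice\r\n' | fake_fingerd
```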

To ensure that we have IPv6 compatibility (as well as maintain IPv4 compatibility), we will edit the inetd.conf configuration file:

$ sudo nano /etc/inetd.conf

Find the section that is labeled INFO, and comment out the line under it defining the finger service:

#finger    stream    tcp    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd

Now below it we will add two lines that define the service for IPv4 and IPv6 explicitly:

finger    stream    tcp4    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd
finger    stream    tcp6    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd

Then we will restart inetd to run the changes:

sudo /etc/init.d/inetutils-inetd restart

Now we can use the finger command against our machine:

finger @localhost

 
User Configuration

Each user has some information that finger will display, such as real name, login, home directory, shell, home phone, office phone, and office room. Many of these fields are probably unset for the current user account, but most can easily be updated with new information.

The chfn utility is built specifically to change information that is retrieved by the finger commands. We can run it interactively by invoking it:

chfn

If we run through this once, we may not be able to edit our full name or wipe out the contents of certain fields. Thankfully, chfn takes several flags to modify these fields individually (and with empty strings accepted!):

$ chfn -f "full name"
$ chfn -o "office room number"
$ chfn -p "office phone number"
$ chfn -h "home phone number"
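Under the hood, chfn is just editing the comma-separated GECOS field of our /etc/passwd entry, which finger splits back out. A rough sketch of that unpacking, with made-up values:

```shell
# GECOS subfields, in order: full name, office/room, office phone, home phone.
gecos='Mike Dank,Room 101,555-0100,555-0199'
IFS=',' read -r name office office_phone home_phone <<EOF
$gecos
EOF
echo "$name / $office / $office_phone / $home_phone"
```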

Now that our information is set, we can start creating files that will be served by finger.

The first file will be the .plan file. This is typically used to store updates on projects, but can be used for pretty much anything such as schedules, favorite quotes, or additional contact information.

nano ~/.plan

Next, we can create a .project file. This file is traditionally used to describe a current project, but can house any content provided it displays on a single line.

nano ~/.project

Next, if we have a GPG key, it can also be included, via a .pgpkey file (note: not the ~/.gnupg directory, which GnuPG uses for its own keyrings).

gpg --armor --output ~/.pgpkey --export "my name"

Depending on our machine’s configuration, we can also set up mail forwarding, which will be shown when our user account is queried, via a .forward file.

echo my@other.email.com > ~/.forward

Now that all the files are created, we need to change the permissions on them so that finger can read them. This command allows others to read our new files (finger also needs the home directory itself to be world-executable, which is the Debian default):

chmod o+r ~/.plan ~/.project ~/.pgpkey ~/.forward

Afterwards, anyone with finger should be able to query the account provided the host is reachable and the port is exposed:

$ finger famicoman@peer0
Login: famicoman                        Name: mike dank
Directory: /home/famicoman              Shell: /bin/bash
Office: #phillymesh, famicoman@gmail    Home Phone: @famicoman
On since Wed Mar  1 18:28 (UTC) on pts/0 from ijk.xyz
   5 seconds idle
No mail.
PGP key:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBFhQteQBEADOQdY/JyNsRBcqNlHJ3L7XeMWWqG+vlGYjF5sOsKkWDrgRrAhE
gJthGdPSYLIn5/kRst5PkDGjZHFq1k4VAaUlMslCbLgj0xMzUWmhGRKtFjrnQzFi
UVaW/GcW682b5wKkEbSpwRrLHJ19cwYiQHRA6dahiCkWkdh7MluHKTwU1kaUrs3E
3satrSAlOJHH2bg5mDuQPTov/q6hot2pfq8jseQuwVflssqOt4tx3o0tcwbKJwQs
8qU3cVkfg+gzzogM5iMmKAjhFt9Ta7E2kp+iR8gOkH7CK2tu+WIpdpSYuIOe2nEa
AdYSGJRmIZxqwGPwZu893OiYTiLF2dGEWj4cSmZFJ9BYxw9b7dMePDs3l9T1DyJN
FF6JyqtiTpSOfw4+9oNL/+8kmRFMtQBGDH5Dqn4Bg+EYUUGWh3BQb1UGRGNRbls1
SYw4unPsMAaGd0tqafyEOE7kHcQwGMWyI4eFi0bkBRTCvjyqTqTImdr4+xRKn4rW
vS+O+gSSWnrKW67TF0vFuPKV4w4sdPhldcYZjiPNI1m6nmnih4756LW+W5fCKMyN
RN4yPmaF6awEM3fPf3BWhxDmsBvYLqLlE9b6DQ6DmIsC6x5S/jBFr/W0cgZKuMBR
wIHGIpCTgkVsvjTxTeDnZQzEuaZFOpHrYBYG0+JccE5ZZ2/aEupB1tLWzwARAQAB
tCtNaWtlIERhbmsgKEZhbWljb21hbikgPGZhbWljb21hbkBnbWFpbC5jb20+iQI+
BBMBAgAoBQJYULXkAhsDBQkJZgGABgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAK
CRAWGa5NfPKo9xBPEADGK7ol4nU1cDpYc6XPYb4w4s8G/9Ht3UGZvy4ClB7TvntR
HuixWISQyElK4pDntrpXfLmDgqNQqUtjev6w0uEEc7MQW0WBYPlJ9rmVuDJtPjOP
hr9wUnwrd6KypHGw1Y6qukf/w8gCDZyZ8BVOm3r6XT6VlpuVVBP16ax6Mh7sTf4i
/D8D1JH89zApfplqiiA+OAvunbm19jrczby5ILhevbwfpP+ob7FqevCQi/ppGncg
s2LZceIYPh09OzakBiZkIwzMsuWHYxrenUmg8kuVaVyaGXdCc+KZr7c4oGioLxOL
8p8PunjL+i/uGtvZ3tQEvHjB9r1Ghu1t5MUQC4xTvgLnvLOQA0gmknwRQ6nSWrQf
yDJPnaJcBKlfnCR7eKtotsTobUCWDMC9sEvLjLYMgxtEbu46/J57oDQ1LniejI9X
rTUN8cnRLpOQUy0eTBTtUWYqdqO9fAjvMIsnWR4IcMlTJazghygQ+zENvGlEfey2
NL4nC2Yus3EAeCEJC52vccSsf3b7HT/4GemNOVjh72kZ1FM6HL+UNaU7JLppNQk6
mFKC3/wIKxKBjm/vR21Efl+f2iwUAiLjrugaY6g4BXPX3p6iCHftEW7gA6s4UC0A
1HYScq9Duxv+AQpe/mfVA2SBrD3OLTknW2Z5EZSTHqyzRQtHviXTRkdtQscF7rkC
DQRYULXkARAAugwI5dpLpIYI4KZcHsEwyYqUL35ByGCuqklYGOSMkkX0/WFH2ugv
Vs8fqgrn/koXjKBPpxdElfTAGbD2MSTvAUzwMgvMaLZUxY2Qh1RipkNXvSAO+W6W
nJKyBvasboK8l61yta/RrPvUr+equxtBawD3ja9DPzfTYuVCewR6ztcdvqAho5D1
Ds5HxO6aLPsMW8Fj8kf+Ae5eypuNBAN0ivpkDfQyaungh2EjrVpJzWJ8ZqYah/cK
1R55rPhq/JtjcO8g2nnsi+L5EuMma9+50lVPHlHjO9Y0xaQMq2/rLJcDu8UbymX7
LSoxzixiLYtig3GAB+1XkIqMwaEkFF1zTlAZ7drAfhH1AJ0L7SQBur5J1EM3iRYO
XmelyxJPwKgL4K6U4NnnrVpf71GyBktXyuEiOWDFjINs2OzwL4z/l1oWuJuFW/vO
C/yM5Ed7yzLXm0exAnW8Y0u1hwbhRy31zIYbLeB+PuiAqrnvSr1xhAWalBM3dqF+
VIsuKlcoFOmX74/OCEFQ/cqrrkhQ/PrkZqQZCyjyUhkuIqasMZGhLakAIO226E1r
IsUX0jMliJ/A89ZyDmoZyyvRjyydCkOS5HXahFLCufPC8FgcAL6VcpVF2sHBmWcj
vOlfr7OXHzKvKVNy8qymzyfRCPHoYQvxW9UAhru/iIqKOTIDo3OwJ9EAEQEAAYkC
JQQYAQIADwUCWFC15AIbDAUJCWYBgAAKCRAWGa5NfPKo91+XD/0a+gcsXpKo2gy+
oQC/osyJhesx2CGzZqmSB3fpyq9D+jnsCzt/Bh/EtR+2sUWxokIVc5dLzyr0icNl
c0iJBO6It662Q9FnNemiGgv2PLYbjjDC/CP82QWoWrSzPpDKu0DgrF6+MPQgRleT
Z8g0+nLHYMgCTAPfwaaYHLvLLaOt0Ju7L2kt9TSKn2aU2NJReVC5mm3Jxg6Bz3Ae
iQ6iigKI7R/huVDVzBuJQNiToRQswbb/PidYdwyiI3GJC6m+q8HEWAl+cGahqUVg
IIDlmj6SeIsP0r+SGygX0PRWyW/NmQPWamG5e4TDL0pZpu1CzGqfN/A2KLXVO6ss
X2l2HWadBQgd0goFNX7PK210I5B8SfiJM5+cLgChcT9g3mKil8XUkKTSE+c0Q9e5
1AkUrUS69KZeqqtJOB350YP9ZGY7TbOXjXp0Y0/fo3/X+0sZzmP8jIyozLAecM0e
fPrIeA2mego6nWaSRp5FH8KlvHpUvFcKNV+SVSbrbzmSVSpgKmQRp2kd+tyDOPrt
2tXIoWMYdVtlluSYqR2lPv7WFyxCPX8DmxK6fYeVoqBf+7g4GZPdcSviDBlJEJfc
HAZZKsSZzgMugwqLidQ+W53eIDyIOVw0tvcHDJ1S5mpWqvROf7gfNXXFvLjECACN
wnqdjeFGPLlP5Q6tVPvp8j7prVlvZQ==
=xm3N
-----END PGP PUBLIC KEY BLOCK-----
Project:
Philly Mesh - http://mesh.philly2600.net - #phillymesh:tomesh.net
Plan:
%=============================================%
==2017-01-26===================================
%=============================================%
+ Installed fingerd

* Configuring SILC network
* Documentation for fingerd and silcd

By default, finger can display login and contact information for all accounts on a machine. Luckily, accounts can be individually configured so that finger will ignore their existence if there is a .nofinger file in their home directories:

sudo touch /home/someotheraccount/.nofinger && chmod o+rx /home/someotheraccount/.nofinger
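To opt every existing account out in one pass, a loop along these lines works. On a real box HOMES would be /home and the loop would run as root; the sketch defaults to a scratch directory with hypothetical accounts so it is safe to try anywhere:

```shell
# Create a .nofinger marker in every home directory under $HOMES.
HOMES=${HOMES:-$(mktemp -d)}
mkdir -p "$HOMES/alice" "$HOMES/bob"   # hypothetical accounts for the demo
for d in "$HOMES"/*/; do
  touch "${d}.nofinger"
done
ls -A "$HOMES/alice"
```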

 
Conclusion

You should now have finger and fingerd installed and configured on your server for each user to make use of. Keep in mind that the information you enter here will be public (provided the server is), and people around the world may be able to glean your contact information or even last login time via the finger command.
 

The Best of 2016

See the 2015 post here!

Here is my second installment of the best things I’ve found, learned, read, etc. These things are listed in no particular order, and may not necessarily be new.

This annual “Best Of” series is inspired by @fogus and his blog, Send More Paramedics.

Favorite Blog Posts Read

Articles I’ve Written for Other Publications

I’ve continued to write for a few different outlets, and still find it a lot of fun. Here is the total list for 2016.

Favorite Technical Books Read

I haven’t read as much this year as previously.

  • The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary – Really cool book about early community software development practices (at least that’s what I got out of it). Also covers some interesting history on the start of time-sharing systems and move to open-source platforms.
  • Computer Lib – An absolute classic, the original how-to book for a new computer user, written by Ted Nelson. I managed to track down a copy for a *reasonable* price and read the Computer Lib portion. Still need to get through Dream Machines.

Favorite Non-Technical Books Read

Number of Books Read

5.5

Favorite Music Discovered

Favorite Television Shows

Black Mirror (2011), Game of Thrones (2011), Westworld (2016)

Programming Languages Used for Work/Personal

Java, JavaScript, Python, Perl, Objective-C.

Programming Languages I Want To Use Next Year

  • Common Lisp – A “generalized” Lisp dialect.
  • Clojure – A Lisp dialect that runs on the Java Virtual Machine.
  • Go – Really interested to see how this scales with concurrent network programming.
  • Crystal – Speedy like Go, pretty syntax.

Still Need to Read

Dream Machines, Literary Machines, Design Patterns, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10

Life Events of 2016

  • Got married.
  • Became a homeowner.

Life Changing Technologies Discovered

  • Amazon Echo – Not revolutionary, but has a lot of potential to change the way people interact with computers more so than Siri or Google Now. The fact that I can keep this appliance around and work with it hands free gives me a taste of how we may interact with the majority of our devices within the next decade.
  • IPFS – A distributed peer-to-peer hypermedia protocol. May one day replace torrents, but for now it is fun to play with.
  • Matrix – A distributed communication platform, works really well as an IRC bridge or replacement. Really interested to see where it will go. Anyone can set up a federated homeserver and join the network.

Favorite Subreddits

/r/cyberpunk, /r/sysadmin, /r/darknetplan

Completed in 2016

Plans for 2017

  • Write for stuff I’ve written for already (NODE, Lunchmeat, Exolymph, 2600)
  • Write for new stuff (Neon Dystopia, Active Wirehead, ???, [your project here])
  • Set up a public OpenNIC tier 2 server.
  • Participate in more public server projects (ntp pool, dn42, etc.)
  • Continue work for Philly Mesh.
  • Do some FPGA projects to get more in-depth with hardware.
  • Organization, organization, organization!
  • Documentation.
  • Reboot Raunchy Taco IRC.

See you in 2017!

 

Automating Site Backups with Amazon S3 and PHP

This article was originally written for and published at TechOats on June 24th, 2015. It has been posted here for safekeeping.


I host quite a few websites. Not a lot, but enough that the thought of manually backing them up at any regular interval fills me with dread. If you’re going to do something more than three times, it is worth the effort of scripting it. A while back I got a free trial of Amazon’s Web Services and decided to give S3 a try. Amazon S3 (standing for Simple Storage Service) allows users to store data and pay only for the space used, as opposed to a flat rate for an arbitrary amount of disk space. S3 is also scalable; you never have to worry about running out of a storage allotment, since you get more space automatically.

S3 also has a web services interface, making it an ideal tool for system administrators who want to set it and forget it in an environment they are already comfortable with. As a Linux user, I found there was already a myriad of tools for integrating with S3, and I was able to find one to aid me with my simple backup automation.

First things first, I run my web stack on a CentOS installation. Different Linux distributions may have slightly different utilities (such as package managers), so these instructions may differ on your system. If you see anything along the way that isn’t exactly how you have things set up, take the time and research how to adapt the processes I have outlined.

In Amazon S3, before you back up anything, you need to create a bucket. A bucket is simply a container that you use to store data objects within S3. After logging into the Amazon Web Services Console, you can configure it using the S3 panel and create a new bucket using the button provided. Buckets can have different price points, naming conventions, or physical locations around the world. It is best to read the documentation provided through Amazon to figure out what works best for you, and then create your bucket. For our purposes, any bucket you can create is treated the same and shouldn’t cause any problems depending on what configuration you wish to proceed with.

After I created my bucket, I stumbled across a tool called s3cmd which allows me to interface directly with my bucket within S3.

To install s3cmd, it was as easy as bringing up my console and entering:

sudo yum install s3cmd

The application will install, easy as that.

Now, we need a secret key and an access key from AWS. To get this, visit https://console.aws.amazon.com/iam/home#security_credential and click the plus icon next to Access Keys (Access Key ID and Secret Access Key). Now, you can click the button that states Create New Access Key to generate your keys. They should display in a pop-up on the page. Leave this pop-up open for the time being.

Back to your console, we need to edit s3cmd’s configuration file using your text editor of choice, located in your user’s home directory:

nano ~/.s3cfg

The file you are editing (.s3cfg) needs both the access key and the secret key from that pop-up you saw earlier on the AWS site. Edit the lines beginning with:

access_key = XXXXXXXXXXXX
secret_key = XXXXXXXXXXXX

Replacing each string of “XXXXXXXXXXXX” with your respective access and secret keys from earlier. Then, save the file (CTRL+X in nano, if you are using it).

Now we are ready to write the script to do the backups. For the sake of playing with different languages, I chose to write my script in PHP. You could accomplish the same behavior using Python, Bash, Perl, or other languages, though the syntax will differ substantially. First, our script needs a home, so I created a backup directory within my home directory to house the script and any local backup files I create. Then, I changed into that directory and started editing my script using the commands below:

mkdir backup
cd backup/
nano backup.php

Now, we’re going to add some code to our script. I’ll show an example for backing up one site, though you can easily duplicate and modify the code for multiple site backups. Let’s take things a few lines at a time. The first line starts the file. Anything after <?php is recognized as PHP code. The second line sets our time zone. You should use the time zone of your server’s location. It’ll help us in the next few steps.

<?php
date_default_timezone_set('America/New_York');

So now we dump our site’s database by executing the command mysqldump through PHP. If you don’t run MySQL, you’ll have to modify this line to use your database solution. Replace the username, password, and database name on this line as well. This will allow you to successfully back up the database and timestamp it for reference. The following line will archive and compress your database dump using gzip compression. Feel free to use your favorite compression in place of gzip. The last line will delete the original .sql file using PHP’s unlink, since we only need the compressed one.

exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv("HOME")."/backup/sitex.com-".date('Y-m-d').".sql");

The next line will archive and gzip your site’s web directory. Make sure you check the directory path for your site, you need to know where the site lives on your server.

exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");

Now, an optional line. I didn’t want to keep any web directory backups older than three months. Run monthly, this deletes the web directory archive created exactly three months ago, keeping a rolling three-month window. You can also duplicate and modify this line to remove the database archives, but mine don’t take up too much space, so I keep them around for easy access.

@unlink(getenv("HOME")."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
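If you want to sanity-check PHP’s date arithmetic from the shell, GNU date’s relative syntax computes the same date stamp (month-end edge cases aside):

```shell
# The filename stamp PHP computes for the three-month-old backup:
date -d '3 months ago' +%Y-%m-%d
```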

Now the fun part. These commands will push the backups of your database and web directory to your S3 bucket. Be sure to replace U62 with your bucket name.

exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");

Finally, end the file, closing that initial <?php tag.

?>

Here it is all put together (in only ten lines!):

<?php
date_default_timezone_set('America/New_York');
exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv("HOME")."/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");
@unlink(getenv("HOME")."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");
?>

Okay, now our script is finalized. You should now save it and run it with the command below in your console to test it out!

php backup.php

Provided you edited all the paths and values properly, your script should push the two files to S3! Give yourself a pat on the back, but don’t celebrate just yet. We don’t want to have to run this on demand every time we want a backup. Luckily we can automate the backup process. Back in your console, run the following command:

crontab -e

This will load up your crontab, allowing you to add jobs to cron, a time-based scheduler. The syntax of cron commands is outside the scope of this article, but the information is abundant online. You can add the line below to your crontab (be sure to edit the path to your script) and save it so the backup will run on the first of every month.

0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php
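For reference, the five fields before the command are minute, hour, day of month, month, and day of week, so the entry breaks down as:

```shell
# fields:   minute hour day-of-month month day-of-week  command
# meaning:  0      0    1            *     *            run at 00:00 on the 1st
0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php
```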

… 

 

Mining Bitcoin for Fun and (Basically No) Profit, Part 2: The Project

If you have not yet, please read the first article in this series: Mining Bitcoin for Fun and (Basically No) Profit, Part 1: Introduction

I have a few Raspberry Pis around my house that I like to play with – four in total. Prior to this idea of a Bitcoin project, I had one running as a media center and another operating as a PBX. Of the remaining two, one was an early model B with 256MB of RAM while the other was the shiny new revision sporting 512MB. I wanted to save the revised model for the possibility of a MAME project, so I decided to put the other, older one to work. It would help me on my quest to mine bitcoins.

Raspberry Pi Model B


But what is mining exactly you may ask? Bitcoin works on a system of verified transactions achieved through a distributed consensus (a mouthful I know). Every transaction is kept as a record on the Bitcoin block chain. Mining is more or less the process of verifying a “block” of these transactions and appending them to the chain. These blocks have to fit strict cryptographic rules before they can be appended, else blocks could be modified and invalidated. When someone properly generates one of these blocks, the system pays them in a certain amount of bitcoins (currently 25). This process repeats every ten minutes.
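Those “strict cryptographic rules” boil down to finding a nonce that makes the block’s hash fall below a difficulty target. Here is a toy version of that search (real Bitcoin uses double SHA-256 over the block header and a vastly harder target; the block data here is a placeholder):

```shell
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with "00".
block_data='deadbeef'
nonce=0
while :; do
  hash=$(printf '%s%d' "$block_data" "$nonce" | sha256sum | cut -c1-64)
  case $hash in
    00*) break ;;   # "difficulty" met: digest begins with two zero hex digits
  esac
  nonce=$((nonce+1))
done
echo "nonce=$nonce hash=$hash"
```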

I knew that at this stage in the mining game I had to go with an ASIC setup, and I knew I wanted to run it off of my Raspberry Pi. Simple enough. The Raspberry Pi is a fantastic platform for this considering its price, power consumption, and horsepower. For mining hardware, I decided to buy the cheapest ASIC miners I could get my hands on. I found the ASICminer USB Block Eruptor Sapphire for the low price of $45 on Amazon. They cost more money on eBay, and I couldn’t buy them with Bitcoin from any sellers because I didn’t have any (and didn’t want to bother with exchanges), so this seemed like the way to go. The Block Eruptor could run at ~330MHash/s, which is pretty hefty compared to GPU mining and at a fraction of the price. It is also pretty low power, using only 2.5 watts.

ASICminer USB Block Eruptor


So I figured that I would get one of those, but also devised a more complete and formal parts list:

  • 1 x Raspberry Pi
  • 1 x ~4GB SD Card
  • 1 x Micro USB Cable
  • 1 x Network Cable
  • 1 x Powered USB HUB
  • n x USB Block Eruptor

That’s the basics of it. I already had the Raspberry Pi, and the necessary cables and SD card. These were just lying around. I needed to purchase a USB hub, so I bought a 7-port model for about $20. The hub needs to be powered as it will be running both the Raspberry Pi and the USB Block Eruptor. Considering power, the Raspberry Pi claims to draw somewhere around 1-1.2 Amps maximum while the USB Block Eruptor claims to draw 500 milliAmps maximum. I tested things out using my Kill A Watt and found that my setup with the USB hub, Raspberry Pi, and three USB Block Eruptors draws only 170 milliAmps and uses only 12.5 watts! So the projected power usage seems off for me, but I can’t guarantee the same results for you.

After buying my USB Block Eruptor on Amazon, I got it in about a week. The day after I got it and made sure it was working, I ordered two additional units to fill out the hub a little more.

Software

To do anything with Bitcoin, we’re first going to need a wallet and a Bitcoin address. Which wallet software you use is up to you. For desktop apps, there is the original Bitcoin-qt, MultiBit, and other third-party wallets. There are mobile applications like Bitcoin Wallet for Android, and even web-based wallets like BlockChain that store your bitcoins online. Figure out what client works best for you and use it to generate your Bitcoin address. The address is a series of alphanumeric characters that act as a public key for anyone to send bitcoins to. This address is also linked to a private key, not meant to be distributed, which allows the address holder to transfer funds.

In order to use the USB Block Eruptor, we’re going to need mining software. One great thing about using the Raspberry Pi as a platform is that someone has already made a Bitcoin mining operating system called MinePeon, built on Arch Linux. The distribution combines existing mining packages cgminer and BFGminer with a web-based GUI and some nice statistical elements. You’re going to need to download this.

To copy the operating system image file onto the SD card for your Raspberry Pi, insert the card into your computer and format it with the application of your choosing. Since I did this from a Windows system, I used Win32 Disk Imager. It is fairly straightforward: choose the image file, choose the drive letter, hit write, and you’re done.

Win32 Disk Imager


Okay, software is ready. Now to set up the Raspberry Pi, insert the SD card into the Pi’s slot. To power the Pi, plug it into one of the hub’s USB ports via the micro USB cable. Then, plug the USB hub into one of your Raspberry Pi’s free USB ports. Any USB Block Eruptors you have can be plugged into the remaining USB ports on the hub (not the Raspberry Pi directly), but keep heat flow in mind as they get pretty hot. Next, connect the Raspberry Pi to your home network. Finally, plug your hub’s power cord in and let the Pi boot up.

To go any further, you will need to determine your Raspberry Pi’s IP address. If your router allows for it, the easiest way is to log in to it and look for the new device on your network list. Alternatively, plug a monitor or television into your Raspberry Pi and log directly into the system using minepeon as the user and peon as the password. From there, run the ifconfig command to retrieve the internal IP address.

Navigate to MinePeon’s web interface by typing the IP address in your browser. You’ll have to log in to the web interface using the previously defined credentials: minepeon as the user and peon as the password. You should be presented with a screen similar to this (But without the graphs filled in):


MinePeon Status Screen

If you receive an in-line error about the graphs not being found, don’t worry. You just need to get mining and they will generate automatically, making the message go away.

Configuration

In order to utilize the software, you will now need to register with one or more Bitcoin mining pools. Mining pools work using distribution. The pool you are connected to will track the progress of each user’s attempt at solving a block for the block chain. On proof of an attempt at solving the next block, the user is awarded a share. At the end of the round, any winnings are divided among users based on how much power (how many shares) they contributed. Why use a mining pool at all? Payout usually only happens for one user when a block is solved. It can be very difficult for a user to mine a bitcoin, even after months of trying, as the odds of success are always the same. Mining independently offers the opportunity of a giant payout at some point, but pooled mining offers smaller, more regular payouts.
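As a concrete (entirely made-up) example of a proportional payout scheme: if a round’s reward is 25 BTC and you contributed 40 of the pool’s 10,000 shares, your cut works out like this:

```shell
# Proportional split, computed in satoshi to avoid floating point.
reward_satoshi=2500000000   # 25 BTC
my_shares=40
total_shares=10000
payout=$(( reward_satoshi * my_shares / total_shares ))
echo "payout: $payout satoshi (0.1 BTC)"
```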

Mining pools can differ greatly in how the payout is divided. I’d advise that you do some research as to which method works best for you. Alternatively, you could go about setting up your own mining pool, but I wouldn’t advise it without a substantial amount of processing power unless you’re willing to wait for a payout (if it even comes at all).

Anyway, I chose two mining pools: Slush’s pool (my primary) and BTC Guild (my fail-over in case my primary is down).

Registering with a mining pool is as simple as registering with any other website. After completing registration, you will be supplied with a worker name/password and the server address. These credentials can then be pushed into the MinePeon Pools configuration like so:

MinePeon Pool Settings


Next, go to the Settings page and change your password for the MinePeon web interface (it’s a good idea), the timezone (this is buggy right now and won’t look like it’s working on the settings page), and how much mining time you want to donate to the MinePeon maintainer (if any).

MinePeon Settings


If everything is configured correctly, within a few hours (or instantly), you should see some activity on your MinePeon Status screen. Additionally, be sure to check your account on the mining pool you signed up with to make sure everything is working as expected.

My MinePeon Pool & Device Status


My Slush's Pool Worker Status


Now, just sit back and let your machine go to town. The only thing you have to do at this point is make sure the USB hub continues to get power (don’t let anyone unplug it) and it should run continuously. On your first day or two, it may take a while before your status updates, and payout will take even longer.

My mining rig, hub side


My mining rig, Raspberry Pi side


Most mining pools offer status tracking for your payout, so you should be able to see how things are progressing fairly quickly. As of this article being published, I receive a payout of 0.01 Bitcoin roughly every 24-30 hours while running three USB Block Eruptors.

 

Pogo Unplugged

I got a little restless while waiting for my Raspberry Pi to get here, so I decided to mess around with a few other SoCs while I wait. I was drawn to the concept of these “plug” devices that were something of a flash in the pan a few years ago. Do you remember the SheevaPlug or the GuruPlug? Anyway, for the absent-minded or unacquainted, plug computers are tiny, tiny servers that run on very low power. Why keep that bulky server around when you can plug a little box into the wall, tuck it away, and forget about it for a while?

Love them or hate them, plug computers are cool little pieces of technology that are highly hackable and a good way to spend an afternoon. I decided to get a Pogoplug to mess around with, as they were inexpensive compared to some of the other plugs. They’re not the beefiest machines, but they’re not meant to be. I found a coupon that allowed me to get one at 70% off, so I took the gamble. What I got was a little NAS (which wasn’t very good at its job) with a dual-core 700MHz processor and 128MB of RAM. You have to go through the hassle of registering your plug with their website to enable SSH access, but it only took five minutes, and afterwards you can completely open up the hardware. With a 2GB flash drive and 20 minutes of my time, I installed Arch on my plug and then further configured it with web and IRC server software. Not bad for a cheap little box.

Pogoplug v3 Running htop

I liked the simplicity of hacking the plug and the potential it offered, so I went ahead and ordered two more.

Though the boxes have identical model numbers, the Pogoplugs I got in my second order were not the same. Apparently, my first plug was a v3, while my second two were v2s. What does that mean exactly? Pogoplugs use ARM processors. The v3 plug uses an ARMv6 while the v2 plugs use an ARMv5. So, slightly older processors, but the v2s also happen to run at 1.2GHz and have 256MB of RAM. Can’t complain there.

My Pogoplugs

I found an old guide for installing Debian on a Pogoplug, and though it did not work, I followed a link to the site’s forum and was able to talk with someone to help get me going. After a painless installation, I had a second hacked Pogoplug to keep me company. Now what about the third one? I haven’t cracked into it yet, but I have plans to try my luck with Fedora just for the variety.

What am I going to do with all these? I don’t really know. An IRC network made from just Pogoplugs sounds like fun but is completely impracticable. Any ideas?

 

Debian

So a few nights ago (I don’t remember how many because my sleep schedule is so messed up) I installed NetBSD on an old box I had lying around. This box is a P2 running at maybe 266MHz with 64MB of RAM; it’s been a while since I’ve seen the specs, but these are roughly them. This was the second computer I ever purchased, and it has gone through so many operating systems, I can’t believe the hard disk still spins up. Win98 to Ubuntu to Debian to CentOS to- well, you get the idea.

So I got the impulse to install something new on it. I mean, I completely ignored it for a year or two, and I couldn’t remember any passwords, so an install was logical, but normally I would have just flipped the switch and forgot about it for longer. Something grabbed me. I chose to go with NetBSD because I had used flavors of Linux and Windows before, but had no experience at all with a BSD environment. So the install seems to go well, it boots, I can login, but after a while, I thought it to be lacking. Maybe a bit over my head. There was no help command, it didn’t seem to network itself; I was lost. So instead of continuing with this install, I went over it with a familiar face: Debian.

I had installed Debian for the first time maybe 2 years prior when I had absolutely zero experience. I was trying to make a web server, and needless to say, that project failed. I could always install NetBSD again later, but for now I wanted something I could use right out of the box.

The install goes smoothly, and everything settles in nicely. This time, I know the thing networked correctly because I used a bare bones net-install disc and it had to connect to the Debian ftp servers to get the additional software to accompany the core. So everything loads up, I apt-get a few more applications, screw around with making directories and whatnot, and admire my install.

After getting an sshd installed, I can now just SSH into the box from any computer on my LAN, and possibly anywhere on the internet if I ever decide to forward the port. So now, I have remote access from upstairs, and it’s as if I’m sitting right in front of the box. The thing I really appreciate about this Debian install is being able to run scripts, be they Python, Perl, or my favorite: Bash. This way, I can execute a whole new set of programs that I had previously been locked out of due to OS restrictions.
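A minimal Bash script of the hello-world sort (the kind of thing pictured below) looks like:

```shell
#!/bin/bash
# The classic first script: print a greeting and exit.
echo "Hello World"
```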

The WOPR (Named after the Wargames computer) running a simple "Hello World" script.