Belated News: NODE VOL 01 and Presenting at Radical Networks 2019

I was quite busy in 2019 and worked on two big things that I never shared here. Both were fairly large undertakings for me, and I figured it was worth making note of them.


I served as the editor for the premiere issue of the NODE zine, a really cool publication to come out of NODE (which you may recognize from its video offerings). Production of the first issue took many months, and while I had previously written content for NODE, editing proved to be a new and different animal. That said, I also ended up writing a handful of articles for the issue, and am currently working towards production of a second! The zine is licensed under Creative Commons, and is freely available to download via the Dat network. A physical version was released but is currently sold out.

NODE VOL 01 is a 150-page zine for the NODE community. Volume 1 is packed with features on P2P projects such as Dat, Beaker Browser, Ricochet IM, Aether, and more. There are many tutorials covering projects like the new NODE Mini Server, 3D printing long-range wifi antennas, chatting via packet radio, and Librebooting the ThinkPad X200. There’s also a handy open source directory at the back, along with lots more.

Radical Networks 2019 — BGP: The Internet’s Fragile Beast

In October of 2019, I had the pleasure of presenting at the Radical Networks conference in New York City. My talk was on Border Gateway Protocol, the sort of invisible glue that holds the Internet together. The talk is available for download in odp format as well as pdf!

BGP (Border Gateway Protocol) manages how all of our packets are routed across the Internet. It is one of the most powerful and important protocols currently deployed on the ‘net, but it is also incredibly fragile. Devised as a quick fix 30 years ago (without concern for security), BGP is constantly blamed in the news as Internet outages occur worldwide due to misconfigurations by multinational telecommunications conglomerates or hijackings by government actors.

This talk will demystify the misunderstood protocol that is BGP, explain how entities exchange giant flows of data across the Internet, highlight past misuses, and consider what we may be able to expect in the future.

See you all in 2020!


Building a Replica Hackers Pager

Ever since I saw Hackers (1995), I’ve wanted one of the iconic yellow pagers that Cereal Killer sports at various points during the film. I missed out on the whole era of pagers, but I always thought there was just something cool about them, something that seems a little less amazing now that we live in a text-messaging world.

The Motorola Advisor from the film.

Many months ago, I became aware of an awesome website called Hackers Curator that attempts to index every prop (among other great things) from the Hackers film, and even makes some reproductions. Of course, they showcased the iconic Motorola Advisor pager, and even gave a custom-made replica away to an online buddy of mine via a scavenger hunt contest. I inquired to see if they had any for sale, and they did, but a single pager from them was outside of my price range. I thought I could do something similar for significantly less money, and it turns out that I can (you can too)!

I in no way want to sound as though I am disparaging Hackers Curator. I think they do a really good job, and I’ve even contributed a few scans to their site. If you don’t like the idea of piecing together supplies to customize your own pager, don’t have a lot of free time, or just don’t like getting spray paint on your hands, I’d definitely recommend you send them an email to see if they have any pagers in stock. I’d also like to make it known that they have a video on their YouTube channel that outlines how they made one of the pagers. I got a few ideas from their video, but ultimately used a few different techniques and hope to share my individual findings (and source files!) to create a more complete build guide for tinkerers out there.

Build List

  • Motorola Advisor pager – $10+
  • Krylon Fluorescent Yellow spray paint – $4
  • Krylon All Purpose Bonding White Primer spray paint – $4
  • Masking tape – $1
  • Fine grit sand paper – $1
  • X-Acto knife (or other precision cutting tool like a razor blade or box cutter) – $1-$5
  • Scrap cardboard (to put the pager body on for painting) – Free
  • 5x Sheets waterslide decal paper (and a printer to print on it with) – $4-$8
  • 1x Sheet metallic-gold paper – $1
  • Motorola sticker (optional) – $3

The heart of this project is of course the Motorola Advisor pager. Technically, there are two different versions of the original Motorola Advisor; the difference comes down to whether the arrow buttons have triangles inset into the rubber or just printed on top. Cosmetically, this doesn’t seem to make much of a difference, but if you want to be accurate to the movie, I believe the pager they use has the inset triangles. Also keep in mind there have been many Motorola pagers in the Advisor line, like the Advisor II, Advisor Gold, Advisor Elite, etc. I may have made some of those up, but it’s hard to tell when they have names like that. You just want the original blocky one.

I ended up buying the most inexpensive one I could find on eBay, for $10 including shipping. The internals in mine appear to work, but if you are just making a prop, it likely doesn’t matter whether the thing works at all. You may also notice that a lot of these pagers have some other company’s name on the front nameplate where “Motorola Advisor” should be. This is fairly common, so unless you happen to find a sticker that will fit over top of the other company’s name, you might want to pay a little more for a pager that actually says “Motorola Advisor.”

You’ll want to get some spray paint to paint the pager with; I recommend a basic white primer to cover up the black plastic entirely, and fluorescent yellow paint to match the color of the pager in the film. For whatever reason this paint has awful reviews online, but it works great and even glows under black light! More on that later.

Aside from the paint, you will want some basic supplies like masking tape (to tape off areas on the pager you don’t want paint on), an X-Acto knife (or other precision cutting tool to slice off excess masking tape), some fine grit sandpaper (to sand down some paint during finishing to make the pager look worn), and scrap cardboard (or wood, etc. to place the pager body on for painting). For these supplies, I used stuff I had around, which included 100 grit sandpaper that I probably should not have used as it was too coarse. You may want to get a variety of sandpaper and work your way through the grit levels. Lastly, before I forget, unless you have long fingernails, you are going to want some sort of pry tool like a small jeweler’s screwdriver or a guitar pick (which is always good to keep in the tool box).

The last important items you will need to get are waterslide decal paper and metallic-gold craft paper. Waterslide paper allows you to print directly onto a paper-backed transparent plastic film that you will later apply to the pager’s screen (from the back). They make different types for laser and inkjet printers, so be sure to buy the proper type for the printer you have. I bought a pack of five sheets so I had some extras if I messed up or wanted to do a slightly different design at some point. The metallic-gold craft paper is easy to find in a giant sheet at any craft store, just inspect it before you buy it as some sheets looked streaky. We will use this gold paper as a backing for our waterslide paper.


Okay, so now we have our pager.

The majestic Motorola Advisor!

Flip it over and remove the battery cover. It should slide out from top to bottom.

Battery cover removed.

Next, we need to open it up. If we flip the pager onto its side, we can locate the locking plastic tab keeping it together. These pagers have a tool-less assembly, so we can pry up this piece of plastic by slipping a fingernail or a piece of plastic (okay, or a jeweler’s screwdriver) into the crease closest to the corner (shown at the right of the picture here) and sliding the cover to the right, towards the pried-up end.

The plastic cover should slide right out when you get it to this point.

At this point, the pager should basically break down into its components, which we can easily reassemble later. If you ever find a part that seems to be held in by adhesive (like a side of the screen), you can safely wiggle this loose using a small screwdriver and mild pressure. The actual LCD screen is attached to a separate plastic case piece through three plastic tabs that can be released (again) with a small screwdriver or prying device of some kind.

Depressing the tabs to release the screen.

Now, everything should be completely broken down.

All of the components separated.


Before we can actually do some painting, we need to tape off the areas that we don’t want any paint to get on. This includes the screen, the nameplate, plastic parts on the sides, labels, and pretty much anything that isn’t black plastic. Apply tape liberally and use the precision knife to gently cut away the excess.

Taping off the nameplate.

Make sure to also tape off components or clear plastics on the underside of the case as well! You don’t want back-spray to leave any paint flecks here. I didn’t do this, but you should also try to tape off the back of the locking plastic tab and the corresponding parts of the case that the tab normally covers. This will make assembly and disassembly easier if you ever want to get back inside the pager; the layers of paint can make it really hard to slide the tab out again!

Ready to go! There should be 5 pieces to paint.

The primer we have is designed to bond to plastic, so we should be good to go with a first coat. You might want to clean the pager’s shell with alcohol or maybe do some sanding here, but I didn’t find that necessary. Place the case pieces on some cardboard and paint them following the directions on the can. When done, follow the drying instructions as well. Two coats should cover the case completely.

Primer done!

Next up, the yellow paint! Again, follow the painting and drying instructions on the can. For this, I ended up doing three coats total, but two might be good enough.

Fluorescent yellow looking good!

We can now carefully remove the masking tape.

Tape removed.

At this point, we can start sanding down the edges of the pager to remove some layers of paint. Remember to apply light pressure as you work; you can always take more paint away, but you can’t put any back. It helps to keep a screenshot from the film nearby while working on your wear pattern.

After some sanding, we’re looking pretty good.

Preparing the Decal

The coolest part of this build is mimicking the display of the pager in the movie so it reads “GRAND CENTRAL HACK THE PLANET”. To achieve this, I had to combine a few different things.

First, I wanted to make a canvas for the screen, so I made a Photoshop document sized at 2.628 inches by 0.872 inches (a little larger than the screen) with a resolution of 250 pixels/inch.
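For reference, those dimensions work out to a fairly small pixel canvas. A quick sanity check of the math (assuming the same 250 ppi resolution):

```python
# Convert the decal canvas size from inches to pixels at 250 ppi.
WIDTH_IN, HEIGHT_IN = 2.628, 0.872  # canvas size in inches
PPI = 250                           # resolution in pixels per inch

width_px = round(WIDTH_IN * PPI)
height_px = round(HEIGHT_IN * PPI)

print(f"{width_px} x {height_px} pixels")  # 657 x 218 pixels
```

Any resolution works as long as the physical print size stays the same; 250 ppi just keeps the text crisp at this scale.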

Then, I wanted to work on the text. Instead of making the typeface from scratch, I found an almost identical typeface called LCD Solid, which is freely available. I was able to create two lines of text, and adjust the kerning so the characters were spaced out more like in the film.

Next, I used a screenshot from the film to draw the little display icons by tracing over them. I ended up modifying them a bit to level them out and generally make them look a bit flatter. Ultimately, I was able to get a pretty close representation of the screen shown in the movie.

My completed screen.

You can download my finished PDF here for free. Please use it, and modify it, and make it better for other hackers to use!

The next step was to print it out on standard white computer paper, cut it down, and do a fit test to make sure it would look okay and not be cut off when it was printed on plastic for the final product.

Just holding a cut piece of white paper with the printed image shows how well it will fit.

Everything looked good, so now we can move on to printing on the waterslide decal paper. Our waterslide paper is clear plastic backed by white paper. After we print our image on the plastic side, the paper is soaked in water and the backing slides off, leaving a “sticky” side we will affix to the back of our pager screen. Because of this, we need to flip our newly created image horizontally before printing on the waterslide paper. Additionally, I copied and pasted the image many times to fill out the sheet in case the application didn’t work or came out poorly. It is a good idea to do this so you have several attempts, as waterslide paper can be a bit tricky.

A big sheet of waterslide paper with the image printed all over it.
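If you’d rather script the horizontal flip than do it in an image editor, the idea is simple: a raster flips by reversing each row of pixels. A minimal standard-library sketch of the concept (for real image files, Pillow’s Image.transpose(Image.FLIP_LEFT_RIGHT) does the same job):

```python
# Mirror an image horizontally by reversing each row of pixels.
# (With Pillow installed, Image.transpose(Image.FLIP_LEFT_RIGHT)
# performs this on actual image files.)
def flip_horizontal(pixels):
    """pixels: a list of rows, each row a list of pixel values."""
    return [row[::-1] for row in pixels]

tiny_image = [[1, 2, 3],
              [4, 5, 6]]
print(flip_horizontal(tiny_image))  # [[3, 2, 1], [6, 5, 4]]
```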

Now, we can cut away one of the decals and make sure it fits the space of the screen. Rough measuring can be helpful here.

Decal ready for application.

Follow the instructions included with the waterslide paper to remove the backing. Generally, you will place the decal in a bowl of warm (not hot) water for 30 seconds and then remove it. Flatten the decal out and line it up on the backside of the pager screen (text facing you). With your finger holding down the long edge of the decal, slowly work the backing up, away from your finger, until it is completely removed. Use a cloth or your finger with light pressure to smooth out any wrinkles or air bubbles between the decal and the screen. Do not use a credit card or your fingernail even if the waterslide paper instructions suggest it; this will scratch away some of the ink and leave the decal splotchy. If the decal doesn’t look good, don’t be afraid to start over. It can take a few tries to get the desired result.

Here is the applied decal posed next to a cropped screenshot of the pager from the film.

At this point, I assembled the unit, but was very dissatisfied by the gutter shadow between the screen and the display. Also, the display somehow had a ton of scratches that were not on the screen.

Look at that shadow!

You can also see that I applied my Motorola sticker to the nameplate at this point to make the pager look a little more stock. I could only find a “Motorola OPTRX” sticker for sale on eBay, so I used a Sharpie to black out the “OPTRX” text.

Here is the sticker before application.

But anyway, we want to eliminate that shadow. This is where the metallic-gold craft paper comes in. Cut a piece roughly the size of the screen, and place it between the screen and the display. No tape or glue is needed to secure it; it stays in place by friction alone. This is not only cheaper than spraying the area with gold paint, but it also makes it easier to change out the decal or reverse the whole modification so the original pager display can be used again.

The completed pager.

One of my favorite properties of the fluorescent yellow paint is its ability to glow under black light.

The pager body pops under UV light.

Also, it looks pretty good in the holster.

Ready to be clipped on to a belt.


That finishes up the Hackers pager. There is a bit of room for improvement, but I’m really satisfied by the result. To see some of my progress posts and to see what others are doing, be sure to check the #hackerspager tag on Mastodon. In total this build cost me a bit less than $30.

Aside from showing this pager off at cons, I hope to one day look into modifying it to run POCSAG so it will act as an actual pager and not just a show piece. That’s definitely further down the line, however.

This guide is organic, and subject to change. Let me know if you attempt it, how it works for you, and if you successfully make a cool pager by using it! Don’t hesitate to reach out.

Hack the planet!


Emulating a z/OS Mainframe with Hercules

Note: I started writing this article back in 2015 and hit a few roadblocks that I’ve finally been able to resolve in the last few months. There are a lot of similar guides out there (which I will reference in my sources), but I found them to be too ambiguous to be completely helpful. While I’ve learned a lot from writing this and troubleshooting the issues in existing guides, I am still far from a mainframe expert. There may be errors here, or things I could have accomplished in a better, more “proper” way. That said, I ultimately have a usable z/OS system up and running, and I hope I can help you get the same 🙂


I recently became aware that mainframes are still alive and well in the corporate world. But why? Why not just use supercomputers? Mainframes aim to perform a high number of instructions per second, usually measured in the millions. If you hear someone talking about millions of instructions per second (MIPS), they’re probably measuring mainframe throughput. Supercomputers, on the other hand, aim for a high number of floating-point operations per second (FLOPS). The difference is that mainframes usually deal with information processing in a short window, while supercomputers usually deal with simulations requiring a lot of floating-point arithmetic. A supercomputer might be more suited to weather calculations on Jupiter, but a mainframe is still a better candidate for processing a lot of transactions like you might find in banking or airline booking systems.

Okay, but why not use some sort of content distribution network or cloud computing? For years, mainframes have been touted as the go-to for mission critical processing, with minimal downtime. While cloud computing is catching up in this regard, it can be argued that mainframes are still unrivaled when considering their efficiency and maintainability. One mainframe may be able to process a chunk of data more efficiently than thousands of linked machines in remote locations. Now, consider maintenance. Would you rather update one machine or thousands? And scalability? Many cloud providers supply controls to ramp up power when needed (such as during the holidays) or dial it back during sleepier periods. Mainframes offer the same sort of control, and can easily scale up or down as needed without someone (or some piece of software) needing to roll out or switch off a few hundred more servers.

Mainframes are an interesting piece of technology that still has a purpose, but they are rarely discussed these days amid the influx of new processing technologies. It’s easy to try those newer services out, even as an amateur, but getting your hands on a mainframe is incredibly difficult in comparison. Even if you happened to be employed at a company still utilizing one, you would need training and shadowing sessions before even having the chance to touch a keyboard on a production machine.

Of course, there are ways to explore these systems without needing a physical unit, and that is what I’m going to get into momentarily. It is now possible to get your own taste of Big Iron right from your personal computer.


Before we get into installing Hercules, an IBM mainframe emulator, you are going to need to find an image of z/OS. z/OS is the operating system of choice for modern IBM mainframes, but it is a little hard to get your hands on unless you already have a full-scale system set up somewhere. There are images of z/OS, specifically version 1.10, floating around the Internet. I will not be sharing where these files can be found, and if you do find them, make sure you adhere to the software license while running z/OS.

Now, we also need a host system to support the Hercules emulator. While Hercules will run on Linux, Windows, and OS X, this guide will use a machine running Linux, specifically Debian 9 (Stretch). I will assume that you already have a system running Debian (or similar) and a non-root, sudo user with access to the z/OS files.

After all of this is set up, we can begin installation!

Configuring Hercules and c3270

First, we need to install some basic utilities and applications. However, one of them (c3270) is not available right away, as it is classified as “non-free” software under Debian. You can still install packages like this; you just need to configure your system to do so by editing the sources.list file to allow non-free packages.

Simply add non-free to the end of the stretch and stretch-updates sources by editing /etc/apt/sources.list with your favorite text editor:

$ sudo nano /etc/apt/sources.list

After editing, it should look like this:

$ cat /etc/apt/sources.list

deb http://deb.debian.org/debian/ stretch main non-free
deb-src http://deb.debian.org/debian/ stretch main non-free

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main

# stretch-updates, previously known as 'volatile'
deb http://deb.debian.org/debian/ stretch-updates main non-free
deb-src http://deb.debian.org/debian/ stretch-updates main non-free

Now we are ready to install the packages we need. All of them can be installed by running the following command:

$ sudo apt-get install -y c3270 hercules

As this starts executing, go and put on a pot of coffee. As soon as you turn the machine on and walk back to your computer, this command will probably be through.

The above has installed hercules, our IBM system emulator, as well as c3270, an IBM 3270-compatible terminal emulator that we will use to interface with our system.

Now, I’m going to assume you have the z/OS files somewhere on your Linux machine, possibly in a directory path like IBM\ ZOS\ 1.10/Z110SA/images/Z110\ -\ Copy. I will assume that the root IBM folder is in your home directory. We will reorganize things by creating a directory MAINFRAME within the home directory to house the z/OS installation:

$ cd ~
$ mv IBM\ ZOS\ 1.10/Z110SA/images/Z110\ -\ Copy ~/MAINFRAME
$ mkdir ~/MAINFRAME/PRTR

We will now have the following hierarchy:


At this point, we need to edit the config file that Hercules reads to boot our mainframe. You can open up the config file in your favorite text editor and follow along with the lines we will modify:


First, we need to edit lines 38/39/40 of the config to map to your PRTR, CONF, and DASD directories in your ~/MAINFRAME directory. We will be using full directory paths, so use your username in place of mine, famicoman.



Now, we edit networking information on line 115. We will need two unused IP addresses on our local network. We can get our machine’s current IP address using the ip command.

$ ip address show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fc:3f:db:09:60:59 brd ff:ff:ff:ff:ff:ff
inet brd scope global dynamic eno1
valid_lft 81093sec preferred_lft 81093sec
inet6 fe80::fe3f:dbff:fe09:6059/64 scope link
valid_lft forever preferred_lft forever

Our Debian machine is located at We can pick two additional addresses in the – range. and are currently unused so these will be chosen. will be something of a virtual gateway for the mainframe (think of this sort of like an address for Hercules itself, which we will use as our entry point) while will be an address for the z/OS machine. Keep in mind that will be exposed to your network independently of your host machine, creating a logically separate machine. This means you can access it with its own address, and create separate firewall rules, port forwarding, etc. as though it were a physical machine on your LAN.
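As an illustration (the addresses below are hypothetical; substitute values from your own network), Python’s ipaddress module can sanity-check that the addresses you pick for the Hercules tunnel and the z/OS guest actually sit on the host’s subnet:

```python
import ipaddress

# Hypothetical example values; replace these with addresses from your LAN.
lan = ipaddress.ip_network("192.168.1.0/24")
candidates = {
    "debian_host":  ipaddress.ip_address("192.168.1.5"),   # the host machine
    "hercules_tun": ipaddress.ip_address("192.168.1.70"),  # virtual gateway
    "zos_guest":    ipaddress.ip_address("192.168.1.71"),  # the z/OS machine
}

# The proxy ARP trick used later only works if every address
# lives on the same subnet as the host.
for name, ip in candidates.items():
    assert ip in lan, f"{name} ({ip}) is not in {lan}"
print("all addresses are on the LAN")
```

Also make sure the two addresses you choose are unused; a quick ping to each candidate before committing to it saves confusion later.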

We will replace the content at line 115 in the config with the following to create a virtual adapter to handle networking with our chosen addresses:

0E20.2 3088 CTCI /dev/net/tun 1500

Lastly, we edit line 31. This line sets the port for Hercules console connections (made by c3270); we will change it from the default of 23 to something of your choosing. I will be using port 2323, as I may want to use port 23 for something else and 2323 is not a privileged port.
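For reference, this is the CNSLPORT statement in the Hercules configuration; after the change, the line should read something like the following (the exact line number may differ in your copy of the config):

```
CNSLPORT 2323
```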


Now we can launch Hercules! (Do you smell your coffee percolating yet?)

I prefer to use screen sessions to keep things organized (if you don’t have screen, install it with sudo apt-get install screen, or just use tmux). This is also handy when using a virtual or remote host machine, as you can keep the sessions going when not connected to the host. The below will place you in a new screen session where we will launch Hercules:

$ screen -S hercules

And now for the launch, specifying the config we edited earlier:
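Assuming your configuration file sits in the directory you launch from and is named something like zos.cnf (yours may be named differently), the command looks like:

```
$ hercules -f zos.cnf
```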


Hercules will begin to load (and give you a lot of log output). Then you will be presented with the Hercules console.

The Hercules console after launching. Note our tun0 device opening and our custom console port specified.

Now, we want to create a 3270 terminal session with Hercules. So, press <CTRL> + A, then D, to detach your screen session, returning you to your original console window on the Debian host. Next, create a new screen session for our 3270 connection:

$ screen -S c3270

Now in our new screen session, we will launch c3270 to connect into Hercules, emulating a 3270 connection to actual hardware:

$ c3270 localhost 2323

You should be presented with a Hercules splash screen:

The Hercules splash screen.

Detach from your c3270 screen session and reattach to the hercules session. It might be a good idea to open a new terminal window on the host machine to keep multiple screen sessions open at once. I suggest two terminal windows, one with hercules and one with c3270. To reattach your hercules screen session, use the below command after detaching:

$ screen -r hercules

Now that you are presented with the Hercules console again, you should see your connection from the 3270 session in the logs.

HHCTE009I Client connected to 3270 device 0:0700

Booting z/OS

Now we can boot z/OS for the first time! In the Hercules console, type the following and hit <RETURN>:

ipl a80

z/OS will now boot. Your coffee should be done by now, so go grab a cup. I’ll wait.

Depending on the specs of the host machine, this could take a long, long time. The first boot took around 90 minutes for me, and could take even longer. At this point, you will get a lot of logging info in both the c3270 session and the hercules session. A lot of this looks like it could be reporting that something has gone horribly wrong, but don’t worry, it is likely okay. This is probably a good time to go for a walk outside with your coffee. Maybe take a good book and settle under a tree for a bit.

A Potential Boot Issue

I did run into the following message in my c3270 session while attempting to boot:


If this happens to you, you can safely type the following in the c3270 session and hit <RETURN>:

R 00,I

This will allow z/OS to continue booting.

This message in the 3270 console halted boot-up. Entering the provided command can resume system startup.

If you are unsure whether or not z/OS is fully booted (it can be hard to tell), the easiest thing to do is open another c3270 connection to localhost (maybe create a new screen session via screen -S terminal). If you get the Hercules splash screen again, you can safely close the session (<CTRL> + ], then type “exit”), wait a little longer, and try connecting again. Eventually, your second terminal session should connect and get to the log-on screen for your z/OS installation.

Welcome to the DUZA system!

To log in, we enter “TSO” at the prompt. When prompted for a username, enter “IBMUSER”.

Login starts by asking for a USERID.

Then, enter “SYS1” as the password.

The password gets blanked out as you type it.

From here, press <RETURN>, and the ISPF menu will launch.

You will get some brief messages after logging in. Press the <RETURN> key to go to the ISPF menu.


The ISPF menu serves as a gateway to a lot of system functionality.

Now in the ISPF menu, type “3.4” to load the Data Set List Utility.

Replace “IBMUSER” in the “Dsname Level” field with “DUZA” and press <RETURN>.

We will use the Data Set List Utility to locate our network settings.

Scroll down using the <F8> key in the Data Sets list and locate the one called DUZA.TCPPARAMS. With your cursor, click on the ‘D’ in “DUZA.TCPPARAMS” and use the left-arrow key to navigate three spaces to the left. Type the letter ‘E’ and hit <RETURN> to see the items in this data set.

We need to edit the TCPPARAMS for the DUZA system.

On the next screen, use your cursor to click on the first position on the line to the left of the word “PROFILE”. Type the letter ‘E’ and hit <RETURN> to edit this item.

Finally, we can edit the Profile.

Use <F8> to page down to line 90:

000090 DEVICE LCS1 LCS E20
000093 HOME
000094 ETH1
000096 GATEWAY
000097 = ETH1 1500 HOST
000099 DEFAULTNET ETH1 1500 0
000109 START LCS1

Modify the lines so they look like the following with our IP addresses outlined earlier (and don’t forget line 109!):

000090 DEVICE CTCA1 CTC e20
000091 LINK CTC1 CTC 1 CTCA1
000093 HOME
000094 CTC1
000096 GATEWAY
000097 = CTC1 1492 HOST
000099 DEFAULTNET CTC1 1492 0
000109 START CTCA1

To save the updated config, place your cursor on the first underline character to the right of “Command ===>” and type “SAVE” followed by the <RETURN> key. Next, type “END” at the same location, again pressing the <RETURN> key.

Here is what the updated settings look like via the 3270 terminal:

Our updated networking is ready to save. Note the IP addresses we specified earlier when configuring Hercules.

Next, we need to recycle the TCPIP service on the system. Go back to your first c3270 console session (detaching your terminal session) and type “STOP TCPIP” followed by the <RETURN> key in the console.


Wait a minute or two and then type “START TCPIP” followed by the <RETURN> key. After both commands, you should see a lot of console output regarding the TCPIP service. After starting the service back up, wait a few minutes before proceeding to make sure everything has come back up.

After running START TCPIP.

After restarting the TCP service, we need to detach the session and do a few more things on our host machine.

Back on the Debian host machine we need to enable IPv4 forwarding and proxy arp with the following two commands to get networking sorted out:

$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/conf/all/proxy_arp"
$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/conf/all/forwarding"
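Note that values written to /proc this way do not survive a reboot. To make them persistent, the standard approach is to set the equivalent keys in /etc/sysctl.conf (the key names below correspond directly to the /proc paths above):

```
net.ipv4.conf.all.proxy_arp = 1
net.ipv4.conf.all.forwarding = 1
```

Then apply them with sudo sysctl -p.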

Testing Networking

We can now test whether we can remote into our z/OS machine, and if we can get out from the inside. From the console on the host Debian machine, telnet to our mainframe using port 1023:

$ telnet 1023

Login with the credentials we used earlier (IBMUSER/SYS1) and try out a traceroute command:

Connected to
Escape character is '^]'.
Licensed Material - Property of IBM
5694-A01 Copyright IBM Corp. 1993, 2008
(C) Copyright Mortice Kern Systems, Inc., 1985, 1996.
(C) Copyright Software Development Group, University of Waterloo, 1989.

All Rights Reserved.

U.S. Government Users Restricted Rights -
Use,duplication or disclosure restricted by
GSA ADP Schedule Contract with IBM Corp.

IBM is a registered trademark of the IBM Corp.

IBMUSER:/u/ibmuser: >traceroute
CS V1R10: Traceroute to (
Enter ESC character plus C or c to interrupt
1 (  1 ms  1 ms  1 ms
2 (  70 ms  4 ms  3 ms
3 (  5 ms  6 ms  4 ms
4 (  10 ms (  8 ms (  10 ms
5 * * *
6 * * *
7 (  10 ms (  9 ms (  6 ms
8 (  16 ms  11 ms  11 ms
9 (  12 ms (  12 ms (  10 ms
10 (  10 ms (  15 ms (  12 ms
11 (  11 ms  19 ms  10 ms

You can additionally try out some more Unix commands:

IBMUSER:/u/ibmuser: >uptime
07:55PM  up 6 day(s), 03:54,  1 users,  load average: 0.00, 0.00, 0.00
IBMUSER:/u/ibmuser: >uname -a
OS/390 DUZA 20.00 03 7060
IBMUSER:/u/ibmuser: >whoami
IBMUSER:/u/ibmuser: >ls
CEEDUMP.20050812.162501.65568  ptest.c                        setup1
SimpleCopy.class               ptest.o                        setup2                ptestc                         setup3
hfsin                          ptestc.trc.16842781            zfs
hfsout                         setup

Back in your second 3270 connection (which, like me, you may have named terminal), you can keep entering “EXIT” in the “Command ===>” field until you return to the ISPF menu we saw earlier.

There are many options from the ISPF menu. Take some time to explore them when you get a chance!

From here, you can enter “6” in the “Option ===>” field to get to the Command menu. There, you can try out various other commands like ping or netstat by entering them into the “===>” field.

Here is the output of netstat. Notice how previously used commands are cached for you.

Shutting it Down

You always want to make sure to shut down your mainframe in the proper way. Otherwise, you may end up with corrupted data or an unbootable system!

From your first c3270 session, enter “S SHUTSYS”.


Then, after a little while, enter “Z EOD”.


Starting the shutdown process.

After a few minutes, the machine will halt. Then switch over to your Hercules console and enter “exit” to close out Hercules.


Rebooting the mainframe follows the same start-up process from initial boot, so you can easily come back to things.


That’s it, you now have a functioning mainframe! It will, however, be much slower than a real mainframe on real hardware (emulation on my machine usually clocks in between 5 and 12 MIPS).

Toggle back and forth between the console and graphical view in Hercules with the <ESC> key.

Feel free to explore the system, and start learning how to use z/OS and customize your installation!



Bypass Your ISP’s DNS & Run A Private OpenNIC Server (2600 Article)

Now that the article has been printed in 2600 magazine, Volume 34, Issue 3 (2017-10-02), I’m able to republish it on the web. The article below is my submission to 2600 with some slight formatting changes and minor edits.

Bypass Your ISP’s DNS & Run A Private OpenNIC Server
By Mike Dank


With recent U.S. legislation regarding Internet privacy, we see another example of control moving away from consumers and towards service providers. Following the news of this change, many have taken a renewed interest in methods that can take back some of the control and privacy that ISPs and other organizations have slowly been chipping away at.

One such service that consumers can liberate (and run) for themselves is DNS. The Domain Name System is responsible for mapping human-readable domain names to the IP addresses machines actually connect to. For a simplified explanation: when you go to visit a website your machine hasn’t seen before, your machine will query a caching server that is usually owned by your ISP or a company like Google or OpenDNS. This server will return the proper IP address if it has it cached, or query its way along a chain of DNS servers to the authoritative one controlling that domain. Once found, the IP address for the domain entered will trickle back to you and complete the initial request, allowing your machine to resolve it.

Companies that control these services have a direct look into the sites you are trying to visit. You can bet that more than just a few of them are logging queries and using them for marketing purposes or creating profiles based on who is sitting behind the keyboard at the address of origin. However, there are alternative DNS providers out there who can offer more privacy than others are willing to supply.

One such project, OpenNIC, has been operating a network of DNS servers for many years. Unlike traditional DNS providers, OpenNIC provides an alternate root to the ICANN system (which resolves traditional TLDs, top level domains like .com, .net, etc.) while maintaining backwards compatibility with them. Using OpenNIC, you can still resolve all of the same sites, but also get access to those run by OpenNIC operators, with TLDs such as .geek, .pirate, and .bbs. OpenNIC is made up of hobbyists, engineers, and tinkerers who not only want to explore the ins and outs of DNS, but also offer enhanced privacy and free domain registration for TLDs within their root! You may see OpenNIC as just-another-organization to query, but many operators are privacy-oriented, running their own servers devoid of logging and/or in countries that don’t poke around in your network traffic.

Aside from using an official OpenNIC DNS server to query your home traffic against directly, you can also set one up yourself. Using a modest VPS (512MB of RAM, 4GB of disk) hosted somewhere outside of the US (or the 14-eyes jurisdiction, if you prefer), you can subvert organizations who may be nefariously gathering information from your queries. While your server will still ultimately connect upstream to an OpenNIC server, any clients at home or on the go never will; they will query only your new DNS server.

Installation & Configuration

Setting up a DNS server is relatively easy to do with just a basic understanding of the shell. I’m running a Debian system, so some of the configuration may be different depending on the distribution you are running. Additionally, the steps below are for configuring a BIND server. There are many different DNS server packages out there to choose from, though BIND is arguably the most widespread on GNU/Linux hosts.

After logging into our server we will first want to switch to the root account to configure BIND.

$ su -

Next, we will install bind9 and DNS utilities using the package manager. This will automatically configure a (non-publicly-accessible) DNS server for us to work with, and install various DNS tools that will aid in setting up the server (specifically, dig).

$ apt-get install bind9 dnsutils -y

Now, we will pull down the OpenNIC root hints file for BIND to use. The root hints file simply contains information about OpenNIC’s root DNS servers that control the alternative TLDs OpenNIC has to offer (as well as provide backwards compatibility to ICANN domains). On Debian, we save this information to ‘/etc/bind/db.root’ for BIND to access.

$ dig . NS @ > /etc/bind/db.root

While the root hints information does not change often, new TLDs can be added to OpenNIC periodically. We will set up a cron job that updates this file once a month (you can make this more frequent if you wish) at 12:00AM on the first of the month. Let’s edit the crontab to add this recurring job.

$ crontab -e

At the bottom of the file, paste the following and save, activating our job.

0 0 1 * * /usr/bin/dig . NS @ > /etc/bind/db.root

Next, we will want to make some changes to the BIND configuration files. Specifically, we will allow recursive queries (so our BIND installation can query the OpenNIC root servers), enable DNSSEC validation (to verify integrity of DNS data on query to OpenNIC servers), and whitelist our client’s IP address. Edit ‘/etc/bind/named.conf.options’ and replace the contents with the following options block, making any edits as needed to specify a client’s IP address.

options {
    directory "/var/cache/bind";

    // Allow localhost and a client IP of
    allow-query { localhost;; };
    recursion yes;

    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };  // Only use if your server has an IPv6 iface!
};
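Before moving on, it's worth a syntax check. The named-checkconf tool ships alongside the bind9 package installed earlier and exits non-zero on a malformed file (a sketch; run it as root or via sudo, since the config lives under /etc/bind):

```shell
# Validate the BIND configuration syntax; no output means the file parses OK
named-checkconf /etc/bind/named.conf
```

This catches missing braces or semicolons before a restart takes the server down with them.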

Now, we will also change the logging configuration so that no logs are kept for any queries to our server. This is beneficial in that we know our own queries will never be logged on our server (as well as queries from anyone else we might authorize to use our server at a later date) for any reason. To make this change, edit ‘/etc/bind/named.conf’ and add the following logging block to the bottom of the file.

logging {
    category default { null; };
};

Finally, restart BIND so it can use our new configuration.

$ /etc/init.d/bind9 restart

Now, make sure that our server is using itself for DNS by checking the ‘/etc/resolv.conf’ file. If it isn’t already present, place the following line (the loopback address, pointing queries at our own BIND instance) above any other lines starting with “nameserver”.

nameserver 127.0.0.1
Testing resolution of both OpenNIC and ICANN TLDs can be done with a few simple ping commands.

$ ping -c4 
$ ping -c4 opennic.glue

Conclusion & Next Steps

Now that the server is in place, you are free to configure your client machine(s), home router, etc. to make use of the new DNS server. Provided you have port 53 open for both UDP and TCP on the server’s firewall, you should be able to add a similar ‘nameserver’ line to the ‘/etc/resolv.conf’ file (as seen in the previous section) on any authorized client machine, using the server’s external IP address instead of the loopback address.
Instructions for DNS configuration on many different operating systems and devices are readily available from a myriad of sources online if you aren’t using a Linux-based client machine. Upon successful configuration, your client should be able to execute the two ping commands in the previous section, verifying a proper setup!
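Opening port 53 depends on your firewall tooling; as a sketch with plain iptables (the client address below is a placeholder of my own, not from the setup above — substitute whatever you actually whitelisted in named.conf.options):

```shell
# Allow DNS queries over both UDP and TCP (TCP is used for large responses
# and retries, so don't skip it). is a placeholder client address.
iptables -A INPUT -p udp -s --dport 53 -j ACCEPT
iptables -A INPUT -p tcp -s --dport 53 -j ACCEPT
```

Restricting the source address keeps your resolver from becoming an open relay for DNS amplification attacks.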

As always, be sure to take precautions and secure your server if you have not done so already. With a functioning DNS server now configured, this project could be expanded upon (as a follow-up exercise/article) by implementing a tool such as DNSCrypt to authenticate and secure your DNS traffic.



Building DIY Community Mesh Networks (2600 Article)

Now that the article has been printed in 2600 magazine, Volume 33, Issue 3 (2016-10-10), I’m able to republish it on the web. The article below is my submission to 2600 with some slight formatting changes for hyperlinks.

Building DIY Community Mesh Networks
By Mike Dank

Today, we are faced with issues regarding our access to the Internet, as well as our freedoms on it. As governmental bodies fight to gain more control and influence over the flow of our information, some choose to look for alternatives to the traditional Internet and build their own networks as they see fit. These community networks can pop up in dense urban areas, remote locations with limited Internet access, and everywhere in between.

Whether you are politically fueled by issues of net neutrality, privacy, and censorship, fed up with an oligarchy of Internet service providers, or just like tinkering with hardware, a wireless mesh network (or “meshnet”) can be an invaluable project to work on. Numerous groups and organizations have popped up all over the world, creating robust mesh networks and refining the technologies that make them possible. While the overall task of building a wireless mesh network for your community may seem daunting, it is easy to get started and scale up as needed.

What Are Mesh Networks?

Think about your existing home network. Most people have a centralized router with several devices hooked up to it. Each device communicates directly with the central router and relies on it to relay traffic to and from other devices. This is called a hub/spoke topology, and you’ll notice that it has a single point of failure. With a mesh topology, many different routers (referred to as nodes) relay traffic to one another on the path to the target machine. Nodes in this network can be set up ad-hoc; if one node goes down, traffic can easily be rerouted to another node. If new nodes come online, they can be seamlessly integrated into the network. In the wireless space, distant users can be connected together with the help of directional antennas and share network access. As more nodes join a network, service only improves as various gaps are filled in and connections are made more redundant. Ultimately, a network is created that is both decentralized and distributed. There is no single point of failure, making it difficult to shut down.

When creating mesh networks, we are mostly concerned with how devices are routing to and linking with one another. This means that most services you are used to running like HTTP or IRC daemons should be able to operate without a hitch. Additionally, you are presented with the choice of whether or not to create a darknet (completely separated from the Internet) or host exit nodes to allow your traffic out of the mesh.

Existing Community Mesh Networking Projects

One of the most well-known grassroots community mesh networks is Freifunk, based out of Germany, encompassing over 150 local communities with over 25,000 access points., based in Spain, boasts over 27,000 nodes spanning over 36,000 km. In North America, we see projects like Hyperboria, which connects smaller mesh networking communities together such as Seattle Meshnet, NYC Mesh, and Toronto Mesh. We also see standalone projects like PittMesh in Pittsburgh, WasabiNet in St. Louis, and People’s Open Network in Oakland, California.

While each of these mesh networks may run different software and have a different base of users, they all serve an important purpose within their communities. Additionally, many of these networks consistently give back to the greater mesh networking community and choose to share information about their hardware configurations, software stacks, and infrastructure. This only benefits those who want to start their own networks or improve existing ones.

Picking Your Hardware & OS

When I was first starting out with Philly Mesh, I was faced with the issue of acquiring hardware on a shoestring budget. Many will tell you that the best hardware is low-power computers with dedicated wireless cards. This, however, can incur a cost of several hundred dollars per node. Alternatively, many groups make use of SOHO routers purchased off the shelf and flashed with custom firmware. The most popular firmware used here is OpenWRT, an open source alternative that supports a large majority of consumer routers. If you have a relatively modern router in your house, there is a good chance it is already supported (if you are buying specifically for meshing, consider consulting OpenWRT’s wiki for compatibility). Based on Linux, OpenWRT really shines with its packaging system, allowing you to easily install and configure packages of networking software across several routers regardless of most hardware differences between nodes. With only a few commands, you can have mesh packages installed and ready for production.
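As a sketch of what those few commands look like on an OpenWRT router (package names can vary between releases, so check opkg's search output first), installing layer 2 mesh support is roughly:

```shell
# On the router: refresh the package index, then install the batman-adv
# kernel module and its userspace control utility
opkg update
opkg install kmod-batman-adv batctl
```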

Other groups are turning towards credit-card-sized computers like the BeagleBone Black and Raspberry Pi, using multiple USB WiFi dongles to perform over-the-air communication. Here, we have many more options for an operating system as many prefer to use a flavor of Linux or BSD, though most of these platforms also have OpenWRT support.

There are no specific wrong answers here when choosing your hardware. Some platforms may be better suited to different scenarios. For the sake of getting started, spec’ing out some inexpensive routers (aim for something with at least two radios and 8MB of flash) or repurposing some Raspberry Pis is perfectly adequate, and will help you learn the fundamental concepts of mesh networking as well as develop a working prototype that can be upgraded or expanded as needed (hooray for portable configurations). Make sure you consider options like indoor vs. outdoor use, the 2.4 GHz vs. 5 GHz band, etc.

Meshing Software

You have OpenWRT or another operating system installed, but how can you mesh your router with others wirelessly? Now, you have to pick out some software that will allow you to facilitate a mesh network. The first packages that you need to look at are for what is called the data link layer of the OSI model of computer networking (OSI layer 2). Software here establishes the protocol that controls how your packets get transferred from node A to node B. Common software in this space includes batman-adv (not to be confused with the layer 3 B.A.T.M.A.N. daemon) and open80211s, which are available for most operating systems. Each of these pieces of software has its own strengths and weaknesses; it might be best to install each package on a pair of routers and see which one works best for you. There is currently a lot of praise for batman-adv, as it has been integrated into the mainline Linux tree and was developed by Freifunk for use within their own mesh network.
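To give a feel for the layer 2 side, a minimal batman-adv bring-up on a Linux node might look like the following (a sketch: the interface name wlan0 is an assumption, the wireless interface must already be in ad-hoc/mesh mode, and exact batctl syntax varies between versions):

```shell
# Attach the wireless interface to batman-adv; this creates the virtual
# bat0 interface that carries mesh traffic
batctl if add wlan0
ip link set up dev wlan0
ip link set up dev bat0

# List originators to verify that neighboring mesh nodes are visible
batctl o
```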

Revisiting the OSI model again, you will also need some software to work at the network layer (OSI layer 3). This will control your IP routing, allowing each node to compute where to send traffic next on its forwarding path to the final destination on the network. There are many software packages here, such as OLSR (Optimized Link State Routing), B.A.T.M.A.N. (Better Approach To Mobile Adhoc Networking), Babel, BMX6, and CJDNS (Caleb James Delisle’s Networking Suite). Each of these addresses the task in its own way, making use of a proactive, reactive, or hybrid approach to determine routing. B.A.T.M.A.N. and OLSR are popular here, both developed by Freifunk. Though B.A.T.M.A.N. was designed as a replacement for OLSR, each is actively used, and OLSR is heavily utilized in the Commotion mesh networking firmware (a router firmware based on OpenWRT).

For my needs, I settled on CJDNS, which boasts IPv6 addressing, secure communications, and some flexibility in auto-peering with local nodes. Additionally, CJDNS is agnostic to how its host connects to peers. It will work whether you want to connect to another access point over batman-adv, or even tunnel over the existing Internet (similar to Tor or a VPN)! This is useful for mesh networks starting out that may have nodes too distant to connect wirelessly until more nodes are set up in between. This gives you a chance to lay infrastructure sooner rather than later, and simply swap out for wireless linking when possible. You also get the interesting ability to link multiple meshnets together that may not be geographically close.
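Getting a CJDNS node running is a short exercise; the commands below follow the project's README at the time of writing (repository location and build steps may have changed since, so treat this as a sketch):

```shell
# Fetch and build cjdns, then generate a config and start the router
git clone https://github.com/cjdelisle/cjdns.git
cd cjdns
./do                                  # build script; produces ./cjdroute
./cjdroute --genconf > cjdroute.conf  # generate keys and a default config
sudo ./cjdroute < cjdroute.conf       # start; creates a tun device
```

Peering with other nodes, whether over UDP tunnels across the Internet or over a local link, is then a matter of adding entries to the connectTo section of cjdroute.conf.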

Putting It Together

At this point, you should have at least one node (though you will probably want two for testing) running the software stack that you have settled on. With wireless communications, you can generally say that the higher you place the antenna, the better. Many community mesh groups try to establish nodes on top of buildings with roof access, making use of both directional antennas (to connect to distant nodes within the line of sight) as well as omnidirectional antennas to connect to nearby nodes and/or peers. By arranging several distant nodes to connect to one another via line of sight, you can establish a networking backbone for your meshnet that other nodes in the city can easily connect to and branch off of.

Gathering Interest

Mesh networks can only grow so much when you are working by yourself. At some point, you are going to need help finding homes for more nodes and expanding the network. You can easily start with friends and family – see if they are willing to host a node (they probably wouldn’t even notice it after a while). Otherwise, you will want to meet with like-minded people who can help configure hardware and software, or plan out the infrastructure. You can start small online by setting up a website with a mission statement and making a post or two on Reddit (/r/darknetplan in particular) or Twitter. Do you have hackerspaces in your area? Linux or amateur radio groups? A 2600 meeting you frequent? All of these are great resources to meet people face-to-face and grow your network one node at a time.


Starting a mesh network is easier than many think, and is an incredible way to learn about networking, Linux, micro platforms, embedded systems, and wireless communication. With only a few off-the-shelf devices, one can get their own working network set up and scale it to accommodate more users. Community-run mesh networks not only aid in helping those fed up with or persecuted by traditional network providers, but also those who want to construct, experiment, and tinker. With mesh networks, we can build our own future of communication and free the network for everyone.


I’m in 2600 Magazine

As of the Autumn 2016 issue, I now have an article appearing in 2600: The Hacker Quarterly! My article is titled “Building DIY Community Mesh Networks,” and covers topics in building and organizing local mesh networks.


The issue can be purchased in Barnes & Noble stores, as well as physically or digitally through the 2600 site. I will shortly be making the article available online as well.


The Evolution of Digital Nomadics

This article was originally written for and published at N-O-D-E on October 18th, 2016. It has been posted here for safe keeping.


In the Autumn of 1983, Steven K. Roberts pedaled off on a recumbent bicycle and pioneered a new revolution in the way people worked.


Stuck in the drudgery of suburban Ohio, Steve was bored. He had many possessions, a house, and work as a technology consultant and freelance writer. Steve desired adventure and felt like taking a risk, so he sold off all of his possessions, put his house on the market, cut ties with friends and family, and gave up his steady employment. He sacrificed the security he had built up over the years and invested in a custom bicycle, the “Winnebiko” which he would ride 10,000 miles across the U.S. for the next 18 months. “My world was no longer limited by the constraints of time and distance—or even responsibility. The thought was both delicious and unsettling, and I suddenly realized, alone in this unfamiliar city, that I was as close to ‘home’ as I would be for a long time,” Steve wrote in a book about his travels, Computing Across America, published in 1988.

The Winnebiko was not your ordinary bicycle. Apart from the custom frame and hand-picked parts, Steve outfitted his rig with solar panels, lights, radios, a security system, and most importantly a TRS-80 portable computer. Traveling the country from couch to hostel and everywhere in-between, Steve continued to work as a freelance writer, documenting his adventures. Jacking into borrowed phone lines for Internet access in the late night or writing from the comfort of an abandoned chair on the side of a snowy mountain, Steve was working in a way that was unconventional for the time.

Steve coined a term for himself, the “technomad,” combining the concepts of high-technology with traditional nomadics (the latter possibly being influenced in-part by nomadics as they were presented in Stewart Brand’s Whole Earth Catalog, a counter-culture publication promoting self-sufficiency and the do-it-yourself attitude in 1968). Later, Steve would construct more complex and technologically-enhanced bicycles for future long-term journeys.

The concept of “telecommuting” was not new in 1983, as the term had been created a decade earlier by Jack Niles, a former NASA engineer, to describe remote work done via dumb terminal. By the 1990’s, after Steve’s original adventure, telecommuting had taken the world by storm and continued to grow. By the early 2010’s, almost half of the U.S. population reported working remotely at least part time. Remote work was starting to go mainstream.

But then there are people like Steve. What became of this movement to leave it all behind and work from the open road? By the late 1990’s we saw the use of the phrase “Digital Nomad” in the Makimoto and Manners book of the same name to explore the concept of digital nomadics and determine its sustainability. The infrastructure to support the lifestyle was improving as well. We saw the inclusion of WiFi technology in laptop computers and the rise of payment systems such as PayPal to support a generation of online-only workers on-the-move.

As time progressed, we only saw more of the tech-savvy convert to the rambling lifestyle, with bolder individuals traveling all over the world, settling down for days, weeks, or months at a time before picking up and starting all over. Today, more companies are providing this opportunity to their employees, with some outfits never actually meeting their workers face-to-face. Employees enjoy the flexibility, while employers enjoy cherry-picking applicants from a larger pool and reduced overhead costs previously spent on office space. Various communities have popped up, such as Reddit’s /r/digitalnomad and /r/vandwellers, to offer support for the grizzled vagabonds and tips to the bright-eyed newcomers. Here, you may find advice on what to carry, how to travel on a shoestring budget, and lists of companies that are nomad-friendly.

In popular culture, we see the idea of the digital nomad becoming more prevalent. For example, Ernest Cline’s 2011 novel Ready Player One features the character Aech who lives in and works out of a recreational vehicle. As the future comes into view, we can only expect more people to work remotely and live simply, embracing the freedom of change and fighting to avoid complacency. The technology is only becoming more accommodating as equipment becomes smaller, faster, and reliably connected in even the most rugged of situations. We not only see a rise in letting employees work where they want, but also when they want. Now that a network connection can exist within a jacket pocket, we are on the verge of the 24/7 worker, always on call. When your office isn’t anywhere, it’s everywhere. Some day soon, we may see digital nomads living in self-driving vehicles that methodically navigate the city limits while the occupant eats, sleeps, and works. Similar to Don DeLillo’s Cosmopolis, wherein the protagonist spends most of his day conducting business out of his moving soundproof, bulletproof limousine—a rolling fortress filled with computers and television screens—we may see this concept coming to fruition without the human behind the wheel.

As for Steve, he is still living the technomadic life, but is more drawn to the offerings of the water as opposed to the open road. “I’m now immersed in nautical projects, as well as building some substrate-independent technomadic tools,” Steve writes to me after I purchased a handful of issues of The Journal of High-Tech Nomadness, Steve’s own long out-of-print paper periodical.

Whether you do most of your work in an office or a coffee shop, you cannot deny that things are changing for the modern employee as they become more entwined with technology. “I’m riding a multi-megabyte Winnebiko with dozens of communications options, and more wonders lie just ahead,” Steve writes after upgrading his bicycle for his second journey. “[I]t is no longer very difficult to be a deeply involved, productive citizen of the world while wandering endlessly. Because once you move to Dataspace, you can put your body just about anywhere you like.”



The Hermicity Interview – Drones, DAO, & Deliverable Soylent

This article was originally written for and published at N-O-D-E (since removed) on May 20th, 2016. It has been posted here for safe keeping.




I recently spoke with John Dummett, the creator of H E R M I C I T Y, a project aiming to send packages of Soylent by drone to hermits living in remote areas who pay via smart contract through Ethereum.

John and I started exchanging messages on Reddit after I spotted a link he posted about his project in a decentralization-themed subreddit. Speaking with John, I could immediately pick up that he was a person who valued his privacy and his relationships. He was not, however, a shy person regarding his passions. Every question I posed was answered with an enthusiastic, complete explanation, welcome to assorted follow-ups and forgiving to my novice understanding of select technologies.

We waxed rhapsodic about the project and its bright potential. Often late at night for me in a dark room lit only by monitor glow, early afternoon for him the next day, I picked his brain and explored the radical concepts that seemed to ebb and flow organically without inhibition.

An 18-year-old Australian native, John already has professional work experience with blockchain technology and a dream to change the world. He believes that the future we have been waiting for is now, and he doesn’t want anyone to miss it.

[N-] Could you explain what H E R M I C I T Y is for those who may not have heard of it?

[JD] H E R M I C I T Y is going to allow people to live alone. Up until very recent technological advancements, we were social creatures by necessity as living alone was costly, unsafe and uncomfortable. We’re going to make living alone accessible.

As for some concrete details, here is part of our general road-map and practical details section that will be revealed on the upcoming revamped website. We are going to develop a parent DAO to all the Hermicites. Members of the community will be able to submit proposals for a Hermicity to this DAO. They will use a standardised template that we will develop. The proposal will require information on where the Hermicity will be, how the land will be acquired (rent or buy, etc.), proof that the current custodian of that land is willing to have a Hermicity built on it, what type of micro-dwellings will be built on said land, how much they will cost, how many residencies to said Hermicity are up for grabs, the cost of the drones and other things to be delivered (Soylent, water, etc.)

From here, the total cost of the project is divided by the amount of residencies up for grabs and then there is a time-frame given for the project to be funded. Through multi-sig addresses we are going to build an auctioning system for people to bid on the residencies inside given Hermicities.

For instance, say I go and arrange for someone with a huge, beautiful and remote farm to let me have four hermits living on it for a cost of $20,000 a year. Then I find a company that can build a simple shack-like microdwelling for $20,000 each. (I haven’t done the modelling yet but [say] Soylent, water, other items for delivery cost $20,000 and then the drone itself is $10,000. Total cost to run the Hermicity for 1 year is $70,000. The cost of the residencies is $17,500.) I’d fill out the proposal form and submit it to the parent DAO.

The parent DAO then generates an array of ether addresses for this given proposal, of which the top 4 will be successful (as long as they each hold at least $17,500) and the rest of the addresses will be refunded. The ether funds then move from those addresses to begin paying for the Hermicity to be built and the winners get their residencies.

Unless the proposal for the relevant Hermicity has a plan of what to do with the extra funds (for instance you could say that if someone bids $25,000 then they can get a “better” microdwelling) then the extra funds will simply go to the parent DAO in order to fund the continued development of the entire project. Some of these excess funds could also be used for our team to sponsor people looking to create proposals.

We hope the community is passionate enough about this that they go out and try to create Hermicities all over the world. The new site revamp will include a forum section so we can start discussing what these first proposed Hermicities may look like.

It’s important to have this proposal process rather than me and my team making them all ourselves. By getting out of the way, the free market will ensue – we are not going to stop people from making any kind of proposal they like. Perhaps a Hermicity with more expensive residencies may allow for hot cooked meals to be drone-lifted to the hermits.

The reason I used Soylent as the example on the site is that it is nutritionally complete and cost effective, and [I] imagine that the first Hermicities will probably be as cost efficient and accessible as possible. Although there will probably eventually be more elaborate and fancy Hermicities like I described above, initially I think the market for Hermicities will mostly be asking for the most cost effective yet complete package as possible so Soylent’s nutritional balance and low cost seems perfect.

Furthermore, the same can be said for other parts of the Hermicity. Initial Hermicities may be hand-made shacks with limited features, whereas eventually you will see proper microdwellings with full blown heating and cooling, solar power, fast Internet connection, etc. It will be interesting to see at what comfort point the market starts at, though. The beauty is that what we are trying to build will allow other people to figure this out for us – [I] imagine we could have many Hermicities popping up all over the world before long.

[N-] How did you get the idea for H E R M I C I T Y?

[JD] For a long time I have been interested in making the hermit lifestyle more accessible and now that it looks like the technology is finally here to do it – for the first time in human history – it just made sense to execute this idea and try to get this done.

I’ve always been interested in the idea of living alone. This keen interest is really the result of a heap of things that have happened in my life. But really, H E R M I C I T Y sits at the intersection of a lot of really interesting technology and social movements. There is a lot of fascinating potential in this project when we think about it. What untapped potential could be unlocked by allowing people to easily spend time alone? How low can we get the price of a residency, and how many people will be interested? If we no longer need each other to survive and people are satisfied living alone and comfortably with an Internet connection, what is the future of the nation state? (This last point is in my eyes the logical final state of Ethereum – technology autonomously running everything in a fashion where no human has to work or gets left behind.) Could the first settlement on Mars be a Hermicity?

Although I am sure we will be able to implement a lot of these ideas successfully, H E R M I C I T Y as a project is already satisfying the Ethereum community just as a thought piece that symbolizes what we are doing, what we value, and what we envision the future of the world could look like.

Outside of web development and design, I am an artist, writer and philosopher, and I think this project reflects that.

[N-] Do you yourself have an interest in living in solitude as a hermit? What advantages do you see with this lifestyle?

[JD] Human beings are social creatures – by necessity. Some of us are not social just by necessity; we are extroverts who enjoy each other’s company and being actively engaged with the people around us. Other people are social only by necessity, preferring to be alone and not enjoying direct contact with other people.

The necessity part comes in because until very recently we have not had the technological capability to easily, comfortably and safely live alone. New technology makes the hermit lifestyle accessible.

The question Hermicity poses is, “how many people are interested in this now, and how many more people will be interested in this in the future?”

Almost every great thinker has spent much time alone. I know that the time I have spent alone has made me a wiser, more intelligent person. Perhaps by making seclusion more accessible we can unlock a lot of human potential that is standing idle at the moment, locked away in bodies that are too distracted by the other bodies around them and are therefore unable to look inward and unlock their unique ideas and energy.

[N-] Where is H E R M I C I T Y in the development cycle? How far away would you say we are from seeing a deliverable?

[JD] Admittedly it’s still very early days, but I am working hard on the revamped site which will better explain what we are trying to do, how we will do it, and why it is important.

I have had a lot of people contacting me over the last week and will be looking to put together a proper team and roadmap. Eventually I would like to work on this full time. Once the parent DAO has been developed, along with the other technical framework and proposal template work, it will be up to the community we have built up to start delivering. Obviously we will be pretty instrumental in supporting them with this.

[N-] How many people are working on the project, what are their backgrounds?

[JD] At the moment we have three people working on the project. I am a front-end web developer and designer. We have another web developer and community manager who has extensive experience running events and managing communities. Finally, we have a cloud rapper and philosopher – we believe it is important to build a really optimistic and positive culture around our DAO, because otherwise success will be much harder to attain. (Anything is possible; optimism and energy are what allow you to push harder and further.)

As I said before, many more people have reached out. I will be contacting these people to try to build a small and efficient dev team. Thankfully I have been contacted by people with skills across a wide range of areas, most importantly in decentralised programming. We should be able to get that parent DAO up and running soon.

[N-] How did you get your start with blockchain technologies?

[JD] I spent last year working at CoinJar, which is Australia’s largest bitcoin exchange. On the team there, we were constantly thinking about what to do next, and a huge part of that is thinking about what’s possible using the blockchain.

The bitcoin blockchain and its potential are too limited. Of course it is always going to be amazing given that it was the first and it kick-started this whole movement – but Ethereum is superior in terms of technology and its core developer team, so it’s where the best blockchain projects are going to happen, not bitcoin.

[N-] Tell me more about the technology, what are the advantages of adopting the DAO (decentralized autonomous organisations) and smart contracts? How will you interact with the drones?

[JD] Utilising Ethereum as the backbone of this project is so important; we couldn’t do this without Ethereum. DAOs offer interdependence – a perfect middle ground where individuals voluntarily interact with each other on terms that suit them, while still tapping the magic of working together. The beauty of DAOs is that they allow people to work together without compromising on their independence. Therefore, DAOs are the future – I believe a world run on DAOs is one that offers scalable order by allowing people to indirectly work together and to automate the sharing of resources so that no one is left behind.

The parent DAO that I spoke of earlier, with its world-first auctioning code, will be a groundbreaker. The drone deliveries will be triggered by ether transactions.

When people submit proposals to the parent DAO, it will be up to prospective residents to look through the proposal and ensure that everything is up to scratch. If the proposal is successful, the proposer will then build the Hermicity. We intend to provide DAPP (decentralized app) code to run the drone deliveries of Soylent (and other needs as specified in the proposal) in an autonomous fashion. Live by the DAO.
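The payment-triggers-delivery mechanic described here can be sketched in miniature. The following is a toy Python simulation under invented names (`HermicityContract`, `schedule_delivery` – neither is from any real Hermicity codebase); a real deployment would be an Ethereum smart contract, which this deliberately does not attempt to implement.

```python
# Toy simulation of the described flow: a sufficiently large "ether"
# payment to a contract-like object triggers a drone delivery.
# All names and the price are hypothetical illustrations.

class HermicityContract:
    def __init__(self, price_wei, schedule_delivery):
        self.price_wei = price_wei                  # cost of one supply drop
        self.schedule_delivery = schedule_delivery  # callback that "fires" the drone
        self.balance = 0

    def receive(self, sender, value_wei):
        """Mimics a payable function: enough ether queues a delivery."""
        if value_wei < self.price_wei:
            raise ValueError("insufficient payment")
        self.balance += value_wei
        self.schedule_delivery(sender)

deliveries = []
contract = HermicityContract(
    price_wei=10**16,  # 0.01 ETH in wei; an arbitrary example price
    schedule_delivery=lambda resident: deliveries.append(resident),
)
contract.receive("hermit-01", 10**16)
```

The point of the sketch is only the shape of the logic: payment in, autonomous action out, with no human in the loop.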

[N-] I’m wondering how this model would work with self-sufficiency. Would each hermit be beholden to whomever set up the Hermicity for a location, or could the actual inhabitants pool their resources on the drones/deliveries themselves? For example, could there be a situation where after you receive a delivery, you then charge the drone up for the next person, and then be rewarded for doing so automatically?

[JD] Each Hermicity will work differently depending on the proposal that was at the root of it being built. Some will be bare bones, others will be more sophisticated. It will be up to the proposers and the wants and needs of the market of hermits that develop.

I could tell you what I think will happen, but I would prefer to encourage the community to stay as open-minded as possible. I’m looking forward to the different ideas that develop and to what become de facto standards. I’m imagining there will be a competitive marketplace of Hermicity proposals.

[N-] A lot of people bring up how young you are. Do you find your age to be advantageous to working on this project?

[JD] Being young is crucial to the success of this project, and that’s why I am a big fan of things like the Thiel Fellowship – they realize how important it is for young people with big ideas and a lot of energy (but lacking the right circumstances) to be able to have a crack.

It’s advantageous to be young because you generally have a more open mind and a wider scope of possibility than older people: you’ve had less time to develop prejudices and less time to compromise on your own beliefs. It’s important to start working really hard on these things now while I am young and not wait. When these projects and activities work out (which they do) and I progress, it allows me to keep being myself and to avoid compromising on my own ideas and beliefs.

I hope that the community or movement around this project will grow large enough that I can be financially supported to work on this full time or, if that doesn’t happen, that I will be able to get into a program like the Thiel Fellowship.

[N-] What are some of the biggest obstacles or barriers to entry you are facing right now? Are there any legalities you are worrying about with regards to operating drones?

[JD] The biggest barrier to entry is personal. I work full time, so I don’t have as much time as I’d like to work on this, but hopefully these circumstances will change soon. In the meantime I am just working as hard as I can.

As far as legalities, by opening up the process so that anyone can propose a drone anywhere in the world, I envision that many teams of people will be working to get Hermicities set up all over the earth. Some jurisdictions will be easier to find arrangements with than others, but as I said before, large remote farms or other large privately owned lands would be a great starting spot I imagine. It will be interesting to see what proposals people come up with.

[N-] Will you be operating your own fleet of drones, or adopting a model like Uber/Lyft where you use or share time controlling drones owned by others?

[JD] Once again, it’s simply up to the proposers. I imagine that for the sake of more secure deliveries it would be better for the drones to be owned by the DAO; if they have an autonomous solar/battery charging station, human interaction would be very rare and eventually completely unnecessary. It will be great to watch the proposers innovate in this area in particular.

[N-] Have you considered users besides hermits such as digital nomads? Any thought of a potential use for humanitarian aid?

[JD] There are many different potential use cases for this idea and the associated technology that will be developed. It will absolutely be up to the proposers to come up with the practicalities of executing these ideas. We will offer the technology and other framework based support for people to get started and then we will get out of their way.

[N-] How has feedback been so far since you have announced H E R M I C I T Y? Has anything surprised you?

[JD] The responses have been overwhelmingly positive, and some of them really funny. I’ve had well over 100 responses now over email, and many people have messaged the HERMICITY Reddit account.

The tweet from Vitalik (co-founder of Ethereum) was really great – he got it.

His second-in-command also emailed me saying the concept was art, so he got it too, which was great.

I haven’t received negative feedback, though there have been half a dozen or so emails from people who don’t get it.

The number of people who want to help out is really high as well.

[N-] Do you currently have any specific roles on your team that need to be filled? How can people contribute?

[JD] We have been contacted by so many people that we are still trying to go through all the emails so we can start responding. Once we have a solid roadmap we can start advertising positions. There aren’t any major skill shortages at this point, but I am looking forward to expanding the team when we get to that stage.



I Wrote An App

I’ve been putting off this post for a while. Not for any reason in particular, I just like to have things arranged in a certain way before I push them out to people.

This is analogous to the mobile app project this post refers to as a whole. In 2013, building on a friend’s idea, I created a mobile application that lets a user send a random insulting text to someone on their contacts list. It was for fun of course, and we called it BitchyTexts. It was (and still is) Android-only, and was developed over the course of a few weeks in the little time I had between classes. I distributed it to my friends, who distributed it to their friends, and the results were mostly positive. It was crude and thrown together, but it worked and did its job well.

The next logical step, of course, was a Play Store release. However, I needed to clean up my code, get things under version control, and brave the submission process. I worked a little here and there, but ultimately getting the app out the door fell to the bottom of my priority list. In late 2015, two years after I decided I wanted to do a Play Store release, I picked development back up again and started knocking out little pieces here and there to reach my desired outcome.

This became one of my 2016 goals, and I was champing at the bit to release something. There was no use sitting on it; store releases are an iterative process, and I could always improve here and there after the application was live.

So, I submitted it. It was approved, and it’s out there for anyone to download and use. There are changes I want to make, and there are other things I want to work on for it (an improved website, back-end services, etc.), but those can come at any time. There is a lot of planning to do, but nothing too crazy.

BitchyTexts in action!


Check it out here, and let me know what you think!


I2P 101 – Inside the Invisible Internet

This article was originally written for and published at N-O-D-E on May 1st, 2016. It has been posted here for safe keeping.


The Invisible Internet Project (more commonly known as I2P) is an older, traditional darknet built from the ground up with privacy and security in mind. As with all darknets, accessing an I2P site or service is not as simple as firing a request off from your web browser as you would with any site on the traditional Internet (the clearnet). I2P is only accessible if you are running software built to access it. If you try to access an I2P service without doing your homework, you won’t be able to get anywhere. Instead of creating all new physical networking infrastructure, I2P builds upon the existing Internet to take care of physical connections between machines, creating what is known as an overlay network. This is similar to the concept of a virtual private network (VPN) wherein computers can communicate with one another comfortably, as though they were on a local area network, even though they may be thousands of miles apart.



I2P was first released in early 2003 (only a few months after the initial release of Tor), and was designed as a communication layer for existing Internet services such as HTTP, IRC, email, etc. Unlike the clearnet, I2P focuses on anonymity and peer-to-peer communications, relying on a distributed architecture model. Unlike Tor, which is based around navigating the clearnet through the Tor network, I2P’s goal from the start was to create a destination network, and it was developed as such. Here, the focus is on community and anonymity within it, as opposed to anonymity when using the clearnet.


When you connect to I2P, you are automatically set up as a router. As a router, you exist as a node on the network and participate in directing or relaying the flow of data. As long as you are on the network, you are always playing a part in keeping the traffic flowing. Other users may choose to configure their nodes as inproxies. Think of an inproxy as a way to get to an I2P service from the clearnet. For example, if you wanted to visit an eepsite (an anonymous site hosted on I2P, designated by a .i2p TLD) but were not on I2P, you could visit an inproxy through the clearnet to provide you access. Other users may choose to operate outproxies. An outproxy is essentially an exit node. If you are on I2P and want to visit a clearnet site or service, your traffic is routed through an outproxy to get out of the network.


There are numerous advantages to using I2P over another darknet such as Tor, depending upon the needs of the user. With I2P, we see a strong focus on the anonymity of connections, as all I2P tunnels are unidirectional. This means that separate lines of communication are opened for sending and receiving data. Further, tunnels are short-lived, decreasing the amount of information an attacker or eavesdropper could have access to. We also see differences in routing, as I2P uses packet switching as opposed to circuit switching. With packet switching, messages are load-balanced among multiple peers on the way to the destination, instead of taking the single route typical of circuit switching; in this scenario, all peers participate in routing. I2P also implements distributed dissemination of network information: peer information is dynamically and automatically shared across nodes instead of living on a centralized server. Additionally, the overhead of running a router is low, because every node is a router, instead of only a small percentage of users choosing to set one up.
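The packet-switching contrast above can be made concrete with a toy sketch: a message is split into fragments, each fragment travels down a different tunnel, and the destination reassembles them. The tunnel names and fragment format below are invented for illustration and bear no relation to I2P's actual wire protocol.

```python
# Toy illustration of packet switching across multiple peers, in contrast
# to sending everything down a single circuit. Fragments are dealt out
# round-robin across tunnels and re-ordered at the destination.

from itertools import cycle

def split_across_tunnels(message, tunnels, size=4):
    """Load-balance fixed-size fragments round-robin over the tunnels."""
    fragments = [message[i:i + size] for i in range(0, len(message), size)]
    routing = cycle(tunnels)
    # Each packet: (tunnel used, sequence number, payload)
    return [(next(routing), seq, frag) for seq, frag in enumerate(fragments)]

def reassemble(packets):
    """The destination re-orders fragments by sequence number."""
    return b"".join(frag for _, seq, frag in sorted(packets, key=lambda p: p[1]))

packets = split_across_tunnels(b"hello from i2p", ["peerA", "peerB", "peerC"])
```

An eavesdropper sitting on any single tunnel in this sketch sees only some of the fragments, which is the intuition behind spreading a message over multiple peers.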


I2P implements garlic routing as opposed to the more well-known onion routing. Both garlic routing and onion routing rely on the technique of layered encryption. On the network, traffic flows through a series of peers on the way to its final destination. Messages are encrypted multiple times by the originator using the peers’ public keys. As the message travels the path and is decrypted by the corresponding peer in the sequence, only enough information to pass the message to the next node is exposed, until the message reaches its destination, where the original message and routing instructions are revealed. The initial encrypted message is layered like an onion that has its layers peeled back in transit.

Garlic routing extends this concept by grouping messages together. Multiple messages, referred to as “cloves”, are bound together, each with its own routing instructions. This bundle is then layered just like with onion routing and sent off to peers on the way to the destination. There is no set size for how many messages are included in one bundle, providing another level of complexity in message delivery.
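The layered-encryption idea can be demonstrated with a deliberately simplified sketch: the sender wraps the message once per hop, and each relay strips exactly one layer. The XOR keystream "cipher" below is a stand-in chosen purely for illustration; real I2P uses proper asymmetric and symmetric cryptography, not anything like this.

```python
# Toy demonstration of layered ("onion") encryption. Each hop has its own
# key; the sender encrypts innermost-first so the first hop's layer is
# outermost, and each relay peels one layer. NOT real cryptography.

import hashlib

def keystream_xor(data, key):
    """XOR data with a SHA-256-derived keystream (applying it twice decrypts)."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(message, hop_keys):
    """Sender adds one layer per hop, innermost layer first."""
    for key in reversed(hop_keys):
        message = keystream_xor(message, key)
    return message

hop_keys = [b"hop1-key", b"hop2-key", b"hop3-key"]
packet = wrap(b"meet at the eepsite", hop_keys)
# Each relay peels its own layer in path order:
for key in hop_keys:
    packet = keystream_xor(packet, key)
```

In the real protocols, each layer also carries next-hop routing instructions, and garlic routing would bundle several such messages (cloves) inside one outer layer; this sketch shows only the peeling itself.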


Hundreds of sites and services exist for use within the I2P network, completely operated by the community. For example, Irc2P is the premier IRC network for chat. We see search engines like eepSites & Epsilon, and torrent trackers like PaTracker. Social networks like Id3nt (for microblogging) and Visibility (for publishing) are also abundant. If you can think of a service that can run on the network, it may already be operational.


I2P remains in active development, with many releases per year, and continues to be popular within its community. While I2P is not as popular as other darknets such as Tor, it remains a staple of alternative networks and is often praised for its innovative concepts. Though I2P does not focus on anonymous use of the clearnet, it sees active use for both peer-to-peer communication and file-sharing services.


While many may view I2P as just another darknet, it has many interesting features that aren’t readily available or implemented on other networks. Due to the community and regular updates, there is no reason to think that I2P will be going anywhere anytime soon and will only continue to grow with more awareness and support.

Over time, more and more people have embraced alternative networks, and we are bound to see more usage on the horizon. However, one point I2P maintainers express is that the network’s small size and limited adoption may be helpful at this point in time: I2P is not as prominent in the public’s field of view, possibly protecting it from negative publicity and potential attackers.

Whether or not I2P will keep hold of its core community or expand and change with time is unknown, but for now it proves to be a unique darknet implementation with a lot of activity.