How to Run your Own Independent DNS with Custom TLDs

This article was originally written for and published at N-O-D-E on September 9th, 2015. It has been posted here for safe keeping.



After reading what feels like yet another article about a BitTorrent tracker losing its domain name, I started to think about how trackers could have an easier time keeping a stable domain if they didn’t have to register their domains through conventional methods. Among its many roles, the Internet Corporation for Assigned Names and Numbers (ICANN) controls domain names on the Internet and is well known for its work with the Domain Name System (DNS), specifically the operation of root name servers and governance over top-level domains (TLDs).

If you ever register a domain name, you pick a name you like and head over to an ICANN-approved registrar. Let’s say I want my domain to be “n-o-d-e.net”. I see if I can get a domain with “n-o-d-e” affixed to the TLD “.net”, and after I register it, I’m presented with an easy-to-remember identification string which anyone in the world can use to access my website. After I map my server’s IP address to the domain, I wait for the new entry to propagate, meaning the records for my domain are added or updated in my registrar’s name servers. When someone wants to visit my website, they type “n-o-d-e.net” into the address bar of their browser and hit the enter key. In the background, their configured name server (usually belonging to their ISP) checks to see who controls records for this domain, then works its way through the DNS infrastructure to retrieve the IP address matching this domain name and returns it to them.

It’s a reliable, structured system, but it is still controlled by an organization that has been known to retract domains from whomever it likes. What if you could resolve domains without going through this central system? What if there was a way to keep sites readily accessible without some sort of governing organization being in control?

I’m not the first to think of answers to these questions. Years ago, there was a project called Dot-P2P which aimed to offer “.p2p” TLDs to peer-to-peer websites as a way of protecting them against losing their domains. While the project had notable backing by Peter Sunde of The Pirate Bay, it eventually stagnated and dissolved into obscurity.

The organization that would have handled the “.p2p” domain registrations, OpenNIC, is still active and working on an incredible project itself. OpenNIC believes that DNS should be neutral, free, protective of your privacy, and devoid of government intervention. OpenNIC also offers new custom TLDs such as “.geek” and “.free” which you won’t find offered through ICANN. Anyone can apply for a domain and anyone can visit one of the domains registered through OpenNIC provided they use an OpenNIC DNS server, which is also backwards-compatible with existing ICANN-controlled TLDs. No need to say goodbye to your favorite .com or .net sites.

If you have the technical know-how to run your own root name server and submit a request to OpenNIC’s democratic body, you too could manage your own TLD within their established infrastructure.

Other projects like NameCoin aim to solve the issue of revoked domains by storing domain data for its flagship “.bit” TLD within its blockchain. The potential use cases for NameCoin take a radical shift from simple domain registrations when you consider what developers have already implemented for storing assets like user data in the blockchain alongside domain registrations.

But what if I wanted to run my own TLD without anyone’s involvement or support, and still be completely free of ICANN control? Just how easy is it to run your own TLD on your own root name server and make it accessible to others around the world?


It turns out that running your own DNS server and offering custom TLDs is not as difficult as it first appears. Before I set out to work on this project, I listed some key points that I wanted to make sure I hit:

– Must be able to run my own top level domain
– Must be able to have the root server be accessible by other machines
– Must be backwards compatible with existing DNS

Essentially, I wanted my own TLD so I didn’t conflict with any existing domains, the ability for others to resolve domains using my TLD, and the ability for anyone using my DNS to get to all the other sites they would normally want to visit (like google.com).


For this guide, you are going to need a Linux machine (a virtual machine or Raspberry Pi will work fine). My Linux machine is running Debian. Any Linux distribution should be fine for the job, but if you use something other than Debian you may have to change certain commands. You will also want a secondary machine to test your DNS server. I am using a laptop running Windows 7.

Knowledge of networking and the Linux command line may aid you, but is not necessarily required.


I needed DNS software to run on my Linux machine, and decided upon an old piece of software called BIND. BIND has been under criticism lately because of various vulnerabilities, so make sure that you read up on any issues BIND may be experiencing and understand the precautions as you would with any other software you may want to expose publicly. I am not responsible if you put an insecure piece of software facing the internet and get exploited.

It is important to note that I will be testing everything for this project on my local network. A similar configuration should work perfectly for any internet-facing server.

Other DNS software exists out there, but I chose BIND because it is something of a standard with thousands of servers running it daily in a production environment. Don’t discount other DNS packages! They may be more robust or secure and are definitely something to consider for a similar project.


Step 1. Initial Configuration

Connect your Linux machine to the network and check the network interface status.

ifconfig

The response to the command should look similar to this:

eth0      Link encap:Ethernet  HWaddr f0:0d:de:ad:be:ef
                         inet addr:  Bcast:  Mask:
                         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                         RX packets:8209495 errors:0 dropped:386 overruns:0 frame:0
                         TX packets:9097071 errors:0 dropped:0 overruns:0 carrier:0
                         collisions:0 txqueuelen:1000
                         RX bytes:2124485459 (1.9 GiB)  TX bytes:1695684733 (1.5 GiB)

Make sure your system is up-to-date before we install anything.

sudo apt-get update
sudo apt-get upgrade

Step 2. Installing & Configuring BIND

Change to the root user and install BIND version 9. Then stop the service.

su -
apt-get install bind9
/etc/init.d/bind9 stop

Now that BIND is installed and not running, let’s create a new zone file for our custom TLD. For this example, I will be using “.node” as my TLD but feel free to use any TLD of your choosing.

cd /etc/bind

Paste the following into the file and edit any values you see fit, including adding any domains with corresponding IP addresses. For a full explanation of these options, visit which has a nice write-up on the format of a zone file. I did find that I needed to specify an NS record with a corresponding A record or BIND would not start.

As you see below, a lot of this zone file is boilerplate but I did specify a record for “google” which signifies that “google.node” will point to the IP address “”

When you are done editing, exit nano with CTRL-X and confirm that you want to save the file.

       ; BIND data file for TLD ".node"
       $TTL    604800  ; (1 week)
       @       IN      SOA     node. root.node. (
       2015091220      ; serial (timestamp)
       604800          ; refresh (1 week)
       86400           ; retry (1 day)
       2419200         ; expire (28 days)
       604800 )        ; minimum (1 week)
       @         IN    NS    ns1.node.    ; this is required
       ;@        IN    A         ; unused right now, semicolon comments out the line
       google  IN    A
       ns1       IN    A         ; this is also required
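Adding more names under the TLD is just a matter of more A records in the same zone file. For example (hypothetical hostnames, with example addresses from the 192.0.2.0/24 documentation range):

```
tracker  IN    A    192.0.2.80    ; resolves tracker.node
wiki     IN    A    192.0.2.81    ; resolves wiki.node
```

If you edit the zone again later, remember to bump the serial number in the SOA record so that any secondary servers know the zone has changed.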

Now, we need to edit the default zones configuration file to include our new zone.

nano named.conf.default-zones

At the bottom, paste the following block to add our new zone to the configuration.

zone "node." {
                       type master;
                       file "/etc/bind/";
                       allow-transfer { any; };
                       allow-query { any; };
};

Now find the block in the file similar to the below:

zone "." {
               type hint;
               file "/etc/bind/db.root";
};

Replace this block with the following to make our root server a slave to a master root server (one of OpenNIC’s public DNS servers). By marking it as a master, we can also resolve OpenNIC TLDs as well as any TLDs under the control of ICANN.

zone "." in {
                  type slave;
                  file "/etc/bind/db.root";
                  masters { ; };
                  notify no;
};

After saving the file, we want to generate a new root hints file by querying OpenNIC. This can be done with the dig command.

dig . NS @ > /etc/bind/db.root
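The db.root file generated by dig is just a plain zone-style listing: NS records naming the root servers, plus glue A records giving their addresses. It should look something like this (illustrative hostnames and documentation-range addresses, not real OpenNIC servers):

```
.                   86400   IN  NS  ns1.opennic.glue.
.                   86400   IN  NS  ns2.opennic.glue.
ns1.opennic.glue.   604800  IN  A   192.0.2.53
ns2.opennic.glue.   604800  IN  A   192.0.2.54
```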

Finally, restart BIND.

/etc/init.d/bind9 restart

You should see:

[ ok ] Starting domain name service…: bind9.

Configuration of the server on your Linux machine is now done!

Step 3. Configure Other Machines to Use Your Server

On your Windows machine (on the same local network), visit the Network Connections panel by going to Control Panel -> Network and Internet -> Network Connections.

Right-click on your current network connection and select Properties. On the resulting Network Connection Properties dialog, select Internet Protocol Version 4 (TCP/IPv4) if you are using IPv4 for your local network, or Internet Protocol Version 6 (TCP/IPv6) if you are using IPv6. Since I am using IPv4, I will be selecting the former.

Next, click the Properties button. On the resulting Internet Protocol Properties dialog, select the radio button for “Use the following DNS server addresses.” Enter the IP address of your Linux machine in the Preferred DNS server box ( from my example, but make sure you use the IP address of your Linux machine) and then click the OK button. Back on the Network Connection Properties dialog, click the Close button.

Now, load up a command shell and ping one of our defined domains.

ping google.node

You should see the following:

Pinging google.node [] with 32 bytes of data:
Reply from bytes=32 time=15ms TTL=55
Reply from bytes=32 time=17ms TTL=55
Reply from bytes=32 time=16ms TTL=55

Congratulations, you now have a DNS server which will not only resolve your custom TLD but be accessible to other machines.


This is just a proof of concept, and could easily be expanded upon for future projects. If you are wondering where to go from here, you could easily move on to make your DNS publicly accessible and expand the offerings. Further, you could construct multiple DNS nodes to act as slaves or links to your root server as a method of distributing the network, making it more reliable and geographically accessible.
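A slave node like that needs surprisingly little configuration. On a second BIND server, you would declare your TLD’s zone as a slave and point it at your original machine; here is a sketch of the named.conf block, where 192.0.2.10 stands in as a placeholder for your root server’s IP address and db.node is just the local filename the slave caches the zone in:

```
zone "node." {
               type slave;
               file "/etc/bind/db.node";
               masters { 192.0.2.10; };
};
```

BIND will then transfer the zone automatically whenever the master’s serial number increases.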

While I don’t think many BitTorrent trackers will be quick to adopt a system such as this, it still shows that you can create and resolve custom TLDs which may be useful for constructing alternative networks.




[WANTED] Videostatic (1989)

Through a strange series of links, I have become aware of a 1989 film called Videostatic. Distributed independently for $10/tape, Videostatic looks like some sort of insane hodgepodge of clips and video effects that I am strangely drawn to.


Here is a synopsis written around 1998 from Gareth Branwyn’s Street Tech,

This is a 60-minute audio-visual journey to the edges of alternative art-making and experimental video. The tape is divided up into four sections: “Poems” (intuitive, non-narrative, alogical), “Paintings” (video equivalent to the conventional canvas), “Stories” (Event-based sequences), and “Messages” (rhetorical stances, public “service” announcements). The most impressive pieces here are “Sex with the Dead,” a video memory-jog of our morbidly nostalgic culture by Joe Schwind, “Of Thee I Sing/Sing/Sing,” a musique concrete video by Linda Morgan-Brown and the Tape-Beatles, and “Glossolalia,” (Steve Harp) an absolutely mind-fucking excursion into language, synaesthetic experience, and the structuring of human thought and perception. Surrounded by the curious, the kooky, and the just plain boring (as kook-tech artist Douglass Craft likes to say: “Not every experiment was a success.”) At only $10, this is an insane bargain.

Videostatic compilers John Heck and Lloyd Dunn (of Tape-Beatles’ fame) plan on putting out a series of these tapes. As far as we know, 1989 is the latest release. Write for more info (or to submit material).

911 North Dodge St.
Iowa City, IA 52245
$10/ 60-minute VHS cassette

I’m looking for this in any format, digital or physical.

I’m not quite sure what I’m in store for.


EDIT: Here is some additional information from the PhotoStatic Archive,

VideoStatic 1989 was released in June, 1989. It is a video compilation along much the same lines as the PhonoStatic cassettes. It contains roughly an hour of video and film work by both networking artists and Iowa City locals. It was edited by John Heck and Lloyd Dunn. At the time of this writing (6/90) VideoStatic 1990 was not yet begun, but plans are underway. It will be edited by Linda-Morgan Brown and Lloyd Dunn.


The New Wild West

This article was originally written for and published at N-O-D-E on August 3rd, 2015. It has been posted here for safe keeping.


A few years ago, I was fortunate enough to work professionally with low energy RF devices under a fairly large corporation. We concerned ourselves with wireless mesh networking and were responsible for tying together smart devices, like light bulbs or door locks installed in your home, into an information-driven digital conglomerate. You know those commercials you see on TV where the father remotely unlocks the door for his child or the businesswoman checks to make sure she left the patio light on? That was us. At the touch of a button on your tablet, miles away, you can open the garage door or flip on the air conditioner. These are products that are designed to make life easier.

In research and development, we view things differently than the stressed-out, on-the-go homeowner might. We don’t necessarily think about what the user might want to buy, but ask the question, “when we roll these things out, how will people try to exploit and break them?” In the confines of a tall, mirror-glass office building, my packet sniffer lights up like a Christmas tree. Devices communicate in short bursts through the airwaves, chirping to one another for all to hear. Anyone with the curiosity and some inexpensive hardware can pick up this kind of traffic. Anyone can see what is traveling over the air. Anyone can intervene.



Things weren’t so different a few decades ago. Back in the ‘70s we saw the rise of the phone phreak. Explorers of the telephone system, these pioneers figured out how to expertly maneuver through the lines, routing their own calls and inching further into the realm of technological discovery. We saw innovators like John Draper and even Steve Wozniak & Steve Jobs peeking into the phone system to see how it ticks and what secrets they could unlock. It wasn’t long before people started connecting their personal microcomputers to the phone line, lovingly pre-installed in their houses for voice communication, and explored computerized telephone switches, VAXen, and other obscure machines — not to mention systems controlled by third parties outside the grasp of good old Ma Bell.

This was the wild west, flooded by console cowboys out to make names for themselves. The systems out there were profoundly unprotected. And why not? Only people who knew about these machines were supposed to be accessing them, no use wasting time to think about keeping things secure. Many machines were simply out there for the taking, with nobody even contemplating how bored teenagers or hobbyist engineers might stumble across them and randomly throw commands over the wire. If you had a computer, a modem, and some time on your hands, you could track down and access these mysterious systems. Entire communities were built around sharing information to get into computers that weren’t your own, and more of these unsecured systems popped up every week. It seemed like the possibilities were endless for the types of machines you would be able to connect to and explore.

Today, many will argue that we focus much more on security. We know that there are those who are going to probe our systems and see what’s open, so we put up countermeasures: concrete walls that we think and hope can keep these minds out. But what about newer technologies? How do we handle the cutting edge? The Internet of Things is still a relatively new concept to most people — an infant in the long-running area of computing. We have hundreds if not thousands of networked devices that we blindly incorporate into our own technological ecosystems. We keep these devices in our homes and on our loved ones. There are bound to be vulnerabilities, insecurities, cracks in the armor.


Maybe you don’t like the idea of outlets that know what is plugged into them or refrigerators that know when they’re out of food. Maybe you’re a technological hold-out, a neo-luddite, a cautious person who needs to observe and understand before trusting absolutely. This may feel like the ultimate exercise of security and self-preservation, but how much is happening outside of your control?

When the concept of ubiquitous computing was first developed by Mark Weiser at Xerox PARC in the late ‘80s, few knew just how prominent these concepts would be in 25 years. Ubiquitous computing pioneered the general idea of “computing everywhere” through the possibility of small networked devices distributed through day-to-day life. If you have a cellular telephone, GPS, smart watch, or RFID-tagged badge to get into the office, you’re living in a world where ubiquitous computing thrives.

We’ve seen a shift from the centralized systems like mainframes and minicomputers to these smaller decentralized personal devices. We now have machines, traditional personal computers and smart-phones included, that can act independently of a centralized monolithic engine. These devices are only getting smaller, more inexpensive, and more available to the public. We see hobby applications for moisture sensors and home automation systems using off-the-shelf hardware like Arduinos and Raspberry Pis. The technology we play with is becoming more independent and increasingly capable when it comes to autonomous communication. Little intervention is needed from an operator, if any is needed at all.

For all of the benefits we see from ubiquitous computing, there are negatives. While having a lot of information at our fingertips and an intuitive process to carry out tasks is inviting, the intrusive nature of the technology can leave many slow to adopt. As technology becomes more ubiquitous, it may also become more pervasive. We like the idea of a smart card to get us on the metro, but don’t take so kindly to knowing we are tracked and filed with every swipe. Our habits have become public record. In the current landscape of the “open data” movement, everything from our cell phone usage to parking ticket history can become one entry in a pool of data that anyone can access. We are monitored whether we realize it or not.


We have entered uncharted territory. As more devices make their way to market, the more possibilities there are for people to explore and exploit them. Sure, some vendors take security into consideration, but nobody ever thinks their system is vulnerable until it is broken. Consider common attacks we see today and how they might ultimately evolve to infect other platforms. How interesting would it be if we saw a DDoS attack that originated from malware found on smart dishwashers? We have these devices that we never consider to be a potential threat to us, but they are just as vulnerable as any other entity on the web.

Consider the hobbyists out there working on drones, or even military applications. Can you imagine a drone flying around, delivering malware to other drones? Maybe the future of botnets is an actual network of infected flying robots. It is likely only a matter of time before we have a portfolio of exploits which can hijack these machines and overthrow control.

Many attacks on computer systems in the present day can trace their roots back over decades. We see a lot of the same concepts growing and evolving, changing with the times to become more efficient antagonists. We could eventually see throwbacks to the days of more destructive viruses appear on our modern devices. Instead of popping “arf arf, gotcha!” on the screen and erasing your hard drive, what if we witnessed a Stuxnet-esque exploit that penetrates your washing machine and shrinks your clothes by turning the water temperature up?

I summon images from the first volume of the dystopian Transmetropolitan. Our protagonist Spider Jerusalem returns to his apartment only to find that his household appliance is on drugs. What does this say about our own future? Consider Amazon’s Echo or even Apple’s Siri. Is it only a matter of time before we see modifications and hacks that can cause these machines to feel? Will our computers hallucinate and spout junk? Maybe my coffee maker will only brew half a pot before it decides to no longer be subservient in my morning ritual. This could be a far-off concept, but as we incorporate more smart devices into our lives, we may one day find ourselves incorporated into theirs.


Just as we saw 30 years ago, there is now an explosion of new devices ready to be accessed and analyzed by a ragtag generation of tinkerers and experimenters. If you know where to look, there is fruit ripe for the picking. We’ve come around again to a point where the cowboys make their names, walls are broken down, and information is shared openly between those who are willing to find it. I don’t know what the future holds for us as our lives become more intertwined with technology, but I can only expect that people will continue to innovate and explore the systems that compose the world around them.

And with any hope, they’ll leave my coffee maker alone.



Automating Site Backups with Amazon S3 and PHP

This article was originally written for and published at TechOats on June 24th, 2015. It has been posted here for safe keeping.


I host quite a few websites. Not a lot, but enough that the thought of manually backing them up at any regular interval fills me with dread. If you’re going to do something more than three times, it is worth the effort of scripting it. A while back I got a free trial of Amazon’s Web Services, and decided to give S3 a try. Amazon S3 (standing for Simple Storage Service) allows users to store data, and pay only for the space used as opposed to a flat rate for an arbitrary amount of disk space. S3 is also scalable; you never have to worry about running out of a storage allotment, you get more space automatically.

S3 also has a web services interface, making it an ideal tool for system administrators who want to set it and forget it in an environment they are already comfortable with. As a Linux user, I found there were already a myriad of tools out there for integrating with S3, and I was able to find one to aid me with my simple backup automation.

First things first, I run my web stack on a CentOS installation. Different Linux distributions may have slightly different utilities (such as package managers), so these instructions may differ on your system. If you see anything along the way that isn’t exactly how you have things set up, take the time and research how to adapt the processes I have outlined.

In Amazon S3, before you back up anything, you need to create a bucket. A bucket is simply a container that you use to store data objects within S3. After logging into the Amazon Web Services Console, you can configure it using the S3 panel and create a new bucket using the button provided. Buckets can have different price points, naming conventions, or physical locations around the world. It is best to read the documentation provided through Amazon to figure out what works best for you, and then create your bucket. For our purposes, any bucket you can create is treated the same and shouldn’t cause any problems depending on what configuration you wish to proceed with.

After I created my bucket, I stumbled across a tool called s3cmd which allows me to interface directly with my bucket within S3.

To install s3cmd, it was as easy as bringing up my console and entering:

sudo yum install s3cmd

The application will install, easy as that.

Now, we need a secret key and an access key from AWS. To get this, visit and click the plus icon next to Access Keys (Access Key ID and Secret Access Key). Now, you can click the button that states Create New Access Key to generate your keys. They should display in a pop-up on the page. Leave this pop-up open for the time being.

Back to your console, we need to edit s3cmd’s configuration file using your text editor of choice, located in your user’s home directory:

nano ~/.s3cfg

The file you are editing (.s3cfg) needs both the access key and the secret key from that pop-up you saw earlier on the AWS site. Edit the lines beginning with:

access_key = XXXXXXXXXXXX
secret_key = XXXXXXXXXXXX

Replace each string of “XXXXXXXXXXXX” with your respective access and secret keys from earlier. Then, save the file (CTRL+X in nano, if you are using it).
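The .s3cfg file is a simple INI-style configuration, so a minimal working setup needs little more than those two lines under the [default] section. For reference, it looks like this (the keys shown are Amazon’s standard documentation examples, not real credentials):

```
[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```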

Now we are ready to write the script to do the backups. For the sake of playing with different languages, I chose to write my script using PHP. You could accomplish the same behavior using Python, Bash, Perl, or other languages, though the syntax will differ substantially. First, our script needs a home, so I created a backup directory to house the script and any local backup files I create within my home directory. Then, I changed into that directory and started editing my script using the commands below:

mkdir backup
cd backup/
nano backup.php

Now, we’re going to add some code to our script. I’ll show an example for backing up one site, though you can easily duplicate and modify the code for multiple site backups. Let’s take things a few lines at a time. The first line starts the file; anything after <?php is recognized as PHP code. The second line sets our time zone (use the time zone of your server’s location, mine is shown as an example). It’ll help us in the next few steps.

<?php
date_default_timezone_set('America/New_York');
So now we dump our site’s database by executing the command mysqldump through PHP. If you don’t run MySQL, you’ll have to modify this line to use your database solution. Replace the username, password, and database name on this line as well. This will allow you to successfully backup the database and timestamp it for reference. The following line will archive and compress your database dump using gzip compression. Feel free to use your favorite compression in place of gzip. The last line will delete the original .sql file using PHP’s unlink, since we only need the compressed one.

exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/".date('Y-m-d')."-db.tar.gz ~/backup/".date('Y-m-d').".sql");
unlink(getenv("HOME")."/backup/".date('Y-m-d').".sql");

The next line will archive and gzip your site’s web directory. Make sure you check the directory path for your site, you need to know where the site lives on your server.

exec("tar -zcvf ~/backup/".date('Y-m-d').".tar.gz /var/www/public_html/");

Now, an optional line. I didn’t want to keep any web directory backups older than three months. This will delete all web directory backups older than that. You can also duplicate and modify this line to remove the database archives, but mine don’t take up too much space, so I keep them around for easy access.

@unlink(getenv("HOME")."/backup/".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
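As an alternative to reconstructing old filenames in PHP, the same pruning can be done by file age with find. This is a sketch that assumes your archives live in ~/backup and use the .tar.gz suffix:

```shell
# Remove backup archives not modified in roughly three months (90 days).
# Based on mtime rather than the date in the filename, so it also catches
# leftovers whose names no longer line up with the current date math.
BACKUP_DIR="$HOME/backup"   # adjust to wherever your backups live
mkdir -p "$BACKUP_DIR"
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +90 -delete
```

You could drop this into its own cron job, or call it from the backup script with exec().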

Now the fun part. These commands will push the backups of your database and web directory to your S3 bucket. Be sure to replace U62 with your bucket name.

exec("s3cmd -v put ~/backup/".date('Y-m-d')."-db.tar.gz s3://U62");
exec("s3cmd -v put ~/backup/".date('Y-m-d').".tar.gz s3://U62");

Finally, end the file, closing that initial <?php tag.

?>

Here it is all put together (in only ten lines!):

<?php
date_default_timezone_set('America/New_York');
exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/".date('Y-m-d')."-db.tar.gz ~/backup/".date('Y-m-d').".sql");
unlink(getenv("HOME")."/backup/".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/".date('Y-m-d').".tar.gz /var/www/public_html/");
@unlink(getenv("HOME")."/backup/".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
exec("s3cmd -v put ~/backup/".date('Y-m-d')."-db.tar.gz s3://U62");
exec("s3cmd -v put ~/backup/".date('Y-m-d').".tar.gz s3://U62");
?>

Okay, now our script is finalized. You should now save it and run it with the command below in your console to test it out!

php backup.php

Provided you edited all the paths and values properly, your script should push the two files to S3! Give yourself a pat on the back, but don’t celebrate just yet. We don’t want to have to run this on demand every time we want a backup. Luckily we can automate the backup process. Back in your console, run the following command:

crontab -e

This will load up your crontab, allowing you to add jobs to Cron, a time-based scheduler. The syntax of Cron commands is out of the scope of this article, but the information is abundant online. Add the line below to your crontab (taking care to edit the path to your script) and save it so the backup will run on the first of every month.

0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php
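For reference, those five leading fields are minute, hour, day of month, month, and day of week. If monthly backups feel too sparse, the same job can be scheduled weekly instead; for example, to run at midnight every Sunday:

```
0 0 * * 0 /usr/bin/php /home/famicoman/backup/backup.php
```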



[WANTED] How to Build A Red Box VHS

I was looking though old issues of Blacklisted! 411 and found an advertisement in a 1995 issue for a 60 minute VHS tape about how to build a red box using a Radio Shack pocket tone dialer. For those who don’t know, red boxes were popular in the ’90s and used by phreakers, scammers, and those who just wanted free payphone calls. By modifying pocket dialers (or even just recording sounds that coins made as they were dropped into a phone), anyone could make a red box which would mimic the tones produced when coins were inserted into a payphone. This means that anywhere you take your red box, you can play back the tones and get free phone calls.

Anyway, this video was made and sold in 1995 by East America Company in Englewood, New Jersey. It retailed for $39 (plus $5 shipping) and I would love a copy. See the image below for a review of the tape and the original advertisement.



[WANTED] Let’s Find All The TechTV VHS Tapes

TechTV, the 24-hour technology-oriented cable channel, was a never-ending source of inspiration to me when I was growing up. Back then, TechTV was only available in my area on digital cable, a newfangled platform that people didn’t want to pay the extra money for. By the time I had ditched analog cable, TechTV was long gone, absorbed into G4, with any programming carried over reduced to a shell of its former self.

Back in the heyday (2001-2002 for this example), TechTV decided to release direct-to-video VHS tapes of various one-off programs and specials designed as something of an informational/instructional reference. I remember these being advertised, and the concept excited me as it was a way to get TechTV content without needing the cable service. That said, I never got to view a single one of these tapes. Unfortunately, they were priced just a little too high. It was a big financial investment for an hour of content.

Over a decade later, a few of these tapes have made their way online. From my research, I can find that TechTV produced six (maybe more?) VHS tapes, three of which some kind souls have digitized and put on YouTube or The Internet Archive. But that means there are three other tapes out there which I, or anyone else, will have a hard time getting at unless we want to spend a bunch of money working our way through Amazon resellers, which is not something I want to do. To add to this, the digitized videos available online are not the best quality. Again, thanks to the kind souls who went through the trouble, but I would really like to see the maximum quality squeezed out of these bad boys. Over the years, there have been many efforts to find old TechTV recordings from over-the-air broadcasts, but these tapes remain mostly lost.

Let’s try to fix that.

I’ve created a project page to track as much information as I can on these tapes and am looking for any people who can create digital rips themselves or send these tapes my way to rip. I can’t offer any money to buy them, but you’ll be doing a good service getting these videos out there. Once again, I would like to find fresh sources for each of the tapes listed if possible.

The titles I can find are as follows; let me know if I missed any:

  • TechTV’s Digital Audio for the Desktop (2001)
  • TechTV’s How to Build Your Own PC (2001)
  • TechTV’s Digital Video for the Desktop (2002)
  • TechTV Solves Your Computer Problems (2002)
  • TechTV’s How to Build a Website (2002)

    Programs from High School

    I’ve taken some time over the past two days to dig through some of my old flash drives for old programs I wrote in high school. I found most of my flash drives, and while a few had been re-purposed over the years, I ended up finding a lot of content I created over the course of my pre-college schooling.

    I didn’t find everything. When I started taking electives in high school, I first enrolled in a web design class. This was basic Photoshop, HTML, and Dreamweaver development. I can’t really find any of this stuff anymore. I also took a CAD class where I used AutoCad, Inventor, and Revit. These files look to be gone as well. More notably, I took a series of programming-heavy classes: Introduction to Programming (C++), Advanced Programming (C++), AP Computer Science (Java), Advanced Programming (Java), and Introduction to Video Game Design (Games Factory and DarkBasic).

    Even when I took these classes a little over five years ago, the curriculum was outdated. DarkBasic was never really a mainstream programming language, and Games Factory was more event mapping than it ever was programming. We learned C++ using a 1996 version of Visual Studio and only learned the concepts of object-oriented design later in the advanced class. The Java curriculum was brand new to the school when I took it, but still felt outdated.

    That said, I learned a hell of a lot here that would lay a foundation for my future education and career path.

    I took the time to copy these files from my flash drives and push them to a GitHub group I created for people who took these same classes as I did. The hope here is that the people I had class with will ultimately share and submit their creations, and we can browse these early samples of our programming. Unfortunately, I couldn’t find any of my Java programs yet, but I might be able to come up with more places to look.

    Just digging through my source code, I’ve already found a lot of weird errors and places where dozens of lines of code could be replaced with one or two.

    It’s interesting to look back on these programs, and maybe they’ll give you a laugh if you care to check them out. I know they gave me one.


    ChannelEM and TechTat are now Archived

    Since both TechTat and ChannelEM are essentially no longer updated, I didn’t want to have to worry about maintaining them on the server. I’ve backed up their installations and created static HTML versions of each website, which are now up at the URLs below.

    The static sites are not perfect, and may have some missing thumbnails, background images, or pages that were created on-the-fly by applications. Regardless, all the pages and their information should be intact. The original domains should work until I let them expire, but for now they will offer up a redirect to the new static sites. Neither TechTat nor ChannelEM ever got the traction I hoped they would, though they proved to be interesting projects when they were active. ChannelEM in particular, when it worked, worked very well, and I would love to apply the online-television-station approach developed there to another project down the road.

    Until then, I’ll narrow my focus a bit, and continue my “Spring Cleaning” as best as I can.


    Philosophy of Data Organization

    I would be a liar if I said I was an overly organized person. I believe that like things should be grouped together and that everything should have its place, but I operate at a certain level of acceptable chaos. Nothing is organized completely, and I don’t really believe complete organization is possible on a large enough scale. Complete organization is likely to cause insanity.

    When I first started accumulating data, I quickly outgrew my laptop’s 80 gigabyte hard drive. From there I went to a 150GB drive, then a pair of 320GB drives, then a pair of 1TB drives, then a pair of 2TB drives, and from there I keep amassing even more 2TB drives. As I get new drives, I like to rotate the data off of the older ones and on to the newer ones. These old drives become workhorses for torrents and rendering off video, while new drives are used for duplicating and storing data that I really want to keep around for a long, long time. The system is ad-hoc without any calculated sense of foresight. If I had the money and planning, I’d build a giant NAS for my needs. For now, whenever I need more space, I just buy another pair of drives and fill them up before repeating the cycle. This doesn’t scale very well, and I ultimately have around 25TB of storage scattered across various drives.

    A few months ago, I was fortunate enough to take a class on the philosophy of mind and knowledge organization. A mouthful of a topic, I know, but it is simpler than it seems. The class revolved around one main concept: classification. We started with concepts on how to organize knowledge through the study of knowledge itself (epistemology), as put forth by the Greek philosophers Socrates, Plato, and Aristotle. Notably, university subjects were broken into the trivium (grammar, logic, and rhetoric) and later expanded with the quadrivium (arithmetic, geometry, music, and astronomy) as outlined by Plato. These subjects categorized the liberal arts, based on thinking, as opposed to the practical arts, based on doing. These classifications were standard in educational systems for some time.

    The Trivium

    A representation of the Trivium

    Aristotle later reclassified knowledge by breaking everything into three categories: theoretical, practical, and productive. Each of these is broken down further. Aristotle breaks “theoretical” into metaphysics, mathematics, and physics. “Productive” is broken into crafts and fine arts. “Practical” is broken down into ethics, economics, and politics. From here, we have a more modern approach to knowledge organization. We see distinctive lines between subjects which are further branched into more specific subjects. We also see a logical progression from theoretical to practical, and finally to productive to ultimately create a product.

    An outline of Aristotle's classification


    More modern classifications pull directly from these Greek outlines. We can observe works by Hugo St. Victor and St. Bonaventure which mash various aspects of these classifications together to create hybrid classifications which may or may not be more successful in breaking down aspects of the world.

    An interpretation of St. Bonaventure's organization


    What does this have to do with data? Data, much like knowledge, can be organized using the same principles we have observed here. Remember, the key theme here is classification. We are not simply concerned with how to break up knowledge, but anything and everything that can be classified.

    Think of all the possible ways you could organize films or musical artists, or even genres of music. It can be a daunting thing to even imagine. As an overarching project throughout the course, we developed classifications of our own choice. I chose to focus on videotape formats, and quickly created my own classification based on physical properties. I broke down tapes into open/closed reel, tape widths, and format families. While it might not be the best classification, I tried to approach the problem in a way that was open to using empirical truth (conformity through observations) in a way which would allow a newcomer to quickly traverse the classification branches to discover what format he is holding in his hands.

    An early version of my videotape classification


    Classifications like this are not uncommon. Apart from the classifications of knowledge put forward here already, classification was used by Diderot and d’Alembert to create their Encyclopédie, first published in 1751. The Encyclopédie uses a custom classification of knowledge as its table of contents. While generalized to an extent (it does fit on one page), it could be expanded upon infinitely.

    Encyclopédie contents


    A contemporary way to organize knowledge arrives in a familiar area: the Dewey Decimal System. Though Dewey’s system has been adopted globally as the de facto method for organizing print media, can we apply this same system to our growing “library” of data? The short answer is no, not without some modification, though modifications have plagued Dewey’s system since its inception.

    To understand how we can best organize our data, we must first understand the general concepts of the Dewey Decimal System. Within the system, different categories are defined by different numbers. 100 may be reserved for philosophy and psychology while 300 may be used for social sciences, and 800 for literature. The numbering system here is intentional: lower numbers are thought to be the most important subjects while higher numbers are less important. These numbers are broken down further. 100 might be broken into 110 for metaphysics, 120 for epistemology, etc., with each of these being broken down again for more specific subjects.
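The breakdown above can be sketched as a tiny lookup table. This is only an illustrative toy using the handful of numbers named in the text, not real Dewey data, and the `lookup` helper is hypothetical:

```python
# A toy sketch of Dewey-style call numbers: the first digit picks one
# of ten classes, the second one of ten divisions, and so on. The
# subject names mirror the examples given in the text above.
classes = {
    "000": "Computer science (squeezed into a once-unused slot)",
    "100": "Philosophy and psychology",
    "110": "Metaphysics",
    "120": "Epistemology",
    "300": "Social sciences",
    "800": "Literature",
}

def lookup(number):
    """Resolve a call number to the most specific known subject by
    zeroing out trailing digits until a prefix matches."""
    for prefix_len in (3, 2, 1):
        key = number[:prefix_len].ljust(3, "0")
        if key in classes:
            return classes[key]
    return "Unknown"
```

A number like "123" falls back from section to division, landing in 120 (epistemology); one with no known ancestor falls through to "Unknown", which is exactly the problem the article raises with emerging subjects.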

    This is just another classification, but it has its faults. The size of a section is finite, as the system is broken up into 10 classes which are then broken down into 10 divisions, and finally 10 sections (hence decimal). However, the system never really accounted for the growth of new and expanding topics. As subjects emerge, like computer science, which Dewey never could have imagined, we throw works like these into unused spaces. Computer science in particular is infamous as it now occupies location 000, which in the system would make it seem more important than any other subject. Additionally, we see a loss in physical ties to the system, as libraries are intended to be organized along with it: lower numbers on the first floor, higher numbers on the higher floors. Dewey’s system is constantly being modified as new works emerge, and consistency between different libraries is hard to come by, since whether or not to implement a change at any given time can be up to a single librarian.

    A simplified example of Dewey's system


    While a modified version of Dewey’s system might make sense for data (as well as being somewhat familiar), we have to consider another problem which plagues the classification: titles that can occupy more than one section. Suppose that I have a book about WWII music. Do I put this book in music? Does it go in history? What other sections could it fall into? We have few provisions for this.

    Data is no different in this sense. Whether I have a digital copy of a book as would be found in Dewey’s system or a podcast or anything else, there is always the potential for multiple areas a work can fall into. If you visit the “wrong” section where you might expect an object to be, you don’t have any indication that it would be somewhere else just as suitable.

    What are we to do in this case? While I like to break my data down into types of media (video, audio, print, etc.), I find the lower levels get more fuzzy. Let us consider a subject which I am revisiting in my own projects: hacker/cyberpunk magazines. Even if we only focus on print magazines, we still have problems. We can see the concept of “hacking” coming from more traditional clever-programming origins (such as in Dr. Dobb’s Journal), or evolved from phreaker culture (such as in TEL), or maybe from general yippie counterculture (such as in YIPL). Additionally, we can see that some of these magazines feature a large number of overlapping collaborators, which makes them feel somewhat similar. We also may observe that magazines produced in places like San Francisco or Austin have a similar feel but might be much closer to other works that have no physical or personnel ties. Further, what about publications that started as print and then moved to online releases? More and more possible subgroups emerge.

    At this point, we might consider work put forth by Wittgenstein based on the “family resemblance theory.” The basic idea behind this theory is that while many members of a family might have features that make them resemble the family, no one feature is shown in all the members who bear the family resemblance. Expanded, we can say that while we all know what something means, it can’t always be clearly defined and its boundaries cannot always be sharply drawn. Rosch, a psychology professor, took Wittgenstein’s concept further and hypothesized that “the task of categorization systems is to provide maximum information with the least cognitive effort.” She believed that basic-level objects should have “as many properties as possible predictable from knowing any one property.” This means that if something is part of a category, you could easily know much more about it (if you know that 2600 is a hacking magazine, you’ll know there are likely articles in it about computers). However, superordinate categories (like furniture or vehicle) wouldn’t share many attributes with each other. Rosch concluded that most categories do not have clear-cut boundaries and are difficult to classify. This illustrates the idea that “messiness begins within.” We get a contrast from Aristotelian “orderliness” because messiness shows that we can’t put things in their place; those places are just where things “sort of” belong. Everything belongs in more than one place, even if it is just a little bit. We see that order can be restrictive.

    This raises the importance of metadata: data about data. While my media might be organized in a classification that doesn’t allow for “double dipping” (going against concepts by Rosch), we can utilize the different properties that pertain to each individual object. Consider many popular torrent sites which utilize crowd-sourced tagging systems. Members can add tags to individual pieces of media (which can then be voted on as a way to weed out improper tags), which allow the media to show up in searches for each tag. We see a similar phenomenon in websites such as YouTube, which allows tagging of videos for content, though not in a crowd-sourced sense, or the Internet Archive, which supports general subject tags as well as more specific metadata fields.

    Using this metadata method and my previous example, it’s easy to find magazines by location, authors, subject, contents, age, and a long list of other attributes. We can apply this to objects that aren’t the same format; there are examples of video, audio, and print that pertain to the same subjects, authors, etc. This isn’t an impossible implementation. Considering further the Internet Archive, we see thousands upon thousands of metadata-rich items which are easily searchable and identifiable. However, the Internet Archive also suffers from a lackluster interface. It might be easy to find issues of Byte magazine, but it is a lot more difficult to figure out what issues we are missing or see an organizational flow more akin to a wiki system (though both systems lend themselves well to items being in more than one place). A hybridized system like this would be an option worth exploring, but I haven’t seen an ideal execution of it yet.
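As a rough sketch of this kind of tag-based lookup, here is a minimal inverted index in Python. The class name, item names, and tags are all hypothetical, invented for illustration; the point is that one item is reachable from every attribute it carries, instead of living on a single "shelf":

```python
from collections import defaultdict

class TagIndex:
    """A minimal tag-based media index: items carry any number of
    tags, and lookups go by tag rather than one fixed location."""

    def __init__(self):
        self.tags = defaultdict(set)   # tag -> set of item names
        self.items = defaultdict(set)  # item name -> set of tags

    def add(self, item, *tags):
        for tag in tags:
            self.tags[tag].add(item)
            self.items[item].add(tag)

    def find(self, *tags):
        """Items carrying every one of the given tags."""
        sets = [self.tags[t] for t in tags]
        return set.intersection(*sets) if sets else set()

index = TagIndex()
index.add("2600 #1", "hacking", "magazine", "print")
index.add("TEL vol. 1", "phreaking", "magazine", "print")
index.add("Off the Hook ep. 1", "hacking", "radio", "audio")

# The same item shows up under any of its attributes.
print(sorted(index.find("magazine", "print")))  # both print magazines
print(sorted(index.find("hacking")))            # crosses media formats
```

The second lookup is the interesting one: it pulls together a print magazine and an audio recording under one subject, which is exactly what a single-shelf classification cannot do.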

    While this concept of a metadata-based organizational system isn’t a fool-proof solution, it can certainly be seen as a step in the right direction. We must also consider the credibility of those who decide to make contributions to metadata, especially on a large-scale public system. Consider the chaos and political makeup of how Wikipedia governs editing and then you’ll start to get an idea. While I’d like to implement a tagging system for my own personal media library (with my own tagging at first and the possibility of expansion), I am limited by my current conglomeration of hard drives scattered to different parts of the house, usually powered off. My next storage solution will take these ideas into planning and execution, making my data much easier to traverse. I will however have limitations as I won’t have many people perpetually reviewing and tagging my data with relevant information.

    That said, the idea of being able to make my data more accessible is an exciting one, and increases portability of the data as a whole if I ever need to pass it on to others. As my tastes evolve and grow, so will the collection of data I hold.

    With any hope, my organized chaos will ultimately become a little more organized and a little less chaotic.

    With any luck, you’ll be able to browse it one day.


    Archiving Radio

    A few months ago, I got involved with my university’s radio station. It happened unexpectedly. I was out with some friends in the city and two of us made our way back to the school campus. My friend, a member of the station, had to run inside to check something out and ended up calling me in because there was some older gear that he wanted me to take a look at. I was walked past walls of posters and sticker-covered doors to the engineering closet. The small space was half the size of an average bedroom, but was packed to the brim with decades of electronics. Needless to say, I was instantly excited to be there and started digging through components and old part boxes. A few weeks later, after emailing back and forth with a few people, I became something of an adjunct member with a focus in engineering. This meant anything from fixing the doorbell to troubleshooting server issues, the modified light fixtures, the broken Ms. Pac-Man arcade machine, or a loose tone-arm on a turntable. There are tons of opportunities for something to do, all of which I have found enjoyment in so far.

    Let’s take a step back. This radio station isn’t a new fixture by any means. I feel that when people think of college radio these days they imagine a mostly empty room with a sound board and a computer. Young DJs come in, hook up their iPods, and go to work.

    This station is a different animal. Being over 50 years old means a lot has come and gone in the way of popular culture as well as technology. When I first came in and saw the record library contained (at a rough estimate) over 40,000 vinyl records, I knew I was in the right place. I began to explore. I helped clean out the engineering room, looked through the production studio, and learned the basics of how the station operated. After a few weeks, I learned that the station aimed to put out a compilation on cassette tape for the holiday season. One of the first tasks would be to get some 50 station identifications off of a minidisc to use between songs. Up to the task, I brought in my portable player and, with the help of a male/male 3.5mm stereo cable and another member’s laptop, got all the identifications recorded. While the station borrowed a cassette duplicator for the compilation, it would still take a long time to produce all the copies, so I brought in a few decks of my own and tested some of the older decks situated around the station. It was my first time doing any sort of mass duplication, but I quickly fell into a groove of copying, sound checking, head and roller cleaning, and packaging. It felt good contributing to the project, knowing I had something of a skill with, and a large supply of, old hardware.

    A little later, I took notice of several dust-coated reels in the station’s master control room containing old syndicated current-event shows from the ’80s and ’90s. I took these home to see if I could transfer them over to digital. I ran into some problems early on with getting my hardware to simply work. I have, at the time of writing, six reel-to-reel decks, all of which have some little quirk or issue except one off-brand model from Germany. I plugged it in, wired it to my computer via an RCA to 3.5mm stereo cable, and hit record in Audacity. The end result was a nice-quality recording.

    Stacks of incoming reels


    I decided to go a little further and use this to start something of an archive for the radio station. I saved the files as 16-bit signed PCM WAV, and also encoded a 192kbps MP3 file for ease of use, and then scanned the reel (or the box it was in) for information on the recording, paying attention to any additional paper inserts. I scanned these as 600dpi TIFF files which I then compressed down to JPG (again, for ease of use). Any interesting info from the label or technical abnormalities was placed in the file names, along with as much relevant information as I could find. I also made sure to stick this information in the correct places in the ID3 tags. Lastly, I threw these all into a directory on a server I rent so anyone with the address can access them. I also started asking for donations of recordings, of which I received a few, and put them up as well.

    What's up next?


    After I transferred all the reels I could find (about 10), I went on the hunt for more. Until this point, I had broadcast-quality 7-inch reels that ran at 7.5ips (inches per second) with a 1/4-inch tape width. A lot of higher quality recordings are done on 10.5-inch reels that run at 15ips, though sometimes 7-inch reels are used for 15ips recordings. Reel-to-reel tape can also be recorded at other speeds (such as 30ips or 3.75ips), but I haven’t come across any of these besides recordings I have made. Now, while my decks can fit 7-inch reels okay, they can’t handle any 10.5-inch reels without special adapters (called NAB hubs) to mount them on the spindles, which I currently don’t have. Additionally, there are other tape widths such as 1/2-inch which I don’t have any equipment to play. The last problem I encounter is that I don’t have any machines that can run at 15ips.

    Next up...

    In progress.

    Doing more exploratory work, I got my hands on several more 7-inch reels and also saw some 10.5-inch reels housing tape of various widths. Some of the 7-inch reels I found run at 15ips, and while I don’t have a machine that does this natively, I’ve found great success in recording at 7.5ips and speeding up the track by 100% so the resulting audio plays twice as fast. As for the larger reels, I may be able to find some newly-produced NAB hubs for cheap, but they come with usage complaints. While original hubs would be better to use, they come with a steep price tag. There is more here to consider than might be thought at first. Additionally, there is a reel-to-reel unit at the station that, though unused for years, is reported to work and to be able to handle larger reels and higher speeds. However, it is also missing a hub, and the one it has doesn’t seem to come close to fitting a 10.5-inch reel properly. At the moment, there doesn’t look to be anything I can use to play 1/2-inch width tape, but I’m always on the hunt for more hardware.
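The speed-correction trick described above (capture a 15ips reel at 7.5ips, then double the playback speed) can be sketched with Python’s standard `wave` module. Doubling the sample rate in the header makes the file play twice as fast without touching the samples themselves; the function name and file paths are hypothetical, and this is only a sketch of the idea, not the tool I actually use (Audacity handles it too):

```python
import wave

def double_speed(src_path, dst_path):
    """Write a copy of a WAV file that plays back twice as fast.

    A 15ips reel captured at 7.5ips comes out at half speed, so
    doubling the sample rate on playback restores the original
    pitch and duration.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    with wave.open(dst_path, "wb") as dst:
        # Same samples, double the declared rate: half the duration.
        dst.setparams(params._replace(framerate=params.framerate * 2))
        dst.writeframes(frames)
```

The frame data is copied untouched; only the header’s sample rate changes, which is why this is lossless compared to resampling.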

    There are literally hundreds of reels at the station that haven’t been touched in years and need to be gone through; it’s a long process, but it yields rewarding results. I’ve found strange ephemera: people messing with the recorder, old advertisements, and forgotten talk shows. I’ve also found rare recordings featuring interviews with bands as well as them performing. This is stuff that likely hasn’t seen any life beyond these reels tucked away in storage. So back to transferring I go, never knowing what I will find along the way.

    Digitizing in process


    From this transferring process I learned a lot. Old tape can be gummy and gunk up the deck’s heads (along with other components in the path). While it is recommended to “bake” (like you would a cake in an oven) tape that may be gummy, it can be difficult to determine when this is needed until you see the tape jamming in the machine. Baking a tape also requires that it is on a metal reel, while most I have encountered are on plastic. Additionally, not all tape has been stored properly. While I’ve been lucky not to find anything too brittle, I’ve seen some tape separating in chunks from its backing or chewed up to the point that it doesn’t even look like tape anymore. More interesting are some of the haphazard splices, which may riddle a tape in more than one inopportune spot or be made with non-standard types of tape. I’ve also noticed imperfections in recording, whether that means the levels are far too low, there are signs of a ground loop, or the tape speed changes midway through the recording. For some reels there is also a complete lack of documentation. I have no idea what I’m listening to.

    I try to remedy these problems as best I can. I clean my deck regularly: heads, rollers, and feed guides. I also do my best to document what I’ve recorded. I listen to see if I can determine what the audio is, determine the proper tape speed, figure out if the recording is half track (single direction, “Side A” only) or quarter track (both directions, “Side A + B”), and determine if the recording is in mono or stereo. Each tape that goes through me is labelled with said information and any information about defects in the recording that I couldn’t help mitigate.

    After dealing with a bad splice that came undone, I’ve also gone ahead and purchased a tape splicer/trimmer to hopefully help out if this is to happen again. As for additional hardware, I’m always on the lookout for better equipment with more features or capabilities. I don’t know what I’ll ultimately get my hands on, but I know that anything I happen to obtain will lend a hand in this archiving adventure and help preserve some long-forgotten recordings.

    After doing this enough times, I’ve started to nail down a workflow. I put all the tapes in a pile for intake, and choose one to transfer. I then feed it into the machine, hit record in Audacity, and hit play on the deck. After recording, I trim any lead-in silence, speed correct, and save my audio files. At this point, I also play the tape in the other direction to wind it back to its original reel and see if there are any other tracks on it. From here, I label my files, and go on to make scans of the reels or boxes before then loading these images into Photoshop for cropping and JPG exporting.
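The lead-in silence trim from the workflow above could be roughed out like this. It is a crude sketch assuming 16-bit mono WAV captures, with a hypothetical function name and an arbitrary amplitude threshold; real transfers deserve a gentler, windowed detector so a single pop doesn’t count as the start of the program:

```python
import struct
import wave

def trim_leading_silence(src_path, dst_path, threshold=100):
    """Drop everything before the first sample whose amplitude rises
    above a small threshold. If nothing exceeds the threshold, the
    file is copied unchanged."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        raw = src.readframes(src.getnframes())
    # Unpack the raw bytes as little-endian signed 16-bit samples.
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    start = 0
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            start = i
            break
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(raw[start * 2:])  # 2 bytes per 16-bit sample
```

In practice I do this step by eye in Audacity, but the principle is the same: find the first real signal and cut everything before it.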

    All done.


    It is a lot of work, but I can easily crank out a few reels a day by setting one going and getting on with my normal activities, coming back periodically to check progress. I have many more reels to sift through, but I hope one day to get everything transferred over – or at least as much as I can. Along the way, I’ve come across other physical media to archive. There are zines, cassette tapes, and even 4-track carts that are also sitting away in a corner, being saved for a rainy day.

    I’ll keep archiving and uncovering these long forgotten recordings. All I can hope for is that some time, somewhere, someone finds these recordings just as interesting as I do.

    Even if nobody does, I sure have learned a lot. With any luck, I’ll refine my skills and build something truly awesome in the process.