Philosophy of Data Organization


I would be a liar if I said I was an overly organized person. I believe that like things should be grouped together and that everything should have its place, but I operate at a level of acceptable chaos. Nothing I own is organized completely, and I don’t really believe complete organization is possible at a large enough scale. Complete organization is likely to cause insanity.

When I first started accumulating data, I quickly outgrew my laptop’s 80 gigabyte hard drive. From there I went to a 150GB drive, then a pair of 320GB drives, then a pair of 1TB drives, then a pair of 2TB drives, and from there I keep amassing even more 2TB drives. As I get new drives, I like to rotate the data off of the older ones and onto the newer ones. These old drives become workhorses for torrents and rendering out video, while new drives are used for duplicating and storing data that I really want to keep around for a long, long time. The system is ad hoc, without any calculated sense of foresight. If I had the money and planning, I’d build a giant NAS for my needs. For now, whenever I need more space, I just buy another pair of drives and fill them up before repeating the cycle. This doesn’t scale very well, and I ultimately have around 25TB of storage scattered across various drives.

A few months ago, I was fortunate enough to take a class on the philosophy of mind and knowledge organization. A mouthful of a topic, I know, but it is simpler than it seems. The class revolved around one main concept: classification. We started with ideas on how to organize knowledge via the study of knowledge itself (epistemology), beginning with concepts put forth by Socrates, Plato, and Aristotle. Notably, university subjects were broken into the trivium (grammar, logic, and rhetoric) and later expanded with the quadrivium (arithmetic, geometry, music, and astronomy) as outlined by Plato. These subjects categorized the liberal arts, based on thinking, as opposed to the practical arts, based on doing. These classifications were standard in educational systems for some time.

A representation of the Trivium

Aristotle later reclassified knowledge by breaking everything into three categories: theoretical, practical, and productive. These are broken down further: “theoretical” into metaphysics, mathematics, and physics; “productive” into crafts and fine arts; and “practical” into ethics, economics, and politics. From here, we have a more modern approach to knowledge organization. We see distinct lines between subjects, which are further branched into more specific subjects. We also see a logical progression from theoretical to practical, and finally to productive, ultimately creating a product.

An outline of Aristotle’s classification

More modern classifications pull directly from these Greek outlines. We can observe works by Hugh of St. Victor and St. Bonaventure, which mash various aspects of these classifications together to create hybrids that may or may not be more successful in breaking down aspects of the world.

An interpretation of St. Bonaventure’s organization

What does this have to do with data? Data, much like knowledge, can be organized using the same principles we have observed here. Remember, the key theme here is classification. We are not simply concerned with how to break up knowledge, but anything and everything that can be classified.

Think of all the possible ways you could organize films, musical artists, or even genres of music. It can be a daunting thing to even imagine. As an overarching project throughout the course, we each developed a classification of our own choosing. I chose to focus on videotape formats, and quickly created my own classification based on physical properties. I broke tapes down by open/closed reel, tape width, and format family. While it might not be the best classification, I tried to approach the problem in a way that was open to empirical truth (confirmation through observation), so that a newcomer could quickly traverse the classification’s branches and discover what format they are holding in their hands.
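
To give a sense of how a physical-property classification like this can be traversed, here is a toy sketch of a couple of its branches in Python. The structure and format placements are illustrative, reconstructed from memory rather than copied from my actual chart.

```python
# A toy sketch of a physical-property classification for videotape.
# The branches and format placements below are illustrative, not my full chart.
VIDEOTAPE = {
    "open reel": {
        '2"': ["2-inch Quadruplex"],
        '1"': ["1-inch Type C"],
        '1/2"': ["EIAJ-1"],
    },
    "closed reel (cassette)": {
        '3/4"': ["U-matic"],
        '1/2"': ["VHS", "Betamax"],
        "8mm": ["Video8", "Hi8"],
    },
}

def candidate_formats(reel_type, width):
    """Walk the tree by observable properties and return candidate formats."""
    return VIDEOTAPE.get(reel_type, {}).get(width, [])

# A newcomer holding a cassette of half-inch tape narrows it down to:
print(candidate_formats("closed reel (cassette)", '1/2"'))  # ['VHS', 'Betamax']
```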

An early version of my videotape classification

Classifications like this are not uncommon. Apart from the classifications of knowledge already put forward here, classification was used by Diderot and d’Alembert to structure their Encyclopédie, one of the first modern encyclopaedias, first published in 1751. The Encyclopédie uses a custom classification of knowledge as its table of contents. While generalized to an extent (it does fit on one page), it could be expanded upon infinitely.

Encyclopédie contents

A contemporary way to organize knowledge arrives in a familiar area: the Dewey Decimal System. Though Dewey’s system has been adopted globally as the de facto method for organizing print media, can we apply this same system to our growing “library” of data? The short answer is no, not without some modification, though modifications have plagued Dewey’s system since its inception.

To understand how we can best organize our data, we must first understand the general concepts of the Dewey Decimal System. Within the system, different categories are assigned different numbers: 100 may be reserved for philosophy and psychology while 300 may be used for social sciences, and 800 for literature. The numbering here is intentional; lower numbers are thought to be the most important subjects while higher numbers are less important. These numbers are broken down further: 100 might be broken into 110 for metaphysics, 120 for epistemology, etc., with each of these being broken down again into more specific subjects.
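
To make that hierarchy concrete, here is a minimal sketch of how a Dewey-style decimal scheme could be modeled; the handful of entries are just the examples used in this post, not a complete table.

```python
# A minimal sketch of a Dewey-style decimal hierarchy.
# Only the few example entries mentioned in this post are filled in.
DEWEY = {
    "000": "Computer science, information & general works",
    "100": "Philosophy & psychology",
    "110": "Metaphysics",
    "120": "Epistemology",
    "300": "Social sciences",
    "800": "Literature",
}

def ancestors(section):
    """Return the class and division a three-digit section falls under.

    For example, "126" belongs to class "100" and division "120",
    mirroring the class -> division -> section breakdown.
    """
    return [section[0] + "00", section[:2] + "0"]

for parent in ancestors("126"):
    print(parent, "->", DEWEY.get(parent, "(unassigned)"))
```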

This is just another classification, but it has its faults. The size of a section is finite, as the system is broken up into 10 classes which are then broken down into 10 divisions, and finally 10 sections (hence “decimal”). However, the system never really accounted for the growth of new and expanding topics. As subjects like computer science emerge, subjects Dewey never could have imagined, we throw works like these into unused spaces. Computer science in particular is infamous, as it now occupies location 000, which in the system would make it seem more important than any other subject. Additionally, we see a loss in the physical ties to the system, as libraries are intended to be organized along with it: lower numbers on the first floor, higher numbers on the higher floors. Dewey’s system is constantly being modified as new works emerge, and consistency between different libraries can come down to a single librarian choosing whether or not to implement a change at any given time.

A simplified example of Dewey’s system

While a modified version of Dewey’s system might make sense for data (as well as being somewhat familiar), we have to consider another problem which plagues the classification: titles that can occupy more than one section. Suppose that I have a book about WWII music. Do I put this book in music? Does it go in history? What other sections could it fall into? We have few provisions for this.

Data is no different in this sense. Whether I have a digital copy of a book as would be found in Dewey’s system, a podcast, or anything else, there is always the potential for multiple areas a work can fall into. If you visit the “wrong” section where you might expect an object to be, there is no indication that it could just as suitably live somewhere else.

What are we to do in this case? While I like to break my data down by type of media (video, audio, print, etc.), I find the lower levels get fuzzier. Let us consider a subject which I am revisiting in my own projects: hacker/cyberpunk magazines. Even if we only focus on print magazines, we still have problems. We can see the concept of “hacking” coming from more traditional clever-programming origins (such as in Dr. Dobb’s Journal), or evolving from phreaker culture (such as in TEL), or maybe from general yippie counterculture (such as in YIPL). Additionally, we can see that some of these magazines feature a large number of overlapping collaborators, which makes them feel somewhat similar. We may also observe that magazines produced in places like San Francisco or Austin have a similar feel, but might be much closer to other works that have no physical or personnel ties. Further, what about publications that started in print and then moved to online releases? More and more possible subgroups emerge.

At this point, we might consider work put forth by Wittgenstein based on the idea of “family resemblance.” The basic idea behind this theory is that while many members of a family might have features that make them resemble the family, no single feature is shared by all of the members who bear the resemblance. Expanded, we can say that while we all know what something means, it can’t always be clearly defined and its boundaries cannot always be sharply drawn. Rosch, a psychology professor, took Wittgenstein’s concept further and hypothesized that “the task of categorization systems is to provide maximum information with the least cognitive effort.” She believes that basic-level objects should have “as many properties as possible predictable from knowing any one property.” This means that if something is part of a category, you can easily infer much more about it (if you know that 2600 is a hacking magazine, you’ll know there are likely articles in it about computers). However, superordinate categories (like furniture or vehicle) don’t share many attributes with each other. Rosch concluded that most categories do not have clear-cut boundaries and are difficult to classify. This illustrates the idea that “messiness begins within.” It contrasts with Aristotelian “orderliness”: messiness shows that we can’t put things in their place, because those places are just where things “sort of” belong. Everything belongs in more than one place, even if only a little bit. We see that order can be restrictive.

This raises the importance of metadata: data about data. While my media might be organized in a classification that doesn’t allow for “double dipping” (going against Rosch’s concepts), we can utilize the different properties that pertain to each individual object. Consider the many popular torrent sites which utilize crowd-sourced tagging systems. Members can add tags to individual pieces of media (which can then be voted on as a way to weed out improper tags), allowing the media to show up in searches for each tag. We see a similar phenomenon on websites such as YouTube, which allows tagging of videos for content (though not in a crowd-sourced sense), or the Internet Archive, which supports general subject tags as well as more specific metadata fields.
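
As a rough illustration of how a vote-weeded tag system could work for a personal library, here is a small sketch; the item names and the vote threshold are made up for the example.

```python
from collections import defaultdict

# A tiny sketch of crowd-sourced tagging, with votes weeding out bad tags.
# The items, tags, and vote threshold are made up for illustration.
VOTE_THRESHOLD = 1  # net votes a tag needs before it is trusted

class Library:
    def __init__(self):
        # item -> tag -> net votes (upvotes minus downvotes)
        self.tags = defaultdict(lambda: defaultdict(int))

    def vote(self, item, tag, up=True):
        self.tags[item][tag] += 1 if up else -1

    def search(self, tag):
        """Return items whose tag has enough net votes to count."""
        return [item for item, tags in self.tags.items()
                if tags.get(tag, 0) >= VOTE_THRESHOLD]

lib = Library()
lib.vote("2600 Vol. 1 Issue 1", "hacking")
lib.vote("2600 Vol. 1 Issue 1", "magazine")
lib.vote("YIPL Issue 1", "phreaking")
lib.vote("YIPL Issue 1", "hacking")
lib.vote("YIPL Issue 1", "hacking", up=False)  # disputed tag drops below threshold

print(lib.search("hacking"))  # ['2600 Vol. 1 Issue 1']
```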

Using this metadata method and my previous example, it’s easy to find magazines by location, author, subject, contents, age, and a long list of other attributes. We can apply this to objects that aren’t even the same format; there are examples of video, audio, and print that pertain to the same subjects, authors, etc. This isn’t an impossible implementation. Looking again at the Internet Archive, we see thousands upon thousands of metadata-rich items which are easily searchable and identifiable. However, the Internet Archive also suffers from a lackluster interface. It might be easy to find issues of Byte magazine, but it is a lot more difficult to figure out which issues are missing, or to see an organizational flow more akin to a wiki system (though both systems lend themselves well to items living in more than one place). A hybridized system like this would be an option worth exploring, but I haven’t seen an ideal execution of it yet.

While this concept of a metadata-based organizational system isn’t a fool-proof solution, it can certainly be seen as a step in the right direction. We must also consider the credibility of those who contribute metadata, especially on a large-scale public system. Consider the chaos and politics of how Wikipedia governs editing and you’ll start to get an idea. While I’d like to implement a tagging system for my own personal media library (with my own tagging at first and the possibility of expansion later), I am limited by my current conglomeration of hard drives scattered to different parts of the house, usually powered off. My next storage solution will take these ideas into planning and execution, making my data much easier to traverse. I will, however, have limitations, as I won’t have many people perpetually reviewing and tagging my data with relevant information.

That said, the idea of being able to make my data more accessible is an exciting one, and increases portability of the data as a whole if I ever need to pass it on to others. As my tastes evolve and grow, so will the collection of data I hold.

With any hope, my organized chaos will ultimately become a little more organized and a little less chaotic.

With any luck, you’ll be able to browse it one day.


Archiving Radio


A few months ago, I got involved with my university’s radio station. It happened unexpectedly. I was out with some friends in the city and two of us made our way back to the school campus. My friend, a member of the station, had to run inside to check something out and ended up calling me in because there was some older gear that he wanted me to take a look at. I was led past walls of posters and sticker-covered doors to the engineering closet. The small space was half the size of an average bedroom, but was packed to the brim with decades of electronics. Needless to say, I was instantly excited to be there and started digging through components and old part boxes. A few weeks later, after emailing back and forth with a few people, I became something of an adjunct member with a focus in engineering. This meant anything from fixing the doorbell to troubleshooting server issues, the modified light fixtures, the broken Ms. Pac-Man arcade machine, or a loose tone-arm on a turntable. There are tons of opportunities for something to do, and I have enjoyed all of them so far.

Let’s take a step back. This radio station isn’t a new fixture by any means. I feel that when people think of college radio these days, they imagine a mostly empty room with a sound board and a computer. Young DJs come in, hook up their iPods, and go to work.

This station is a different animal. Being over 50 years old means a lot has come and gone in the way of popular culture as well as technology. When I first came in and saw the record library contained (at a rough estimate) over 40,000 vinyl records, I knew I was in the right place. I began to explore. I helped clean out the engineering room, looked through the production studio, and learned the basics of how the station operated. After a few weeks, I learned that the station aimed to put out a compilation on cassette tape for the holiday season. One of the first tasks would be to get some 50 station identifications off of a minidisc to use between songs. Up to the task, I brought in my portable player and, with the help of a male/male 3.5mm stereo cable and another member’s laptop, got all the identifications recorded. While the station borrowed a cassette duplicator for the compilation, it would still take a long time to produce all the copies, so I brought in a few decks of my own and tested some of the older decks situated around the station. It was my first time doing any sort of mass duplication, but I quickly fell into a groove of copying, sound checking, head and roller cleaning, and packaging. It felt good contributing to the project, knowing I had something of a skill with, and a large supply of, old hardware.

A little later, I took notice of several dust-coated reels in the station’s master control room containing old syndicated current-event shows from the ’80s and ’90s. I took these home to see if I could transfer them over to digital. I ran into some problems early on with getting my hardware to simply work. I have, at the time of writing, six reel-to-reel decks, all of which have some little quirk or issue except one off-brand model from Germany. I plugged it in, wired it to my computer via an RCA to 3.5mm stereo cable, and hit record in Audacity. The end result was a nice-quality recording.

Stacks of incoming reels.

I decided to go a little further and use this to start something of an archive for the radio station. I saved the files as signed 16-bit PCM WAV, and also encoded a 192kbps MP3 for ease of use, and then scanned the reel (or the box it was in) for information on the recording, paying attention to any additional paper inserts. I scanned these as 600dpi TIFF files, which I then compressed down to JPGs (again, for ease of use). Any interesting info from the label or technical abnormalities went into the file names, along with as much relevant information as I could find. I also made sure to put this information in the correct places in the ID3 tags. Lastly, I threw these all into a directory on a server I rent so anyone with the address can access them. I also started asking for donations of recordings, of which I received a few, and put them up as well.
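
For anyone curious what the encoding step can look like, here is a minimal sketch of doing it with ffmpeg from Python. It assumes ffmpeg is installed, and the file names and tag values are placeholders rather than my actual naming scheme.

```python
import subprocess

# Minimal sketch: make a tagged 192kbps MP3 access copy from the WAV master.
# Assumes ffmpeg is on the PATH; file names and tag values are placeholders.
master = "reel_014_master.wav"   # 16-bit PCM capture straight out of Audacity
access = "reel_014_access.mp3"

subprocess.run([
    "ffmpeg", "-y",
    "-i", master,
    "-codec:a", "libmp3lame", "-b:a", "192k",        # ease-of-use copy
    "-metadata", "title=Syndicated show, reel 14",    # written into the ID3 tags
    "-metadata", "artist=Station archive",
    "-metadata", "comment=7.5ips, half track, mono; splice around 12:30",
    access,
], check=True)
```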

What's up next?

What’s up next?

After I transferred all the reels I could find (about 10), I went on the hunt for more. Now, until this point, I had been dealing with broadcast-quality 7-inch reels that ran at 7.5ips (inches per second) with a 1/4-inch tape width. A lot of higher-quality recordings are done on 10.5-inch reels that run at 15ips, though sometimes 7-inch reels are used for 15ips recordings. Reel-to-reel tape can also be recorded at other speeds (such as 30ips or 3.75ips), but I haven’t come across any of these besides recordings I have made myself. Now, while my decks can fit 7-inch reels okay, they can’t handle 10.5-inch reels without special adapters (called NAB hubs) to mount them on the spindles, which I currently don’t have. Additionally, there are other tape widths, such as 1/2-inch, which I don’t have any equipment to play. The last problem is that I don’t have any machines that can run at 15ips.

In progress.

Doing more exploratory work, I got my hands on several more 7-inch reels and also saw some 10.5-inch reels housing tape of various widths. Some of the 7-inch reels I found run at 15ips, and while I don’t have a machine that does this natively, I’ve found great success in recording at 7.5ips and then speeding the track up by 100% so the resulting audio plays twice as fast. As for the larger reels, I may be able to find some newly-produced NAB hubs for cheap, but users complain about them. While original hubs would be better to use, they come with a steep price tag. There is more to consider here than you might think at first. Additionally, there is a reel-to-reel unit at the station that, though unused for years, is reported to work and to handle larger reels and higher speeds. However, it is also missing a hub, and the one it has doesn’t seem to come close to fitting a 10.5-inch reel properly. At the moment, there doesn’t look to be anything I can use to play 1/2-inch tape, but I’m always on the hunt for more hardware.
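
The speed correction itself is a one-step job. Here’s a sketch of how it could be done outside of Audacity with SoX, assuming SoX is installed; the file names are placeholders.

```python
import subprocess

# Sketch: a 15ips reel captured at 7.5ips plays back at half speed, so double it.
# SoX's "speed" effect raises pitch and tempo together, just like a faster tape.
# Assumes the SoX command-line tool is installed; file names are placeholders.
captured = "reel_015_captured_at_7.5ips.wav"
corrected = "reel_015_corrected_15ips.wav"

subprocess.run(["sox", captured, corrected, "speed", "2.0"], check=True)
```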

There are literally hundreds of reels at the station that haven’t been touched in years and need to be gone through. It’s a long process, but it yields rewarding results. I’ve found strange ephemera: people messing with the recorder, old advertisements, and forgotten talk shows. I’ve also found rare recordings featuring interviews with bands as well as performances. This is stuff that likely hasn’t seen any life beyond these reels tucked away in storage. So back to transferring I go, never knowing what I will find along the way.

Digitizing.

From this transferring process I learned a lot. Old tape can be gummy and gunk up the deck’s heads (along with other components in the path). While it is recommended to “bake” (like you would a cake in an oven) tape that may be gummy, it can be difficult to determine when this is needed until you see the tape jamming in the machine. Baking a tape also requires that it is on a metal reel, while most I have encountered are on plastic. Additionally, not all tape has been stored properly. While I’ve been lucky not to find anything too brittle, I’ve seen some tape separating in chunks from its backing or chewed up to the point that it doesn’t even look like tape anymore. More interesting are some of the haphazard splices, which may riddle a tape in more than one inopportune spot or be made with non-standard types of tape. I’ve also noticed imperfections in the recordings themselves, whether the levels are far too low, there are signs of a ground loop, or the tape speed changes midway through the recording. For some reels there is also a complete lack of documentation. I have no idea what I’m listening to.

I try to remedy these problems as best I can. I clean my deck regularly: heads, rollers, and feed guides. I also do my best to document what I’ve recorded. I listen to see if I can determine what the audio is, determine the proper tape speed, figure out if the recording is half track (single direction, “Side A” only) or quarter track (both directions, “Side A + B”), and determine if the recording is in mono or stereo. Each tape that goes through me is labelled with this information and with any defects in the recording that I couldn’t mitigate.

After dealing with a bad splice that came undone, I’ve also gone ahead and purchased a tape splicer/trimmer to hopefully help out if this is to happen again. As for additional hardware, I’m always on the lookout for better equipment with more features or capabilities. I don’t know what I’ll ultimately get my hands on, but I know that anything I happen to obtain will lend a hand in this archiving adventure and help preserve some long-forgotten recordings.

After doing this enough times, I’ve started to nail down a workflow. I put all the tapes in a pile for intake, and choose one to transfer. I then feed it into the machine, hit record in Audacity, and hit play on the deck. After recording, I trim any lead-in silence, speed correct, and save my audio files. At this point, I also play the tape in the other direction to wind it back to its original reel and see if there are any other tracks on it. From here, I label my files, and go on to make scans of the reels or boxes before then loading these images into Photoshop for cropping and JPG exporting.

All done.

It is a lot of work, but I can easily crank out a few reels a day by starting one and going about my normal activities, coming back periodically to check progress. I have many more reels to sift through, but I hope one day to get everything transferred over – or at least as much as I can. Along the way, I’ve come across other physical media to archive. There are zines, cassette tapes, and even 4-track carts that are also sitting away in a corner, being saved for a rainy day.

I’ll keep archiving and uncovering these long forgotten recordings. All I can hope for is that some time, somewhere, someone finds these recordings just as interesting as I do.

Even if nobody does, I sure have learned a lot. With any luck, I’ll refine my skills and build something truly awesome in the process.


Just Meshing Around


This article was originally written for and published at Philly Mesh on January 28th, 2014. It has been posted here for safe keeping.

The first time I remember hearing about mesh networks was sometime around 2005. Through rigorous searches, I had finally tracked down a complete run of Seattle Wireless TV, a proto-podcast that ran from July of 2003 until June of 2004. This hunt was undertaken out of my own personal interest; I was and am something of an online-video-series junkie, and I have since posted all the episodes for download on Archive.org, where they will be preserved for anyone to watch for years to come. The topics of these episodes varied from interviews with operators, to wardriving tips, and even antenna creation. Pretty popular topics back then, but now the show serves as a fantastic time capsule from a technologically simpler time. Even ten years ago, “getting into” wireless networking seemed radically different. Everyone tried their hand at wardriving, embraced 802.11g, and wired cantennas to their Orinoco cards. Here is a prime example of the times: some Seattleites setting up their own mesh network in 2002. Essentially, Wi-Fi was king and you could have it in your own home. I didn’t end up jumping into the mix until years later. I got my first laptop in 2006 and even then I usually embraced a wired connection. Watching these video shows was my own little outlet into what the cool kids were doing. It wasn’t until a little later that I decided it was time to play.

In 2007, I received a La Fonera router from Fon courtesy of a free giveaway (I actually managed to snag one on the very last day they offered the promotion). I thought it might be cool to join their Wi-Fi collective, but I was much more interested in what else I could do with the device. The day it came in the mail, I promptly researched what others were doing with it and joined in on the popular act of flashing dd-wrt firmware onto the little device to get some expanded functionality. This process was harder than I expected, and my lack of knowledge on the subject at the time showed. After many frustrating hours of flipping back and forth between telnet, tftp, and IRC chatter, I had a fully functioning dd-wrt router of my very own. While this was a feat all in itself, it went on to inspire me to see what I could do with other routers. I soon grew a little collection of second-hand Linksys WRT54G routers to tinker with and take up space on my work bench. I tried out different firmwares like OpenWrt and Tomato and always tried to keep something new running on a separate network for me to play with so I didn’t accidentally bring down the whole house’s internet access with a bad flash or misconfiguration.

Years later, I ended up working with wireless technology in a professional capacity. However, I was no longer handling everyone’s favorite suite of 802.11 protocols but the new-fangled 802.15.4 for low-rate wireless personal area networks. I focused on the ZigBee specification and its derivatives, which were and are a popular choice for technologies like home automation systems, wireless switches, electrical meters, etc. I spent months toying with the technology, working to understand the encryption, capture and dissect the traffic, and create and transmit my own custom packets. While the technology itself was enough to hold my interest, I felt a draw toward the technology’s use of wireless mesh networking to create expansive networks.

This wasn’t my first foray into the world of mesh networking per se. Prior to my work with ZigBee, I had focused on meshing briefly to combat network interruption when creating the topology for a hobby-run IRC network I was administering. This was, however, my first time applying mesh ideas wirelessly. I quickly learned the ins and outs of the ZigBee specification and the overarching 802.15.4 standard, but I couldn’t help thinking about how these technologies applied to Wi-Fi and how much fun an 802.11 mesh network would be.

Soon, I discovered the existence of Philly Mesh, a Philadelphia-based mesh network in its infancy that connected with Hyperboria: a global decentralized network of nodes running cjdns. I made a few posts to its subreddit, added my potential node to the map, and ordered some TP-Link routers to play with. While the group seemed to be gathering support, it ultimately (and much to my dismay) stagnated. Expansion stopped and communication dwindled. People disappeared and services started to fall apart. Over the next year I tried to work through getting my own node up but hit several setbacks. I bricked a router, ran into configuration problems, suffered from outdated or missing documentation, and then bricked another router. Eventually, after a seemingly endless process of torment and discovery, I connected to the network using a Raspberry Pi. My first cjdns node was up.

After this, I made a push to revive the Philly Mesh project. I constructed a new website, revived some of the services, and started my push for finding community involvement. Though it stands to be a slow process, things are coming together and people are coming forward. Whether or not we will have a thriving mesh network in the future is unknown, but the journey in this case interests me just as much as the destination.

As of now, I’m embracing wireless mesh as a hobby. I still have a pile of routers to play with and test firmware on, and am getting new hardware every so often. As for the bricked TP-Links, I’ve picked up a USB/TTL adapter in an attempt to correct my wrongdoings and get cjdns set up properly. I’m also constantly playing with the settings on my Raspberry Pi installation, as I have to firewall things off, ensure reliability in case of an application crash, and generally make sure things are running smoothly. Additionally, I’ve been toying around with different technologies to set up an access point through the Raspberry Pi, such as a USB/Ethernet adapter to bridge a connection between an old router and the Pi, or a USB dongle to create an access point in a more direct fashion. Aside from the Raspberry Pi and assorted routers, I’m also interested in getting cjdns installed and configured on plug computers like the Pogoplug and single-board computers like the BeagleBone Black.

Where will all of this take us? Hopefully this is a stepping stone on the way to building a thriving local mesh, but the future is unknown. I’d love to get some nodes set up wirelessly within the city, but I’m only one person out in the suburbs tinkering away. While I’m sitting here learning about setting up devices, I only hope to share what I find with others who might benefit from having someone else carve out an initial path. I, by myself, can work to build a local mesh but it wouldn’t be nearly as robust or expansive as if I worked within a team sharing ideas and experience.

If you’re reading this, you have the interest. You may not have the know-how, the money for high-tech equipment, or a location nearby other potential operators, but you have the desire. If there’s anything that I’ve learned throughout my ongoing mesh adventure, it’s that good things take time and nothing happens overnight.

Tomorrow, we can work to build a strong mesh for our city. As for today, why don’t we get started?


Helping Aaron – A Vintage Computer Adventure


This article was originally written for and published at Philly 2600 on December 23rd, 2013. It has been posted here for safe keeping.

It’s rare that I get overwhelmed. I’m not talking about stress or anything like that. It’s rare that my senses get overwhelmed, specifically my sense of sight. This past Saturday, that sense became overloaded.

I’ve known Aaron for a little while now. We met online somehow in 2012, and while I don’t remember the exact details, I think he started following me on Twitter and things went on from there after I followed him back and we started replying to each other’s tweets. We quickly figured out that we lived pretty close to one another, which I found humorous considering we were both into archiving and preservation. Who would think that I’d be geographically this close to another person who idles in the #archiveteam IRC channel, online headquarters for the team dedicated to rescuing any and everything in the way of data? Aaron and I hit it off pretty well, and we eventually ended up meeting (somewhat unexpectedly) at Pumpcon 2013. Later, I ran into him again at the BSides Delaware conference, and shortly thereafter he started coming to the Philly 2600 meetings which I’ve been frequenting for some time.

About two weeks ago, Aaron approached me via an online message and asked if I would like to go through some old computers at a local nonprofit he is on the Board of Directors for, NTR. NTR is in itself a fantastic organization which provides both refurbished computers (done in-house from donations) and hands-on computer training to low-income Philadelphia residents. If you are employed by or know a company in the area that is retiring their current fleet of workstations, consider donating the old machines to NTR. And, if they ultimately cannot use the machines, they will ensure that they are recycled in an environmentally safe fashion.

Aaron thought that I would be the right guy to help out. Being someone who preserves old technology, rescues it from an unknown fate, and is a general enthusiast about it, I couldn’t resist the urge to come out and see what I could uncover. The details I got about what I was to do left a lot to my imagination. I got a location, we settled on a time, and I was told to wear clothes I wouldn’t mind getting dirty and to bring a set of work gloves. Hardhats would be provided.

The dirt and grime never bother me. Just what I would be working with, I didn’t know. But I was excited nonetheless, and on Saturday morning I walked over to NTR and met Aaron out front. The building we would go on to enter was the former site of the hackerspace The Hacktory before they moved to a larger location. The building itself is a big old warehouse that is much larger inside than it looks from the street. The parking lot to the side is encased by giant stone walls almost as high as the building itself and easily fits a dozen cars without anybody being blocked in. Aaron tells me that the building has also been declared a historical site, meaning they can’t do a lot of modification to it directly, but they do keep it nicely maintained.

As Aaron lifts one of the giant metal doors encased in the building’s western wall, I get my first look into NTR. He shows me bins of donated computer equipment: smaller stuff like peripherals lovingly stacked in re-purposed milk crates, and desktop computers stacked together up the side of the two-story wall. I get a tour of all the classrooms, a look into the computer thrift store they run out of the same building, and dozens of other rooms and hallways that wind around the giant space, separated by heavy opaque sliding doors. Eventually we make our way into the main computer storage area, where there are pallets upon pallets of donated machines on giant shelves that Aaron points out to me with a flashlight. It’s dark in this part of the building.

We then go up to the second floor to see Stan, the organization’s Executive Director Emeritus, who was initially the Executive Director starting in 1980 and took on the Emeritus title more recently. Stan himself is energetic and charismatic, and he tells me about how he set up a community information store on South Street in the 1970s as we head back down to where we came in, toward the relatively new-looking wooden steps that lead to the area Aaron and I will be looking through for the next few hours. Aaron later explains that, much like me, Stan has been collecting and preserving technology and computer history, though he has been doing it for considerably longer. Some of his collection is also mixed in with the stuff we will be digging through.

I put on my gloves and snag a hardhat out of a milk crate on a shelf by the stairs before Aaron and I head up. The stairs are steep and don’t seem to be spaced consistently. You feel like you could fall down them easily, but the railing is firm enough to keep you steady. As we make it to the top, I peer into the sea of computers which I will be acquainting myself with, lit by a pair of metal lamps clipped onto the wide beams of the underside of the roof – an afterthought in this 40×20 foot space.

A shot behind me after I made my way off the stairs

I quickly realize I can’t stand up all the way and have to hunch over, but that isn’t nearly as assaulting as the dust that comes out from seemingly everywhere and permeates the air, thick like smoke. Aaron walks slowly forward with his flashlight in hand and I follow close behind as he points out different areas of the space. We see newer stuff like a few Dell servers and stacks of Intel-based PCs at first, but as we go further in we take more steps back in time. Aaron shines his light on a pile of all-in-one Macs before going further to the more interesting artifacts. On the left are some more modern machines, followed by boxes upon boxes of various documents, computers, and peripherals. I see Kaypros with Commodores with IBM clones and crazy displays for systems I can’t even fathom. There are tons of Macs, a few Mac clones, Apple ][s, and some old portable computers the size of suitcases. There are bags of electronics: half-finished projects from decades before, muddled in with 8-bit personal computers, a pile of Sun workstations, and boxes of 5.25" floppy disks. On the right side are more Macs: G5s, G3s, a dozen classic Macs, some older desktops, and a seemingly endless collection of obscure monitors and terminals to other systems. This is where we start.

A view of the left side

A claustrophobic shot of the beginnings of the right side

We navigate down the narrow path separating the space straight through the middle and get acquainted with the Mac area. We line up rows of milk crates and start digging, sorting along the way. Put the classic Macs here, put modems in this bin, mice in that bin, terminals over here, MIPS-based hardware over there. We sort and sort and sort, moving the heavy machines slowly as we work another path into the mess. The day was a cold one, but we quickly discarded our jackets as we carried hardware along the narrow aisle we carved out; we were warm enough simply moving back and forth, ducking beneath low-hanging beams and swiveling around waist-high stacks that created our own personal obstacle course. As we went, we stopped to appreciate anything interesting we happened to find. Almost immediately we came across a monitor for a NeXTcube (though we didn’t find the cube itself), and we dug up other odd monitors, software packages, and interesting little add-on boards that most people have probably long forgotten. We pooled our expertise and our energy and sorted in a long sprint.

After we cleared a new path

Cleared path continued

Aaron told me that a lot of this stuff will ultimately be cleared out. The newer stuff didn’t necessarily belong there and could be assimilated downstairs or recycled, while the less valuable systems would be readily sold at their retail store. Some of the rarer pieces would be donated to museums or sold to enthusiasts and collectors who appreciate them, to ensure their longevity. I hope when the time comes I might fit into this last group. The amount of history in this room is simply breathtaking.

View from the far corner

After a brief break, we pushed back against the section we were using for trash so we had more room to sort. Ultimately, we successfully cleared space for more terminals and bins upon bins of manuals – hard copies are always under-appreciated. We then moved around, more slowly, to some of the more obscure hardware – testing a few things as we went. More time in this stretch was spent just digging as opposed to organizing. We wanted to see what was in some of the giant boxes at the bottoms of the stacks. We didn’t want to leave any stone unturned. Who knows what would be tucked away? We sorted through some IBM clones and found an Amiga 2500, a Wang terminal, a vector monitor, a Silicon Graphics Indy, a whole mess of Kaypros, and some more interesting items like a computer for those with disabilities and a strange keyboard or computer that neither of us could quite figure out. Down below us, people were trickling in for a computer class in one of the many rooms. “Who here has internet access at home?” I heard an instructor ask before I accidentally knocked over a PowerPC Mac. Hopefully they didn’t mind the noise.

Delta Data IV “Cherry.” Keyboard or 8-bit computer?

SGI Indy

Stack of Altos 580′s on some Kaypros next to a Commodore 128

We finally succumbed to the tech and called it quits for the day. We got a good idea of what was up in the area and talked about the next steps, which are likely to be inventorying and testing (though there can probably be some more organization in the meantime). The space itself serves as a fantastic time capsule, and it is a breath of fresh air to know that some of this stuff is simply sitting there – and in good condition. However, there is much to be done and many more hours to devote to make sure everything is handled properly.

As we rounded out the end of our excavation, we threw down the hardhats and unhanded the once-clean work gloves before walking around the corner for a cup of coffee. As we took our first steps away from the building, I felt a sense of accomplishment. We were archaeologists returning from our first day at an excavation. We uncovered some great finds, having fun along the way.

With any luck, I’ll be asked back. There’s a lot to go through and I can’t help but think that there’s more I can offer. Never before had I been able to lay my hands on some classic pieces of hardware that I had only read about, and it was quite an experience being able to put the pieces together.

Univac / Sperry Rand keyboard

“Age means nothing today,” Stan told me earlier that morning. “In this day and age, things are moving so fast.” I can’t say that I disagree, but I consider myself lucky to have the experience and knowledge under my belt when it comes to vintage computers.

And with any hope, I can keep expanding it.

A shot of the left side from our path in the Mac section

Another shot of the left side

Some newer Intel-based PCs

More of the Mac area

Newer computers tucked away

More Macs, pink note states that this Mac was the second produced

Sun workstations, Macs, Apples, old laptops

RadioShack diskettes. Think the warranty is still good?

5.25″ diskettes

Close-up of the Altos 580′s

A lone Kaypro II

Wang terminal

A Tandy and a terminal

The Amiga 2500 and an Apple monitor

Unknown brand keyboard

Vector display

Timex personal computer

Another Kaypro II and a Kaypro 10


Hacking History – A Brief Look Into Philly’s Hacking Roots


This article was originally written for and published at Philly2600 on November 4th, 2013. It has been posted here for safe keeping.

The tech scene in Philadelphia is booming. We have local startups like Duck Duck Go and TicketLeap, and we have co-working spaces like Indy Hall and Philly Game Forge. We have hackathons like Apps for Philly Transit and Start-up Weekend Health, and we have hackerspaces like Hive 76 and Devnuts. We have user groups like PLUG and PSSUG, and we have conferences like Fosscon and PumpCon. We have events like Philly Tech Week and TEDxPhilly, and we have security meet-ups like PhillySec and, yeah, Philly 2600. The hacker spirit is alive and well in the city of brotherly love, but where did all of this pro-hacker sentiment come from? What came before to help shape our current tech-centric landscape?

It’s surprisingly difficult to approach the topic from the present day. I haven’t been there since the beginning, and the breadcrumbs left over from the era are few and far between. We are left with hints, though usually from more analog sources. The first issue of 2600 that includes meeting times is volume 10, issue 2, from 1993. Philly 2600 is listed here with numerous others (making the meeting at least 20 years old), but how long did the meeting exist before this? We also know that Bernie S., longtime 2600 affiliate, was the founder of the Philadelphia 2600 chapter. Other than that, there is little to find on paper.


First listing of the Philadelphia 2600 meeting in 2600 Volume 10, Issue 2 (1993).

But what else can we dig up? We do have some other little tidbits of information that apply to the history of Philly 2600. The film Freedom Downtime (2001) has some footage taking place at Stairway #7 of 30th Street Station, the original meeting location. There are also mentions of the meeting in the book Hacker Diaries: Confessions of Teenage Hackers (2002), where one story places a student at the 30th Street meeting in the late 1990s. More recent references, such as the current 2600 magazine meeting listings, place the meeting at the southeast corner of the food court – the location used previous to the current one, some 50 feet away.


Mention of Philadelphia 2600 meeting from The Hacker Diaries: Confessions of Teenage Hackers (2002).

But what about the people who attended? It’s hard to keep track of this aspect, and as time goes on people come and go. Some come for one meeting and are never seen again, but some stick around a while. Eventually, there are no remains of the previous group – the meeting goes through generations. We can get a little information from simple web searches. Old Usenet listings can be a great source of material; here’s a Philadelphia 2600 meeting announcement from 1995 by The Professor. Even more interesting, here’s a Phrack article by Emmanuel Goldstein (publisher of 2600) talking about how he and three others brought Mark Abene (Phiber Optik) to the Philly 2600 meeting before having to drop him off at federal prison in Schuylkill.

Using the Internet Archive’s Wayback Machine, we can get an interesting perspective on the members from ten years ago by visiting an archived version of the old website (also at this domain). This is actually something we can explore. It appears that as of mid-2002 the regulars were JQS, Kepi Blanc, Damiend LaTao, Dj`Freak, The Good Revrend Nookie Freak, and GodEmperor Daeymion. Before this, regulars included Satanklawz (the site admin at the time) and Starkweather, before the site was passed on to Kepi Blanc. The archived website offers an incredible amount of information, such as a WiFi map of the city, several papers, and even (incredibly tiny thumbnails of) meeting photos. It’s clunky and full of imperfections, but this website offers a time-capsule-like look into Philly 2600’s past.


The old Philly 2600 logo

But what about other hacker origins in the area?

We know of Pumpcon, one of the USA’s first hacker conferences, started in 1993 (almost as old as DEFCON). Pumpcon has been running for over 20 years with an invite-only status. It is often overshadowed and left in the dust by the larger conferences in the country, despite its stature as one of the first of its kind. Pumpcon has not been exclusively held in Philadelphia since its inception; the conference has previously been held in Greenburgh, New York and in Pittsburgh. Pumpcon has no central repository of information (why would it?), but a lot of history can be found scouring the web through old ezine articles, like this one about Pumpcon being busted, and notices like this one announcing Pumpcon VI. I’m currently compiling as many of these resources as I can, but there is an immense amount of data to sift through. Below I have some hard copy from my collection: a review of Pumpcon II from the publication Gray Areas and the incredibly recent Pumpcon 2012 announcement.


Pumpcon II Review (Page 1/2) from Gray Areas Vol. 3 No. 1 (1994)


Pumpcon 2012 Announcement

Other groups are harder to find. Numerous groups started up, burned brightly, and were then extinguished. Who knows where those people are now or the extent of what they accomplished. There are, of course, a few leftovers. One of my own pet projects is the development of an archive of older hacker magazines. One previously popular publication in particular, Blacklisted! 411, sheds a little light on some long-lost Philly hackers. A few issues make reference to Blacklisted! meetings taking place at Suburban Station in Philadelphia and another at the Granite Run Mall, run by thegreek[at]hygnet[dot]com (long defunct), in neighboring Delaware County (and surprisingly about five minutes from my house). The earliest occurrence of these meetings I can find is in volume 3, issue 3 from August 1996, but either may have started earlier.


Philadelphia/Media Blacklisted meeting listings from Blacklisted! 411 Vol. 3, Issue 3 (1996)

There are a few other loose ends as well. The recent book Exploding The Phone (2013) by Phil Lapsley catalogs the beginnings of the phreak culture, and makes reference to several fone phreaks in PA, some more notable than others, including Philadelphia native David Condon and some unidentified friends of John Draper (Cap’n Crunch) around the time he was busted by Pennsylvania Bell. We additionally know that some of the main scenes in the previously mentioned Freedom Downtime were filmed in Philadelphia. We also know that there were hundreds of hacker bulletin board systems in the area from the 1980s through the 1990s.


Bell Pennsylvania joke advert, from Exploding the Phone (2013)

Let’s change gears now. Our main problem in moving forward is what we do not know. Stories and events have been lost as time goes on, and the hope of finding them becomes dimmer with each passing year.

If you had some involvement with the Philadelphia hacking scene in the years past, tell someone. Talk to me. Let me interview you. Get your story out there. Share your experiences – I’m all ears.

Those of you out there hosting meetings and starting projects, keep a record of what you’re doing. This is my one request.

We’ve already lost a lot of history. Let’s try saving some.


Ghost in the Machine: Your Digital Afterlife


This article was originally written for and published at The New Tech on July 9th, 2013. It has been posted here for safe keeping.

On January 11th, Aaron Swartz passed away. If you’re not familiar with who he was and what he did, take a minute right now and look him up. A lot of focus was put on the circumstances of his death along with what he accomplished in life, and this seems to overshadow something that stood out to me: how to handle his legacy. Specifically, how he wanted his legacy to be handled.

Swartz created a simple web page in 2002 about how to handle things if he were to be “hit by a truck.” Who would take over his website? Where would his source code end up? He created an electronic will. The idea of a will is nothing new. Most people create a document outlining how their assets will be divided up when the time comes; it just makes things operate more smoothly. But what about in the electronic world? Surely we mark who will get our house, but what about who gets our website? It sounds amusing to even entertain the idea. We allocate our physical property, things that can be defined in dollars and cents, but hardly consider our intellectual property.

If you haven’t had a Facebook friend pass away yet, you’re likely in the minority. It’s sad, of course it’s sad, but it needs to be talked about. If you have ever had a Facebook friend pass, you may observe a cycle where their profile is used first as a memorial and then eventually deactivated altogether. These are my experiences. I understand and empathize with the feelings of the family in these circumstances, but to me this seems a little like burning all of a loved one’s possessions with the ease of a single mouse click.

These are the two ends of the spectrum.

It’s important to let go and move on, but it’s also important to remember and honor. In a basic sense, I apply the same fundamental ideas towards the death of a person as I do towards that of a technology. Most are quick to push the old out of thought, but the few make a move to preserve. I preserve. It’s just my nature.

Swartz’s situation resonated with my own beliefs. If I were to be hit by a truck tomorrow, what would happen with my stuff? My digital stuff. I run a fair number of websites, I rent a VPS and a dedicated server, and I have bills, Amazon S3, service subscriptions. If I go, they eventually do too.

I’d like to tell you that I have a contingency plan, but I don’t. I haven’t reflected fully on the logistics of it. Could I think of people to take over my digital stuff after I’m gone? Of course, but would they want to? When someone dies and it becomes your responsibility to handle their belongings, it’s not typically a drawn out process. You keep some stuff, you toss some stuff, but you don’t normally end up with something that needs to be maintained and worked through. Websites take a fair amount of time and money. Storage, while getting cheaper, is still expensive for the hobbyist. There’s unavoidable maintenance.

That said, I would hope that my online persona remains long after I do. Forum accounts, Facebook information, Twitter posts, etc. should survive as long as possible. I want everything to be available to anyone who needs it. Hand over my source code and pick apart my log files.

Open source my life.

If I’m not going to work on it anymore, I’d like to give that ability to anybody who is interested.

To have these things removed, stripped from the world, is just nonsensical to me. Someone’s interesting and original work disappearing because nobody knows how to handle it, or doesn’t want to? Nothing upsets me more than something like that. It’s akin to tearing pages out of every history book. In our modern world, people are quick to think that things last forever. Digital artifacts can go missing overnight.

We shouldn’t be worried so much anymore about having something embarrassing stored online forever, we should be more worried about something important disappearing tomorrow.


Mining Bitcoin for Fun and (Basically No) Profit, Part 4: Aftermath


If you have not done so already, please read parts 1, 2 & 3 of this series.

As of writing this, I’ve spent one week running my setup with one USB Block Eruptor and one week running my setup with three. In my first week, I received about two payouts of 0.01 bitcoin each while in the second week I received that payout almost daily.

The current average Bitcoin rate in USD (as of this writing) is $144.99322. This means my payout, one hundredth of that value, is $1.4499322. Now, this doesn’t sound like too bad of a payout. However, there is a lot to consider when figuring out whether or not I will actually make any money off of this in the long run.

First, we have to consider that the price of a bitcoin is constantly fluctuating. When I started this project, the exchange rate was ~$119.00 USD. This amount could change at any time as the value inflates or deflates. Next, we have to consider the change in mining complexity – as more people start mining, the harder it gets. This is not only a matter of competition; the difficulty of generating a block is recalculated every 2016 blocks (roughly two weeks), and it has been climbing steadily. Thus, as time goes on, you’ll make less money.
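
To see how difficulty eats into earnings, here is a back-of-the-envelope sketch using the standard expected-value formula for mining. The hash rate, difficulty, and growth rate below are rough example numbers of mine, not exact figures from my setup or the network.

```python
# Back-of-the-envelope expected mining earnings. Example numbers only:
# real difficulty and prices move constantly.
hashrate = 1.0e9        # three USB Block Eruptors, roughly 1 GH/s total (assumed)
difficulty = 65e6       # network difficulty (illustrative)
block_reward = 25       # BTC per block at the time
price_usd = 144.99322   # the exchange rate quoted above

def btc_per_day(hashrate, difficulty, reward=block_reward):
    # Expected blocks found per day at this hash rate, times the reward per block.
    return hashrate * 86400 / (difficulty * 2**32) * reward

now = btc_per_day(hashrate, difficulty)
print(f"now:      {now:.5f} BTC/day (${now * price_usd:.2f})")

# Difficulty is recalculated every 2016 blocks (~2 weeks). If it keeps climbing
# ~25% per adjustment (illustrative), the same hardware earns far less soon:
later = btc_per_day(hashrate, difficulty * 1.25**6)  # roughly 3 months out
print(f"3 months: {later:.5f} BTC/day (${later * price_usd:.2f})")
```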

Aside from these variable rates, we have some constants to think about. The initial investment wasn’t enough to break the bank, but it wasn’t anything to ignore.

Recall our initial build list, this time with some prices:

  • 1 x Raspberry Pi ($35 + $4.98 shipping = $39.98)
  • 1 x ~4GB SD Card ($5.01 + $0 shipping = $5.01)
  • 1 x Micro USB Cable ($2.60 + $0 shipping = $2.60)
  • 1 x Network Cable ($5.49 + $0 shipping = $5.49)
  • 1 x Powered USB HUB ($19.95 + $0 shipping = $19.95)
  • n x USB Block Eruptor (($42.99 + $3.99 shipping) * 3 = $140.94)

Total = $213.97 USD

Pretty big when you put it all together, but this is the worst-case scenario – when you don’t start with anything. I already had most of this around the house. Besides the USB Block Eruptors, I did need to purchase a USB hub, but I wouldn’t consider this part of my investment as I needed one anyway (the project more or less gave me an excuse to get it). I’m more concerned with making back my money from the Block Eruptors, which total $140.94 USD.

Next, we should consider power requirements. Again, this doesn’t matter much to me since I’m just focused on earning back the money for the USB Block Eruptors, but let’s hook the whole rig up to my Kill A Watt electricity usage monitor and see what it says.

Kill A Watt reading for kWh over 44 hours.

Kill A Watt reading for kWh over 44 hours.

The Kill A Watt states that the consumption was 0.55 kWh, measured over a period of 44 hours. Now let’s say our electricity rate is 15 cents per kWh. We can plug all of those numbers into this handy formula: 0.55 kWh / 44 hours * 732 hours [hours in a month] * $0.15 [price per kWh] = $1.37 per month. So overall the power cost isn’t too bad, especially compared to old GPU rigs.
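
If you’d rather not do that arithmetic by hand, here’s the same formula as a tiny Java sketch. The $0.15/kWh rate is just an example; substitute your own utility rate.

// The monthly power-cost formula from above, as a quick Java calculation.
public class PowerCost {
    public static void main(String[] args) {
        double measuredKwh = 0.55;    // Kill A Watt reading
        double measuredHours = 44.0;  // period the reading covers
        double hoursPerMonth = 732.0; // roughly 30.5 days
        double pricePerKwh = 0.15;    // example rate in USD; use your own

        double monthlyCost = measuredKwh / measuredHours * hoursPerMonth * pricePerKwh;
        System.out.printf("Monthly power cost: $%.2f%n", monthlyCost);
    }
}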

Okay, now we know the power consumption, have our initial costs, are mindful of the changing rates, etc. How do we put it all together?

The Genesis Block has created the Mining Dashboard just for this sort of thing. We can plug in all of our information here and see what’s what. They do have some fields for power, but those don’t account for the Raspberry Pi and the hub; plug in what matters to you. The dashboard can’t compute values retroactively, so I’ll have to set my start date in September. However, this doesn’t account for the fact that I’ve already mined $9.96 worth (at the current exchange rate), so I’ll subtract that from my investment of $140.94 to get $130.98. It’s a dirty workaround, but this is an estimate after all. After putting in all the values, hit ‘Calculate.’ Here are my results:

My Mining Dashboard projection.

My Mining Dashboard projection.

From the projection, I will never break even and will forever be $44 in debt because my setup will be completely obsolete in around 10 months’ time.

Now, as I said, this is a projection, but it’s likely closer to accurate than inaccurate. I likely won’t make my money back unless the value of a bitcoin continues to rise and/or the mining complexity grows at a slower rate (which is unlikely).
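
To see why, here’s a naive back-of-the-envelope payback estimate in Java. It assumes today’s payout and power cost hold forever, which they won’t – difficulty keeps climbing – and that’s exactly the optimism the Mining Dashboard’s projection corrects for. The daily figures are my own rough numbers from above.

// A naive payback estimate that pretends difficulty (and thus my daily payout) stays flat.
public class BreakEven {
    public static void main(String[] args) {
        double investment = 140.94 - 9.96;  // Block Eruptor cost minus what I've already mined
        double dailyPayoutUsd = 1.45;       // ~0.01 BTC per day at ~$145 per bitcoin
        double dailyPowerUsd = 1.37 / 30.5; // monthly power cost spread across the month
        double dailyNet = dailyPayoutUsd - dailyPowerUsd;

        System.out.printf("Days to break even at a frozen difficulty: %.0f%n", investment / dailyNet);
        // Prints roughly 93 days -- far rosier than the dashboard's "never."
    }
}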

I’m not the only one in this boat. As more and more powerful ASIC rigs are being produced, the window for profit gets smaller and smaller. Some new ASICs sold now won’t even be able to turn any profit for owners because the time between ordering and arrival leaves too small a window to mine back the initial investment at the current complexity.

While it is unfortunate to (likely) not turn a profit, this still proved to be a fun and incredibly interesting project. I may not have come out of it with financial wealth, but the ability to look down at my little Raspberry Pi chugging away (actually turning electricity into money, who knew?) was completely worth the time and effort I put into it. I’ll likely end up sitting on the bitcoins I mine now for a little while, just like I did back when my wallet got its first deposit. I’m more infatuated with mining and collecting the currency than I am with spending it, at least for right now.

Hopefully you, one way or another, have learned something from my little journey.

I know I did.


Mining Bitcoin for Fun and (Basically No) Profit, Part 3: Mobile Development


If you have not done so already, please read parts 1 and 2 in this series.

So I have a mining rig that’s successfully rewarding me with bitcoins. Normal people would probably stop at this point. One nice thing about mining in Slush’s Pool is that it has a handy email notification option that tells you when credit is being transferred to your Bitcoin wallet. This is pretty cool, but what if I want more in-depth information? For example, what if I want to know my hash rate, or if my miner is alive (did the system crash?) or how many bitcoins I have total?

The next step for me was to create a mobile application which could provide all this information – whenever or wherever I wanted it. So, I got to work.

The platform I chose to work with was Android – a logical choice for me, as I had prior experience developing Android applications and own an Android phone myself. Programming for Android, as many know, means programming in Java. If you have any prior Java experience, you already have a head start if you ever want to get into Android development.

A fantastic thing about Slush’s Pool is that it offers an API (Application Programming Interface) which allows users to pull down information on their miners using the de facto JSON format. So from this I can get at my mining information, but what else do I want? I decided it would be wise to pull down the average value of a bitcoin in USD at any given moment. This way, I can do some simple calculations to determine a rough estimate of how much I’m generating and getting paid in USD. Lastly, I wanted to get the balance of my Bitcoin wallet, again to be displayed in both bitcoin and USD.

I already had the API information for Slush’s Pool, as it is linked on everyone’s profile and accessed via a common base URL and a unique key for each user. Here is an example of the JSON output for my account:


{
    username: "Famicoman",'
    rating: "none",
    confirmed_nmc_reward: "0.00000000",
    send_threshold: "0.01000000",
    nmc_send_threshold: "1.00000000",
    confirmed_reward: "0.00145923",
    workers: {
        Famicoman.worker1: {
            last_share: 1378319704,
            score: "70687.6038",
            hashrate: 1004,
            shares: 1906,
            alive: true
        }
    },
    wallet: "1DVLNHpcoAso6rvisCnVQbCFN8dRir1GVQ",
    unconfirmed_nmc_reward: "0.00000000",
    unconfirmed_reward: "0.00612688",
    estimated_reward: "0.00046390",
    hashrate: "1006.437"
}

Next, I needed an API for the average value of a bitcoin in USD. I went on the hunt. Finally, I found that Mt. Gox, the largest Bitcoin exchange, has a public API for bitcoin rates (located at this ticker URL). This works perfectly for my needs. Here is some sample JSON output from this API:


{
    result: "success",
    return: {
        avg: {
            value: "143.97798",
            value_int: "14397798",
            display: "$143.98",
            display_short: "$143.98",
            currency: "USD"
        }
    }
}

So far, so good.

Lastly, I wanted wallet information. I discovered that Blockchain shows records of transactions (as they are all recorded in the block chain), so I did some probing and found they also offer an API for attributes of individual wallets (here’s a link using my wallet info, it’s all public anyway). This includes balance, transactions, etc. The unit for bitcoins here is the satoshi, one hundred-millionth of a bitcoin. Some sample JSON output from this service looks like so:


{
    hash160: "88fd52ba00f9aa29003cfc88882a3a3b69bfd377",
    address: "1DVLNHpcoAso6rvisCnVQbCFN8dRir1GVQ",
    n_tx: 7,
    total_received: 8434869,
    total_sent: 0,
    final_balance: 8434869
}
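
Just to make the unit conversion explicit, here’s a quick Java sketch turning that final_balance figure into bitcoins and an approximate dollar value. The exchange rate used is the example figure from the Mt. Gox ticker output above.

// Converting the Blockchain API's satoshi values into BTC and an approximate USD figure.
public class SatoshiConversion {
    static final double SATOSHIS_PER_BTC = 100_000_000.0;

    public static void main(String[] args) {
        long finalBalanceSatoshi = 8_434_869; // final_balance from the JSON above
        double usdPerBtc = 143.98;            // example rate from the Mt. Gox ticker

        double btc = finalBalanceSatoshi / SATOSHIS_PER_BTC;
        System.out.printf("%.8f BTC (~$%.2f)%n", btc, btc * usdPerBtc);
    }
}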

So now I had the APIs I was going to use and needed to put them all together in a neat package. The resulting Android application is a simple one: two screens (home and statistics) with a refresh button that pulls everything down again and recalculates any necessary currency conversions. Android no longer allows you to perform network operations on the main (UI) thread, so I had to use an asynchronous task that spawns a new thread. This thread is where I pull down all the JSON (in text form) and get my hands dirty manipulating the data. I utilize a third-party library called Gson to parse the data I need from the JSON string. Then, it’s just a little bit of math and we have all the necessary data. After all of that is done, the application prints everything on the screen. Pretty basic, and with plenty of room for potential additions.
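
To give a feel for that parsing step, here’s a rough, self-contained Java sketch – not the actual SlushPuppy source; the class names, fields, and URL path are illustrative, and you’d substitute the API link (with your own key) from your own pool profile – that pulls down the Slush’s Pool JSON shown earlier and maps it onto plain objects with Gson:

import com.google.gson.Gson;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.Map;

// Illustrative sketch: fetch the pool's JSON profile and map it onto plain Java objects.
public class PoolStats {

    // Only the fields we care about; Gson ignores everything else in the JSON.
    static class Worker { long last_share; int hashrate; int shares; boolean alive; }
    static class Account {
        String username;
        String confirmed_reward;     // BTC amount delivered as a string, e.g. "0.00145923"
        String wallet;
        Map<String, Worker> workers; // keyed by worker name, e.g. "Famicoman.worker1"
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL: use the profile API link (with your private key) from your own account.
        URL api = new URL("https://mining.bitcoin.cz/accounts/profile/json/YOUR-API-KEY");
        try (InputStreamReader reader = new InputStreamReader(api.openStream())) {
            Account account = new Gson().fromJson(reader, Account.class);
            System.out.println(account.username + " confirmed reward: " + account.confirmed_reward + " BTC");
            for (Map.Entry<String, Worker> entry : account.workers.entrySet()) {
                Worker w = entry.getValue();
                System.out.println(entry.getKey() + ": " + w.hashrate + " MH/s, alive=" + w.alive);
            }
        }
    }
}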

When running the application, provided there’s network connectivity and all the servers are up, you will be rewarded with a screen like this:

The app in action

The application in action

Not too shabby. If you wanted to use it yourself, you would need to hard-code your own key from Slush’s Pool. There doesn’t appear to be an API call by username (by design), so the key needs to be supplied manually somewhere – which, right now, happens to be in the code.

The source for this application, which I call SlushPuppy, is freely available on GitHub. Feel free to fork it, or just download and mess around with it. If anything, it provides a small example of both Android-specific programming as well as API interaction.


Mining Bitcoin for Fun and (Basically No) Profit, Part 2: The Project


If you have not yet, please read the first article in this series: Mining Bitcoin for Fun and (Basically No) Profit, Part 1: Introduction

I have a few Raspberry Pis around my house that I like to play with – four in total. Prior to this idea of a Bitcoin project, I had one running as a media center and another operating as a PBX. Of the remaining two, one was an early model B with 256MB of RAM while the other was the shiny new revision sporting 512MB. I wanted to save the revised model for the possibility of a MAME project, so I decided to put the other, older one to work. It would help me on my quest to mine bitcoins.

Raspberry Pi Model B

Raspberry Pi Model B

But what exactly is mining, you may ask? Bitcoin works on a system of verified transactions achieved through distributed consensus (a mouthful, I know). Every transaction is kept as a record on the Bitcoin block chain. Mining is more or less the process of verifying a “block” of these transactions and appending it to the chain. These blocks have to satisfy strict cryptographic rules before they can be appended; otherwise, blocks could be modified and invalidated. When someone properly generates one of these blocks, the system pays them a certain amount of bitcoins (currently 25). This process repeats roughly every ten minutes.
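
For a rough feel of what those cryptographic rules look like, here’s a toy Java sketch of the idea: keep changing a nonce until the hash of the block data starts with enough zero bytes. Real Bitcoin mining double-SHA-256 hashes an 80-byte block header and compares it against a full 256-bit target, but the brute-force flavor is the same.

import java.security.MessageDigest;

// Toy proof-of-work: vary a nonce until the SHA-256 hash of the block data
// begins with a required number of zero bytes. This illustrates the idea,
// not the actual Bitcoin block format.
public class ToyProofOfWork {
    public static void main(String[] args) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String blockData = "previous-hash|transactions|timestamp|";
        int zeroBytesRequired = 2; // raise this and the search gets exponentially harder

        for (long nonce = 0; ; nonce++) {
            byte[] hash = sha256.digest((blockData + nonce).getBytes("UTF-8"));
            boolean solved = true;
            for (int i = 0; i < zeroBytesRequired; i++) {
                if (hash[i] != 0) { solved = false; break; }
            }
            if (solved) {
                System.out.println("Found nonce " + nonce + " after " + (nonce + 1) + " hashes");
                break;
            }
        }
    }
}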

I knew that at this stage in the mining game I had to go with an ASIC setup, and I knew I wanted to run it off of my Raspberry Pi. Simple enough. The Raspberry Pi is a fantastic platform for this considering its price, power consumption, and horsepower. For mining hardware, I decided to buy the cheapest ASIC miners I could get my hands on. I found the ASICminer USB Block Eruptor Sapphire for the low price of $45 on Amazon. They cost more on eBay, and I couldn’t buy them from any sellers with Bitcoin because I didn’t have any (and didn’t want to bother with exchanges), so this seemed like the way to go. The Block Eruptor could run at ~330MHash/s, which is pretty hefty compared to GPU mining and at a fraction of the price. It is also pretty low power, using only 2.5 watts.

ASICminer USB Block Eruptor

ASICminer USB Block Eruptor

So I figured that I would get one of those, but also devised a more complete and formal parts list:

  • 1 x Raspberry Pi
  • 1 x ~4GB SD Card
  • 1 x Micro USB Cable
  • 1 x Network Cable
  • 1 x Powered USB HUB
  • n x USB Block Eruptor

That’s the basics of it. I already had the Raspberry Pi, the necessary cables, and the SD card – these were just lying around. I needed to purchase a USB hub, so I bought a 7-port model for about $20. The hub needs to be powered, as it will be running both the Raspberry Pi and the USB Block Eruptors. Considering power, the Raspberry Pi claims to draw somewhere around 1-1.2 amps maximum while each USB Block Eruptor claims to draw 500 milliamps maximum (both figures at 5 volts on the USB side). I tested things out using my Kill A Watt and found that my setup with the USB hub, Raspberry Pi, and three USB Block Eruptors draws only 170 milliamps at the wall and uses only 12.5 watts! So the projected power usage seems off for me, but I can’t guarantee the same results for you.

After buying my USB Block Eruptor on Amazon, I got it in about a week. The day after I got it and made sure it was working, I ordered two additional units to fill out the hub a little more.

Software

To do anything with Bitcoin, we’re first going to need a wallet and a Bitcoin address. Which wallet software you use is up to you. For desktop apps, there are the original Bitcoin-Qt, MultiBit, and other third-party wallets. There are mobile applications like Bitcoin Wallet for Android, and even web-based wallets like Blockchain that store your bitcoins online. Figure out what client works best for you and use it to generate your Bitcoin address. The address is a series of alphanumeric characters that acts as a public key anyone can use to send you bitcoins. This address is also linked to a private key, not meant to be distributed, which allows the address holder to transfer funds.

In order to use the USB Block Eruptor, we’re going to need mining software. One great thing about using the Raspberry Pi as a platform is that someone has already made a Bitcoin mining operating system called MinePeon, built on Arch Linux. The distribution combines existing mining packages cgminer and BFGminer with a web-based GUI and some nice statistical elements. You’re going to need to download this.

To copy the operating system image file onto the SD card for your Raspberry Pi, insert the card into your computer and format it with the application of your choosing. Since I did this with a Windows system, I used Win32 Disk Imager. It is fairly straightforward: choose the image file, choose the drive letter, hit Write, and you’re done.

Win32 Disk Imager

Win32 Disk Imager

Okay, software is ready. Now to set up the Raspberry Pi, insert the SD card into the Pi’s slot. To power the Pi, plug your Raspberry Pi into one of the hub’s USB ports via the Micro USB cable. Then, plug the USB hub into one of your Raspberry Pi’s free USB ports. Any USB Block Eruptors you have can be plugged into the remaining USB ports on the hub (not the Raspberry Pi directly), but keep heat flow in mind as they get pretty hot. Next, connect the Raspberry Pi to your home network. Finally, plug your hub’s power cord in and let the Pi boot up.

To go any further, you will need to determine your Raspberry Pi’s IP address. If your router allows for it, the easiest way is to log in to it and look for the new device on your network list. Alternatively, plug a monitor or television into your Raspberry Pi and log directly into the system using minepeon as the user and peon as the password. From there, run the ifconfig command to retrieve the internal IP address.

Navigate to MinePeon’s web interface by typing the IP address in your browser. You’ll have to log in to the web interface using the previously defined credentials: minepeon as the user and peon as the password. You should be presented with a screen similar to this (but without the graphs filled in):

MinePeon Home Screen

MinePeon Status Screen

If you receive an in-line error about the graphs not being found, don’t worry. You just need to get mining and they will generate automatically, making the message go away.

Configuration

In order to utilize the software, you will now need to register with one or more Bitcoin mining pools. Mining pools work by distributing the work across many users. The pool you are connected to tracks each user’s attempts at solving the next block for the block chain; on proof of an attempt, the user is awarded a share. At the end of the round, any winnings are divided among users based on how much power (how many shares) they contributed. Why use a mining pool at all? When mining solo, the payout only goes to whoever actually solves a block, and it can be very difficult for a single user to solve one – even after months of trying, the odds of success are always the same. Mining independently offers the chance of a giant payout at some point, but pooled mining offers smaller, more regular payouts.
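
As a rough illustration (this is not Slush’s actual scoring formula, which weights recent shares more heavily to discourage pool hopping), the basic proportional split looks something like the Java sketch below. The share counts and fee are made-up example numbers.

// Simplified proportional pool payout: the block reward, minus the pool's fee,
// is split according to each user's fraction of the round's shares.
public class PoolPayout {
    static double payout(double blockRewardBtc, double poolFee, long myShares, long totalShares) {
        return blockRewardBtc * (1.0 - poolFee) * ((double) myShares / totalShares);
    }

    public static void main(String[] args) {
        // Example: 1,906 of 5,000,000 shares in a round, 25 BTC reward, 2% pool fee.
        System.out.printf("My cut: %.8f BTC%n", payout(25.0, 0.02, 1906, 5_000_000));
    }
}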

Mining pools can differ greatly in how the payout is divided. I’d advise that you do some research as to which method works best for you. Alternatively, you could go about setting up your own mining pool, but I wouldn’t advise it without a substantial amount of processing power unless you’re willing to wait for a payout (if it even comes at all).

Anyway, I chose two mining pools: Slush’s pool (my primary) and BTC Guild (my fail-over in case my primary is down).

Registering with a mining pool is as simple as registering with any other website. After completing registration, you will be supplied with a worker name/password and the server address. These credentials can then be pushed into the MinePeon Pools configuration like so:

MinePeon Pool Settings

MinePeon Mining Pools

Next, go to the Settings page and change your password for the MinePeon web interface (it’s a good idea), the timezone (this is buggy right now and won’t look like it’s working on the settings page), and the amount you want to donate to the MinePeon maintainer (if any).

MinePeon Settings

MinePeon Settings

If everything is configured correctly, within a few hours (or instantly), you should see some activity on your MinePeon Status screen. Additionally, be sure to check your account on the mining pool you signed up with to make sure everything is working as expected.

My MinePeon Pool & Device Status

My MinePeon Pool & Device Status

My Slush's Pool Worker Status

My Slush’s Pool Worker Status

Now, just sit back and let your machine go to town. The only thing you have to do at this point is make sure the USB hub continues to get power (don’t let anyone unplug it) and it should run continuously. During your first day or two, it may take a while for your status to update, and payout will take even longer.

My mining rig, hub side

My mining rig, hub side

My mining rig, Raspberry Pi side

My mining rig, Raspberry Pi side

Most mining pools offer status tracking for your payout, so you should be able to see how things are progressing fairly quickly. As of this article being published, I receive a payout of 0.01 bitcoin roughly every 24-30 hours while running three USB Block Eruptors.


Mining Bitcoin for Fun and (Basically No) Profit, Part 1: Introduction


Note: This article is the first entry in a series I am writing for Philly2600.

If you’re anything like myself, you’ve been keeping loose tabs on Bitcoin over the years. When I first read about the cryptocurrency, I thought it was an awesome concept. Now, I had heard about electronic currencies before. The first mental link I made upon hearing about Bitcoin was that it reminded me of e-gold. Founded in 1996, years before PayPal, e-gold was a gold-backed digital currency created by a few guys in Florida. e-gold was the de facto currency for underground transactions, and was recently referenced in Kevin Poulsen’s Kingpin as the currency of choice for the carder market – the collection of online outlets for buying and selling credit card information.

egold's logo

egold’s logo

Though it was forged near the tail end of e-gold’s run and adopted a similar concept, Bitcoin turned out to be a different beast entirely. While it is still favored for underground transactions, closely integrated with controversial websites like The Silk Road, the currency had striking differences that allowed it to come into its own. Bitcoin is distributed (read: peer-to-peer), decentralized, and considered fiat money (as opposed to representative money). There is no central authority to go to with legal matters, you cannot simply flick a switch and shut down the network, and the currency only has value because we give it value – it isn’t backed by gold or silver or another currency.

Bitcoin also has an interesting history. The identity of the creator, who goes by the name Satoshi Nakamoto, is still unknown to the public. Many theories have come up as to who the person behind Bitcoin really is; speculation ranges from an academic team to a government agency to a reclusive cryptographer. If you want to see more speculation, there’s an interesting Vice article about the whole thing I’d recommend checking out.

Background

In 2011, I got interested enough in Bitcoin to set up my own wallet, download the block-chain, and set up little donate buttons on a blog or two. The donations never rolled in (and why would they), but my fascination with the technology did. The idea of a monetary system that worked sort of “like BitTorrent” not only held my attention because of the possibility of financial success but also because it made me feel like I was at the forefront of something cool and exciting. I pictured scenes straight out of Serial Experiments Lain or Neuromancer with a dingy apartment somewhere in a dense city. A patchwork of tangled computer cables linking unknown and mysterious hardware together to just run and create money for me while I’m out. Nothing ever sounded both so cyberpunk and actually possible (though probably not as bleakly artistic).

At the time, CPU mining was on its way out as GPU mining was taking over. The internet was flooded with pictures of enthusiasts’ mining rigs: case-less computers, motherboards with a large number of PCI slots, each filled with a top-of-the-line graphics card. One of these setups was big, hot, messy, expensive, and beautiful. Usually a person would have a few of these chaotic mining machines all running in the same room, and they captured the cyberpunk feel I so badly wanted to create for myself. I wanted the hectic rat’s nest of wires and the satisfaction of a successful rig build.

The "Super Rig"

The “Super Rig”

I never got that far. Building a machine to do this was an expensive process and I didn’t want to put a huge investment on the line when I was operating on a limited budget in the first place. So, ultimately, I steered away from mining as a whole.

That didn’t turn me off from the whole technological concept though. I did end up surveying the field to see what people were using Bitcoin for. The possibilities seemed endless. Aside from sales of underground goods, I saw there was gambling, web hosting, and even retail sales (including some coffee shops). Pretty much any type of business that could accept Bitcoin was starting to have outlets that accepted the digital currency. I did what any Bitcoin novice did: got my 0.005 BTC from BitFountain for free (now defunct), and sat on it. No use doing anything with it. One bitcoin was worth around $8 USD at the time, so I had about four cents.

After the GPU mining wave, I next saw the FPGA generation. FPGA stands for Field Programmable Gate Array and is pretty much self-explanatory. An FPGA has a hardware array of logic gates for your typical AND or XOR operations. Sequences of logic gates can be put together to form half-adders and multiplexers and eventually processors (when you chain enough smaller components together). Normally, you would have all of these components pre-determined in some type of integrated circuit called an ASIC (standing for Application Specific Integrated Circuit), which is designed and programmed only for certain unique tasks. Think of an FPGA as a breadboard for the final ASIC design. Both the FPGA and ASIC are programmed in an HDL (hardware description language) such as VHDL or Verilog (or any other ones you might remember from a System Architecture class). Unlike your typical object-oriented or scripting languages, an HDL is more suited to the Electrical Engineer than the Software Engineer (me). An HDL allows you to create models and interactions of hardware components as though you had them available physically.

XILINX Spartan-3E FPGA

XILINX Spartan-3E FPGA

As you’d guess, FPGAs were a favorite for Bitcoin mining enthusiasts. Developers would program the boards to mine Bitcoin and leave behind anyone still pushing their GPUs to the limit. For me, FPGAs were still a sizable investment. Though likely not as much as an outfit of new GPUs coupled with enough electricity to power a small town, the cost of an FPGA was still a few hundred dollars. On top of that, I’d still have to dust off some of my class notes and program the thing. It would have been fun and a great learning experience, but at the time I didn’t want the hassle. Besides, something better was coming soon anyway.

In the summer of 2012, I started discussing with my co-workers the feasibility of setting up a Bitcoin mining operation together. The whole concept was relatively simple: we were all going to throw money in for a new USB-connected ASIC chip and run it off of a computer of our own. We did the math to figure out power consumption, our initial investment, mining complexity increase, etc., and the numbers for our break-even point looked pretty good. The company we were looking at for our miner was Butterfly Labs, who boasted they could provide a chip with an incredible hash rate at only a few hundred dollars. Split between a few people, it didn’t seem like too bad of a deal. Then we started looking into the company. They were plagued with manufacturing delays. When you couple the time delay with the growing mining complexity, your return takes much longer. Couple that with the fact that Butterfly Labs hadn’t delivered anything yet, and the whole thing could have been someone’s pie-in-the-sky idea or a giant scam. We decided to shut down our little plan and save ourselves the aggravation. This ended up being a wise decision. Butterfly Labs continued to be plagued by delays and people ended up auctioning off their pre-orders. There are still not that many Butterfly Labs ASIC chips out in the wild, even now.

Butterfly Labs ASIC Miner

Butterfly Labs ASIC Miner

After all this, I still wanted to try my hand at Bitcoin mining.

I knew that I wasn’t going to make a lot of money, but I thought it would be fun. If it made me any money, any at all, that would be something. So I got to work doing a little research.
