Recollecting The Future— An Omni Retrospective

This article was originally written for and published at Neon Dystopia on June 28th, 2017. It has been posted here for safekeeping.

The first time I read anything out of Omni, I probably had a completely different experience than you did. The original run of Omni, the iconic science and science fiction magazine, ran in print from 1978 to 1995, ending when I was just four years old. The first time I eyed the pages was maybe only five years ago — on my computer screen, with issues batch-downloaded from the Internet Archive back when you could find them there. While I didn’t get that feeling that comes with curling, sticky-with-ink pages between my fingers, or the experience of artificial light reflecting off of the glossy artwork and into my retinas, I was able to ingest the rich content all the same. While there is something sterile and antiseptic about reading magazine scans from a computer, there is more to say about reading Omni, in particular, this way; what the readers have been dreaming about since the publication’s inception has become reality. I have a whole archive of Omni issues that I can take with me everywhere in my pocket. To reference an oft-used quote from Ben Bova, a former editor of Omni, “Omni is not a science magazine. It is a magazine about the future.” The future is now, and maybe some things did end up changing for the better.

The iconic Omni logo.

Omni (commonly stylized as OMNI) got its start in a rather interesting way back in the late 1970s. Publisher Kathy Keeton, who had previously founded Viva (1973), an adult magazine aimed at women, and held a high-ranking position at the parent company for Penthouse (1965), proposed an idea for a new type of scientific magazine to Penthouse founder and her future husband, Bob Guccione. Keeton and Guccione would develop the concept of a magazine focusing on science, science fiction, fantasy, parapsychology, and the paranormal. It was a departure from both the over-the-top, pulp science fiction magazines of the 1940s and stiff academic scientific journals of the time aimed at pipe-smoking professionals in three-piece suits. Omni was aimed more at the layperson in that its content was accessible yet serious — they bought into their own brand and expected you to as well.

Guccione and Keeton, image via

A strange departure from the pornographic roots of both Keeton and Guccione, Omni was a completely new beast, eccentric and untried. While other science publications looked to ground their content in what was concrete, Omni focused on the future with wonder and a sense of possibility. An issue may contain irreverent, gonzo articles about alien abductions, chemical synthesis of food, what personality traits should be given to robots, thoughts on becoming a cyborg, or the computer-centric musings of lunatic/genius Ted Nelson. You could find articles on drugs back-to-back with discussions of high-tech surgical procedures or homebrew aeronautics. No topic was off limits, and with decent rates for writers, the weird had a chance to turn pro. Whether it was intentional or not, Omni adopted a laid-back, transgressive west coast culture that praised the strange and favored the “out there.” For a lot of readers in the late 1970s, this was the only place to get this type of content. People couldn’t pull out a cell phone and hop online to get the vast swathes of information that we can now – there was an ever-present undertone of information isolation, and Omni filled the void.

Omni‘s premier issue, October 1978.

While the magazine was well known for its articles and comprehensive interviews with the scientific elite such as Future Shock (1970) author Alvin Toffler or astrophysicist Carl Sagan, the most flirtatious quality of Omni was its illustrations. Slick and glossy, Omni never failed to draw a shy eye from a newsstand with bright colors and airbrushed art of feminine androids or lush mindscapes in high contrast. Many notable artists contributed their work to Omni, including John Berkey, H.R. Giger, and De Es Schwertberger. For each issue, the gold-chain-wearing Guccione was said to personally pick the featured art; each image was part of what he wanted to convey with the magazine, and it might be more than a coincidence that many of the covers featured women. Omni wasn’t the only publication of the time with intricate cover art, as Heavy Metal (1977) featured similarly detailed depictions, and even shared some of the same artists. Where Omni really set itself apart from others was through the use of artwork throughout the periodical as a whole. With sprawling, multi-page illustrations, the art didn’t take a backseat to the articles. Just flipping through, someone might think that they had picked up an art magazine; Omni was never one to skimp on the visuals.

H.R. Giger’s Dune concept art would grace the cover of Omni‘s November 1978 issue.

Above all of the articles and artwork, Omni may have been most celebrated for its short fiction. At its inception, nobody really knew how to react to Omni’s foray into the science fiction ecosphere. The SF community was tight-knit and amiable, but it wasn’t exactly used to sleek publications from groups that had no background in the subject showing up suddenly in bookstores and comic shops. Keeton and Guccione did their homework and hired notable editors Ellen Datlow and Ben Bova (a six-time Hugo Award winner during his tenure as editor of Analog Magazine (1930)) to seek out content for publishing. Like Bova, Datlow was a fan of the genre herself and wasn’t afraid to push the envelope when it came to buying far-out fiction. The magazine came to showcase science fiction for science fiction fans, and didn’t subdue or water down the content for the sake of a broader audience – substance was never sacrificed for the bottom line. With contributors like Robert Heinlein, Orson Scott Card, Isaac Asimov, William Gibson, Bruce Sterling, George R.R. Martin, and William S. Burroughs, there were not only excerpts from larger works but complete short stories introduced to the world for the first time. William Gibson notably published “Johnny Mnemonic” (1981), “Burning Chrome” (1982), and “New Rose Hotel” (1984) through Omni, creating the foundation for the Sprawl that would later host some of his most celebrated novels: Neuromancer (1984), Count Zero (1986), and Mona Lisa Overdrive (1988). Omni helped launch the careers of many science fiction authors, and fostered exposure for countless others over its two-decade reign.

The first page of William Gibson’s “Burning Chrome,” featured in Omni‘s July 1982 issue.

Omni thrived for many years as it continued to dazzle readers with exciting and thought-provoking composition. Eventually, Omni would get its own short-lived television show, Omni: The New Frontier (1981), a webzine on Compuserve, and six international editions with varied amounts of both reprinted and original articles. Later in the publication’s life, the articles took a more paranormal slant that left many fans and contributors wondering about the magazine’s direction. Keeton and Guccione had always held a lot of interest in the paranormal and found a kindred spirit in their editor-in-chief, Keith Ferrell, who took up the position in Omni’s twilight years. While Ferrell wanted to shift Omni’s focus towards the vetting of the unexplained, the magazine couldn’t sustain itself for much longer. Omni published its last print issue in the winter of 1995, citing rising production costs. At the time, Omni had a circulation of 700,000 subscribers, many of whom were left high and dry if they hadn’t already abandoned the publication with the recent shift in focus. While it is unknown whether production costs alone were to blame for Omni’s demise, many surmise Guccione’s strategy of funding Omni via Penthouse’s profits was starting to fall apart. As free Internet pornography flourished in the 1990s, it put the older print industry in jeopardy. The ship was sinking, and something would need to be thrown overboard before the situation would get any better for the Penthouse empire.

“Some of Us May Never Die,” article by Kathleen Stein, October 1978.

Omni did not disappear completely, and successfully transitioned into an online-only magazine dubbed Omni Internet in 1996. At the time, there were few examples of magazines that had made the jump to digital. Omni embraced the new format, which allowed them to play with how content was structured and draw in fans with interactive chat sessions. While a conventional magazine could only be published once a month, Omni could now report on scientific news as it happened, setting a standard in web-based journalism that we still see today. This was a “blog” before Jorn Barger coined the term the following year, and it holds a spot as an early example of the format. In 1997, Kathy Keeton died due to complications from surgery, and Omni closed down completely just six months later. While Omni was the brainchild of both Keeton and Guccione, Keeton had always been the main driving force that held all of the moving parts together. The publication that always promised to bring the future would now become static, slowly fading into the past with each passing year. While no new content was published, the site remained online until 2003, when Guccione’s publishers filed for bankruptcy.

AOL welcomes you to Omni Online, one of Omni‘s first forays into the web. Image via

While Omni may have ceased production, its strong legacy cannot be questioned. While fans wax rhapsodic about the old days, there are plenty of remnants left over that have diffused the iconic Omni spirit to new generations. Wired (1993) owes just as much to Omni as it does its precursor, Language Technology (1986), as well as Mondo 2000 (1989), bOING bOING (1988), and Whole Earth Review (1985). While Wired didn’t rely on science fiction, it still yearned for a techno-utopian future and even hired on some Omni expatriate short fiction writers to become reporters for the new digital revolution. Later, in 2005, a little-known website was launched, claiming to be cared for by former staff and contributors of Omni, including Ellen Datlow, the fiction editor who worked at Omni for nearly its entire run. In 2008, the website io9 was created by Gawker Media (later a property of Gizmodo) specifically to cover science and science fiction, just as Omni had done decades before. io9’s slogan, “We come from the future,” echoes Ben Bova’s Omni quote, and helps cultivate an image of the site as a spiritual successor.

Wired issue #1 from March, 1993. The cover features an article by science fiction writer and Omni contributor Bruce Sterling.

In 2010, Bob Guccione passed away at the age of 79 following a battle with lung cancer. While Guccione’s death didn’t immediately influence the future of Omni, it set off an interesting chain of events that would ultimately lead to a rebirth of the publication. In 2012, businessman Jeremy Frommer bought a series of storage lockers in Arizona, one of which happened to contain a large amount of Guccione’s estate. Frommer was enamored by his discovery and immediately fell down the strange and multifaceted rabbit hole of Guccione’s mind. With the help of his childhood friend, producer Rick Schwartz, Frommer started building The Guccione Archive in a nondescript New Jersey building. Out of the entirety of Guccione’s life work, Frommer became fixated on Omni and explored the collection of production materials, pictures, 35-mm slides, and original notes generated from the life of the magazine. It wasn’t long before he brought in others to sort the collection and track down original artwork used for Omni that he could purchase and add to the archive. For Frommer, it became a passion to create the most complete collection of Omni-related materials, and he had to get his hands on every bit he could find.

Artwork like this Di Maccio piece showcasing “Psychic Warriors,” by Ronald M. McRae was being tracked down to add to the archive.

Organically, Frommer came to the conclusion that he needed to take the next logical step: he needed to reboot the magazine. When Claire Evans, pop singer and editor of Vice Media’s Motherboard blog, met with Frommer to cover the Omni collection, she was asked if she would be interested in working on a new Omni incarnation. In August 2013, Evans was named editor-in-chief of Omni Reboot and got to work. The new publication was off to a strong start as Evans was able to get submissions and interviews from Omni alumni and science fiction icons like Ben Bova, Rudy Rucker, and Bruce Sterling. Free scans of Omni back issues were made available at the Internet Archive, and the blood once again flowed through Omni’s withered veins. Nostalgia was in full force as people rediscovered their favorite article or got caught up in the whole journey of the reboot. Quickly though, criticism came in about the quality of the content and what would become of the Omni legacy under the new owners. Not everyone thought the magazine should be rebooted.

An excerpt from “Hi Everyone. Welcome to the New Omni,” by Claire Evans, which served as the introductory article to Omni Reboot (August 2013).

While Omni Reboot still conformed to the science and science fiction roots of the original, it did so with a higher dose of cynicism than the original ever had. When the old fantasies of science and technology started to become reality, it was only natural for people to consider how the world around them could grow to bite its master and send civilization into a negative spiral. In many cases, it already had. In contrast, the zeitgeist 40 years ago slanted more towards the hopeful and idyllic; people used to have conversations about how an advancement would influence mankind and drive the world ahead. That’s not to say everything back then was so hunky-dory – we always see the past through rose-colored glasses, smoothing over the corruptions and jagged edges in our memories. While the world aged and evolved, so did opinions towards its technocentric trajectory. Etherealism was no longer in vogue.

As the reboot grew, it started to come under fire from authors as it was discovered that submitted work would become owned by Jerrick Media, Omni Reboot’s new parent, for a year after publication. Further fueling controversy, the back issues of Omni on the Internet Archive were removed in 2015, much to the disappointment of readers. In 2016, Jerrick Media released Vocal, a publishing platform for freelance writers to make money based on article views. Omni Reboot (now just Omni) was re-purposed as one of Vocal’s verticals, serving as an interface for science and science fiction related submissions from the Vocal content network. Browsing Omni now, there is an overwhelming, uneasy feeling of quantity over quality. A name that once stood proudly and represented a home for the weird and futuristic had become just a limb on a larger, alien body.

Omni’s current homepage, via

A few weeks ago in May of 2017, we got another chance to relive the rich history of Omni as back issues became available once again. Unlike the older Internet Archive scans, these new transfers are in high resolution, with each and every issue accounted for. Also unlike the older transfers, each issue costs $2.99 for a digital copy, though they are free if you happen to be a Kindle Unlimited subscriber. While it’s bittersweet that these issues are now available for pay, profits from their sales are going to a good cause. Omni announced a partnership with the Museum of Science Fiction, which receives a portion of the revenue from each purchase.

Cover of Omni‘s February 1987 issue. Just one of many back issues that are now available for purchase.

While Omni has changed many times over the years, it is still remembered as that wondrous and weird, crazy and cool publication delivered to mailboxes month after month. Omni worked best because it didn’t try to fit into an existing space, it pioneered its own; it became a destination that amalgamated the new, the scary, and the unknown. In the true Omni spirit, we have no idea what is coming for the publication as the clock ticks ahead. Will it thrive and continue on, or become another ghost in the machine, slowly obsolescing?

The future is ahead of us, and for now, we can only look to it with hope and wonder.

We can only look to it just like Omni would want us to.


Snoop Unto Them As They Snoop Unto Us

This article was originally written for and published at Exolymph on May 15th, 2017. It has been posted here for safekeeping.

Artwork by Matt Brown.

Abruptly returning to a previous topic, here’s a guest dispatch from Famicoman (AKA Mike Dank) on surveillance and privacy. Back to the new focus soon.

The letter sat innocently in a pile of mail on the kitchen table. A boring envelope, nondescript at a glance, that would become something of a Schrödinger’s cat before the inevitable unsealing. The front of it bore the name of the sender in bright, black letters — “U.S. Department of Justice — Federal Bureau of Investigation.” This probably isn’t something that most people would ever want to find in their mailbox.

For me, the FBI still conjures up imagery straight out of movies, like the bumbling group in 1995’s Hackers, wrongfully pursuing Dade Murphy and his ragtag team of techno-misfits instead of the more sinister Plague. While this reference is dated, I still feel like there is a certain stigma placed upon the FBI, especially by the technophiles who understand there is more to computing than web browsers and document editing. As laws surrounding computers become more sophisticated, we can see them turn draconian. Pioneers, visionaries, and otherwise independent thinkers can be reduced to little more than a prisoner number.

Weeks earlier, I had submitted a Privacy Act inquiry through the FBI’s Freedom of Information Act service. For years, the FBI and other three-letter agencies have allowed people to openly request information on a myriad of subjects. I was never particularly curious about the outcome of a specific court case or what information The New York Times has requested for articles; my interests were a bit more selfish.

Using the FBI’s eFOIA portal through their website, I filled out a few fields and requested my own FBI file. Creating a FOIA request is greatly simplified these days, and you can even use free services to generate forms that can be sent to different agencies. I only opted to pursue the FBI at this time, but could always query other agencies in the future.

The whole online eFOIA process was painless, taking maybe two minutes to complete, but I had hesitations as my cursor hovered over the final “Submit” button. Whether or not I actually went through with this, I knew that whatever information the FBI already had on me wouldn’t change. They either have something, or they don’t, and I think I’m ready to find out. With this rationalization, I decided to submit — in more ways than one.

The following days went by slowly and my mind seemed to race. I had read anecdotes from people who had requested their FBI file, and knew the results could leave me with more questions than answers. I read one account of someone receiving a document with many redactions, large swathes of blacked-out text, giving a minute-by-minute report of his activities with a collegiate political group. A few more accounts mentioned documents of fully-redacted text, pages upon pages of black lines and nothing else.

What was in store for me? It truly astonishes me that a requester can get back anything at all, even a simple acknowledgement that some record exists. In today’s society, where almost everyone has a concern about their privacy, or at least an acknowledgement that they are likely being monitored in some way, the fact that I could send a basic request for information about myself seems like a nonsensical loophole in our current cyberpolitical climate. You would never see this bureaucratic process highlighted in the latest technothriller.

About two weeks after my initial request, there I was, staring at the letter sticking out from the mail stack on the kitchen table. All at once, it filled me with both gloom and solace. This was it, I was going to see what it spelled out, for better or worse. Until I opened it, the contents would remain both good and bad news. After slicing the envelope, I unfolded the two crisp pieces of paper inside, complete with FBI letterhead and a signature from the Record/Information Dissemination Section Chief. As I ingested the first paragraph, I found the line that I hoped I would, “We were unable to identify main records responsive to the FOIA.”

Relief washed over me, and any images I had of suited men arriving in black vans to take me away subsided (back down to the normal levels of paranoia, at least). It was the best information I could have received, but not at all what I had expected. For over ten years, I have been involved in several offbeat Internet subcultures and groups, and more than a few sound like reason enough to land me on someone’s radar. I was involved with a popular Internet-based hacking video show, held a role in a physical hacking group/meeting, hosted a Tor relay, experimented openly with alternative, secure mesh networks, sysop’d a BitTorrent tracker, and a few other nefarious things here and there.

I always tried to stay on the legal side of things, but that doesn’t mean that I don’t dabble with technologies that could be used for less-than-savory purposes. In some cases, just figuring out how something can be done was more rewarding than the thought of actually using it to commit an act or an exploit. Normal people (like friends and coworkers) might call me “suspicious” or tell me I was “likely on a list,” but I didn’t seem to be, from what I could gather from the response in front of me.

When I turned back to read the second paragraph, I eyed an interesting passage: “By standard FBI practice and pursuant to FOIA exemption… and Privacy Act exemption… this response neither confirms nor denies the existence of your subject’s name on any watch lists.” So maybe I was right to be worried. Maybe I am being watched. I would have no way of knowing. This “neither confirms nor denies” response is called a Glomar, which means my information has the potential to be withheld as a matter of national security, or over privacy concerns.

Maybe they do have information on me after all. Even if I received a flat confirmation that there is nothing on me, would I believe it? What is to prevent a government organization from lying to me for “my own good”? How can I be expected to show any semblance of trust at face value? Now that all is said and done, I don’t know much more than I did when I started, and have little to show for the whole exchange besides an official request number and a few pieces of paper with boilerplate, cover-your-ass language.

If we look back at someone like Kevin Mitnick, the cunning social engineer who received a fateful knock on his hotel door right before being arrested in early 1995, we see a prime example of law enforcement pursuing someone not only for the actions they took, but for the skills and knowledge they possessed. Echoing Operation Sundevil only five years prior, government agencies wanted to make examples out of their targets and use scare tactics to keep others in line.

I can’t help but think of “The Hacker Manifesto,” written by The Mentor (an alias used by Loyd Blankenship) in 1986. “We explore… and you call us criminals. We seek knowledge… and you call us criminals,” Blankenship writes shortly after being arrested himself. Even if I received a page of blacked-out text in the mail, would I be scared and change my habits? What if I awoke to a hammering on my door in the middle of the night? I still don’t know what to make of my response, but maybe I’ll submit another request again next year.

Knock, knock.


The Best of 2016

See the 2015 post here!

Here is my second installment of the best things I’ve found, learned, read, etc. These things are listed in no particular order, and may not necessarily be new.

This annual “Best Of” series is inspired by @fogus and his blog, Send More Paramedics.

Favorite Blog Posts Read

Articles I’ve Written for Other Publications

I’ve continued to write for a few different outlets, and still find it a lot of fun. Here is the total list for 2016.

Favorite Technical Books Read

I haven’t read as much this year as I did previously.

  • The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary – Really cool book about early community software development practices (at least that’s what I got out of it). Also covers some interesting history on the start of time-sharing systems and the move to open-source platforms.
  • Computer Lib – An absolute classic, the original how-to book for a new computer user, written by Ted Nelson. I managed to track down a copy for a *reasonable* price and read the Computer Lib portion. Still need to get through Dream Machines.

Favorite Non-Technical Books Read

Number of Books Read


Favorite Music Discovered

Favorite Television Shows

Black Mirror (2011), Game of Thrones (2011), Westworld (2016)

Programming Languages Used for Work/Personal

Java, JavaScript, Python, Perl, Objective-C.

Programming Languages I Want To Use Next Year

  • Common Lisp – A “generalized” Lisp dialect.
  • Clojure – A Lisp dialect that runs on the Java Virtual Machine.
  • Go – Really interested to see how this scales with concurrent network programming.
  • Crystal – Speedy like Go, pretty syntax.

Still Need to Read

Dream Machines, Literary Machines, Design Patterns, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10

Life Events of 2016

  • Got married.
  • Became a homeowner.

Life Changing Technologies Discovered

  • Amazon Echo – Not revolutionary, but has a lot of potential to change the way people interact with computers more so than Siri or Google Now. The fact that I can keep this appliance around and work with it hands free gives me a taste of how we may interact with the majority of our devices within the next decade.
  • IPFS – A distributed peer-to-peer hypermedia protocol. May one day replace torrents, but for now it is fun to play with.
  • Matrix – A distributed communication platform, works really well as an IRC bridge or replacement. Really interested to see where it will go. Anyone can set up a federated homeserver and join the network.

Favorite Subreddits

/r/cyberpunk, /r/sysadmin, /r/darknetplan

Completed in 2016

Plans for 2017

  • Write for stuff I’ve written for already (NODE, Lunchmeat, Exolymph, 2600)
  • Write for new stuff (Neon Dystopia, Active Wirehead, ???, [your project here])
  • Set up a public OpenNIC tier 2 server.
  • Participate in more public server projects (ntp pool, dn42, etc.)
  • Continue work for Philly Mesh.
  • Do some FPGA projects to get more in-depth with hardware.
  • Organization, organization, organization!
  • Documentation.
  • Reboot Raunchy Taco IRC.

See you in 2017!


It’s Warm, Like Flesh

This article was originally written for and published at Exolymph on April 28th, 2016. It has been posted here for safekeeping.

As technology evolves, the line between science and science fiction starts to blur. At one point, the thought of space travel or even micro-computing was only a dream of the future, yet it became a reality within or before our lifetimes. More and more, we find ourselves questioning if something is real or only exists in thought — a pie-in-the-sky dream of hopefuls or holdouts. We are starting to find that the future is now, whether we are ready for it or not.

A video about a modular life-form grown from human cells made its rounds on the Internet only a few weeks ago. In the video, you are first presented with a couple of slabs of meat on a stainless steel counter. Cut to a scientist who introduces you to “OSCAR”, a modular human-like organism. We see Oscar get assembled: a brain module (literally a black box of electronic components) is plugged into a heart module is plugged into a lung module is plugged into a kidney module. With each insertion, we see the creature twitch, pulsate, or squirm. Then limb modules are added and Oscar awkwardly crawls around in search of warmth.

This Cronenberg-esque video was both terrifying and fascinating. With imagery straight out of eXistenZ (1999) or Naked Lunch (1991), we watch this organic creature struggle and writhe as it gains access to new organs; this is body horror from our fever dreams and darkest nightmares. It seems real, real enough, and a large number of people believed that the video was legitimate. After making the rounds on Facebook, it was eventually discovered to be content from a science fiction web series — not a promotional video from a medical lab deep in the bowels of some no-name organization.

What does this say about the state of our society? Is it not too far-fetched to believe that someone can grow living organs, link them together, and have the resulting life-form instinctively move around the room? For years we have been influenced by news on advancements in scientific fields such as biomedical and biomolecular engineering. From the infamous WWII Soviet propaganda film Experiments in the Revival of Organisms (1940), where a dog’s head was kept alive independent of a body, all the way up to the famed 1997 Vacanti mouse, with what looks like a human ear on its back, we have been shocked and mystified by the promises of science, especially its perversions. Even today we see dispatches from the relatively new field of tissue engineering, with scientists poring over lab-grown meat cultures to be used as food or refining bioartificial liver devices constructed from animal cells.

Are we going to see this type of work packaged and sold to the consumer in a glossy box anytime soon? I don’t think so. I will admit, it would be incredibly interesting if I could head down to my local Best Buy and pick up Samsung’s new bio-hacking kit so I could grow my own cells and build a life-form as casually as I would order Sea-Monkeys from the back of a comic book. Imagine an organic branch of littleBits, selling you packs of organ and tissue for $99.95. What about a biopunk hacker who wants to grow himself a new eyeball with better night vision? This opens things up to more political and philosophical controversy.

While the video wasn’t real, we may not be lagging too far behind the concept of a modular body, technologically speaking. As the line between fact and fiction flickers and fades, we see the potential for groundbreaking scientific advancements for the human race, and unhinged scientific experiments stemming from the simple question, “What if?”

Our world may not be ready for Oscar.

Not yet.


The Best of 2015

As a nod to @fogus and his blog, Send More Paramedics, I’ve opted to start the annual tradition of recapping the year with the best things I’ve found, learned, read, etc.

These things are listed in no particular order, and may not necessarily be new.

Favorite Blog Posts Read

Not a lot here that I can recall, but this handful stood out as good reads. Some of them I plan to refer back to in the future.

Articles I’ve Written for Other Publications

I’ve tried something different this past year and have worked to write more for others than just for myself. This has been really fun, but has reduced the total number of entries I have written this year in general. I hope to find some more like-minded outlets to contribute to. I like working with small teams like this instead of bouncing ideas around with only myself.

  • Finding Forgotten Footage – An article I did for Lunchmeat Midnight Snack #4 (a print zine) about finding strange VHS tapes with home-recorded footage.
  • Automating Site Backups with Amazon S3 and PHP – An article I did for the now-defunct TechOats website (still sad about that one). As the title describes, I automated backups of my websites using Amazon S3 and a simple PHP script.
  • The New Wild West – An article for NODE about how the Internet of Things and our always-connected culture open things up again for a wide variety of attacks. I draw parallels to the 1980s boom of hacker culture, where a lot of stuff was just left wide open.
  • How to Run your Own Independent DNS with Custom TLDs – A tutorial I did for NODE after remembering the failure of the .p2p project and the success of OpenNIC.
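The TechOats piece used PHP and the AWS SDK; as a rough illustration of the same idea in Python, the local half of such a backup script might look like the sketch below. The function name is my own, and the actual S3 upload (boto3's `put_object`, or the PHP SDK call the original article used) is deliberately left out:

```python
import hashlib
import os
import tarfile
import time


def make_backup_archive(site_dir, out_dir):
    """Bundle a site directory into a timestamped .tar.gz, ready to
    hand off to S3 (the upload call itself is omitted here)."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = os.path.basename(os.path.normpath(site_dir))
    archive = os.path.join(out_dir, f"{base}-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(site_dir, arcname=base)
    # A checksum lets a cron job skip the upload when nothing changed.
    with open(archive, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return archive, digest
```

Run from cron, a script like this only needs one extra step: push the archive and compare the digest against the last run before bothering to transfer anything.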

Favorite Technical Books Read

I’ve been trying to read a lot more this year to cut through my growing pile of books. I’ve mainly focused on technical books, including books I’ve only been made aware of in 2015 as well as ones that have been on my shelf for years.

  • Garage Virtual Reality – An antiquated virtual reality book from the ’90s that touches on a lot of interesting technology of the time, including homemade projects and technological dead ends. The perfect mix of technical instruction and cyberpunk ideas.
  • Hacking the Xbox: An Introduction to Reverse Engineering – An amazing book on reverse engineering. I picked this up around a decade ago, and it was completely over my head. At the time I dismissed it because it was already outdated with the popularity of “softmods” for the Xbox, but picking it up again, it is really just a good general book on getting into reverse engineering, and the focus on the Xbox is a fun nostalgic little bonus.
  • Cybernetics – A dated and likely obscure text, this book deals with the early ideas of cybernetics and expands into theory on artificial intelligence and neural networks.

Favorite Non-Technical Books Read

  • Microserfs – A fun book that follows a group of ’90s Microsoft employees as they start their own company.
  • Crypto – An incredible look into the world of cryptography, following all of the pioneers and the cypherpunk movement.
  • Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age – My favorite book of the year, a wonderfully detailed look into the rise and fall of Xerox PARC and all of the completely fascinating things they invented.
  • The World Atlas of Coffee: From Beans to Brewing – I love coffee and this book lets you learn about all the varieties, proper brewing techniques, etc.
  • Ready Player One – A fun dystopic sci-fi book about a civilization obsessed with a treasure hunt and ’80s culture.


Number of Books Read


Favorite Musicians Discovered

  • King Tuff
  • Elle King
  • FFS – Franz Ferdinand and Sparks
  • Devo – Everyone knows “Whip It,” but I’ve been focusing on their first few albums.

Favorite Television Shows

Mr. Robot (2015), The X-Files (1993)

Programming Languages Used for Work/Personal

C, C++, Java, JavaScript, Objective-C, Python.

Programming Languages I Want To Use Next Year

  • Common Lisp – A “generalized” Lisp dialect.
  • Clojure – A Lisp dialect that runs on the Java Virtual Machine.
  • Go – Really interested to see how this scales with concurrent network programming.

Still Need to Read

Computer Lib, Literary Machines, Design Patterns, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10

Life Events of 2015

I became engaged to be married.

Life Changing Technologies Discovered

  • Amazon Dash Button – I hacked a $5 button to email me when I press it.
  • Ethereum – An interesting decentralized software platform. Still not entirely sure what to make of it.
  • Microsoft Hololens – I want one after seeing this video. I’ve already supported Oculus for VR, but this is winning me over for AR.

Favorite Subreddits

/r/homelab, /r/retrobattlestations, /r/cyberpunk, /r/homeautomation.

Plans for 2016

  • Get married.
  • Write more for NODE (if possible!), Lunchmeat, or other publications I find out about.
  • Write an article for 2600.
  • Find my missing Leatherman.
  • Release a mobile app.
  • Do some FPGA projects to get more in-depth with hardware.
  • Continue to flesh out Anarchivism with videos/print.
  • Organization, organization, organization!


See you in 2016!


The New Wild West

This article was originally written for and published at N-O-D-E on August 3rd, 2015. It has been posted here for safe keeping.


A few years ago, I was fortunate enough to work professionally with low energy RF devices under a fairly large corporation. We concerned ourselves with wireless mesh networking and were responsible for tying together smart devices, like light bulbs or door locks installed in your home, into an information-driven digital conglomerate. You know those commercials you see on TV where the father remotely unlocks the door for his child or the businesswoman checks to make sure she left the patio light on? That was us. At the touch of a button on your tablet, miles away, you can open the garage door or flip on the air conditioner. These are products that are designed to make life easier.

In research and development, we view things differently than the stressed-out, on-the-go homeowner might. We don’t necessarily think about what the user might want to buy, but ask the question, “when we roll these things out, how will people try to exploit and break them?” In the confines of a tall, mirror-glass office building, my packet sniffer lights up like a Christmas tree. Devices communicate in short bursts through the airwaves, chirping to one another for all to hear. Anyone with the curiosity and some inexpensive hardware can pick up this kind of traffic. Anyone can see what is traveling over the air. Anyone can intervene.



Things weren’t so different a few decades ago. Back in the ’70s we saw the rise of the phone phreak. Explorers of the telephone system, these pioneers figured out how to expertly maneuver through the lines, routing their own calls and inching further into the realm of technological discovery. We saw innovators like John Draper and even Steve Wozniak & Steve Jobs peeking into the phone system to see how it ticks and what secrets they could unlock. It wasn’t long before people started connecting their personal microcomputers to the phone line, lovingly pre-installed in their houses for voice communication, and explored computerized telephone switches, VAXen, and other obscure machines — not to mention systems controlled by third parties outside the grasp of good old Ma Bell.

This was the wild west, flooded by console cowboys out to make names for themselves. The systems out there were profoundly unprotected. And why not? Only the people who knew about these machines were supposed to be accessing them; no use wasting time thinking about keeping things secure. Many machines were simply out there for the taking, with nobody even contemplating how bored teenagers or hobbyist engineers might stumble across them and randomly throw commands over the wire. If you had a computer, a modem, and some time on your hands, you could track down and access these mysterious systems. Entire communities were built around sharing information to get into computers that weren’t your own, and more of these unsecured systems popped up every week. It seemed like the possibilities were endless for the types of machines you would be able to connect to and explore.

Today, many will argue that we focus much more on security. We know that there are those who are going to probe our systems and see what’s open, so we put up countermeasures: concrete walls that we think and hope can keep these minds out. But what about newer technologies? How do we handle the cutting edge? The Internet of Things is still a relatively new concept to most people — an infant in the long-running area of computing. We have hundreds if not thousands of networked devices that we blindly incorporate into our own technological ecosystems. We keep these devices in our homes and on our loved ones. There are bound to be vulnerabilities, insecurities, cracks in the armor.


Maybe you don’t like the idea of outlets that know what is plugged into them or refrigerators that know when they’re out of food. Maybe you’re a technological hold-out, a neo-luddite, a cautious person who needs to observe and understand before trusting absolutely. This may feel like the ultimate exercise of security and self-preservation, but how much is happening outside of your control?

When the concept of ubiquitous computing was first developed by Mark Weiser at Xerox PARC in the late ’80s, few knew just how prominent these concepts would be in 25 years. Ubiquitous computing pioneered the general idea of “computing everywhere” through the possibility of small networked devices distributed through day-to-day life. If you have a cellular telephone, GPS, smart watch, or RFID-tagged badge to get into the office, you’re living in a world where ubiquitous computing thrives.

We’ve seen a shift from centralized systems like mainframes and minicomputers to these smaller decentralized personal devices. We now have machines, traditional personal computers and smart-phones included, that can act independent of a centralized monolithic engine. These devices are only getting smaller, more inexpensive, and more available to the public. We see hobby applications for moisture sensors and home automation systems using off-the-shelf hardware like Arduinos and Raspberry Pis. The technology we play with is becoming more independent and increasingly capable of autonomous communication. Little intervention is needed from an operator, if any is needed at all.

For all of the benefits we see from ubiquitous computing, there are negatives. While having a lot of information at our fingertips and an intuitive process to carry out tasks is inviting, the intrusive nature of the technology can leave many slow to adopt. As technology becomes more ubiquitous, it may also become more pervasive. We like the idea of a smart card to get us on the metro, but don’t take so kindly to knowing we are tracked and filed with every swipe. Our habits have become public record. In the current landscape of the “open data” movement, everything from our cell phone usage to parking ticket history can become one entry in a pool of data that anyone can access. We are monitored whether we realize it or not.


We have entered uncharted territory. As more devices make their way to market, the more possibilities there are for people to explore and exploit them. Sure, some vendors take security into consideration, but nobody ever thinks their system is vulnerable until it is broken. Consider common attacks we see today and how they might ultimately evolve to infect other platforms. How interesting would it be if we saw a DDoS attack that originated from malware found on smart dishwashers? We have these devices that we never consider to be a potential threat to us, but they are just as vulnerable as any other entity on the web.

Consider the hobbyists out there working on drones, or even military applications. Can you imagine a drone flying around, delivering malware to other drones? Maybe the future of botnets is an actual network of infected flying robots. It is likely only a matter of time before we have a portfolio of exploits which can hijack these machines and overthrow control.

Many attacks on computer systems in the present day can trace their roots back decades. We see a lot of the same concepts growing and evolving, changing with the times to become more efficient antagonists. We could eventually see throwbacks to the days of more destructive viruses appear on our modern devices. Instead of popping “arf arf, gotcha!” on the screen and erasing your hard drive, what if we witnessed a Stuxnet-esque exploit that penetrates your washing machine and shrinks your clothes by turning the water temperature up?

I summon images from the first volume of the dystopian Transmetropolitan. Our protagonist Spider Jerusalem returns to his apartment only to find that his household appliance is on drugs. What does this say about our own future? Consider Amazon’s Echo or even Apple’s Siri. Is it only a matter of time before we see modifications and hacks that can cause these machines to feel? Will our computers hallucinate and spout junk? Maybe my coffee maker will only brew half a pot before it decides to no longer be subservient in my morning ritual. This could be a far-off concept, but as we incorporate more smart devices into our lives, we may one day find ourselves incorporated into theirs.


Just as we saw 30 years ago, there is now an explosion of new devices ready to be accessed and analyzed by a ragtag generation of tinkerers and experimenters. If you know where to look, there is fruit ripe for the picking. We’ve come around again to a point where the cowboys make their names, walls are broken down, and information is shared openly between those who are willing to find it. I don’t know what the future holds for us as our lives become more intertwined with technology, but I can only expect that people will continue to innovate and explore the systems that compose the world around them.

And with any hope, they’ll leave my coffee maker alone.



Philosophy of Data Organization

I would be a liar if I said I was an overly organized person. I believe that like things should be grouped together and everything should have its place, but I operate at a level of acceptable chaos. Nothing is organized completely, and I don’t really believe complete organization is possible on a large enough scale. Complete organization is likely to cause insanity.

When I first started accumulating data, I quickly outgrew my laptop’s 80 gigabyte hard drive. From there I went to a 150GB drive, then a pair of 320GB drives, then a pair of 1TB drives, then a pair of 2TB drives, and from there I keep amassing even more 2TB drives. As I get new drives, I like to rotate the data off of the older ones and on to the newer ones. These old drives become workhorses for torrents and rendering off video, while new drives are used for duplicating and storing data that I really want to keep around for a long, long time. The system is ad hoc, without any calculated sense of foresight. If I had the money and planning, I’d build a giant NAS for my needs. For now, whenever I need more space, I just buy another pair of drives and fill them up before repeating the cycle. This doesn’t scale very well, and I ultimately have around 25TB of storage scattered across various drives.

A few months ago, I was fortunate enough to take a class on the philosophy of mind and knowledge organization. A mouthful of a topic, I know, but it is simpler than it seems. The class revolved around one main concept: classification. We started with the Greek philosophers’ approach to organizing knowledge through the study of knowledge itself (epistemology), beginning with concepts put forth by Socrates, Plato, and Aristotle. Notably, university subjects were broken into the trivium (grammar, logic, and rhetoric) and later expanded with the quadrivium (arithmetic, geometry, music, and astronomy) as outlined by Plato. These subjects categorized the liberal arts, based on thinking, as opposed to the practical arts, based on doing. These classifications were standard in educational systems for some time.

The Trivium

A representation of the Trivium

Aristotle later reclassified knowledge by breaking everything into three categories: theoretical, practical, and productive. Each of these is broken down further. Aristotle breaks “theoretical” into metaphysics, mathematics, and physics. “Productive” is broken into crafts and fine arts. “Practical” is broken down into ethics, economics, and politics. From here, we have a more modern approach to knowledge organization. We see distinctive lines between subjects, which are further branched into more specific subjects. We also see a logical progression from theoretical to practical, and finally to productive, ultimately creating a product.

An outline of Aristotle's classification


More modern classifications pull directly from these Greek outlines. We can observe works by Hugh of St. Victor and St. Bonaventure, which mash various aspects of these classifications together to create hybrid classifications that may or may not be more successful in breaking down aspects of the world.

An interpretation of St. Bonaventure's organization


What does this have to do with data? Data, much like knowledge, can be organized using the same principles we have observed here. Remember, the key theme here is classification. We are not simply concerned with how to break up knowledge, but anything and everything that can be classified.

Think of all the possible ways you could organize films or musical artists, or even genres of music. It can be a daunting thing to even imagine. As an overarching project throughout the course, we developed classifications of our own choice. I chose to focus on videotape formats, and quickly created my own classification based on physical properties. I broke down tapes into open/closed reel, tape widths, and format families. While it might not be the best classification, I tried to approach the problem in a way that was open to empirical truth (conformity through observation) and would allow a newcomer to quickly traverse the classification branches to discover what format they are holding in their hands.
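The branching can be sketched as a small decision tree: each level narrows things down by one physical property. The handful of formats below are well-known examples chosen for illustration, not my full classification:

```python
# Reel style first, then tape width, ending at a family of
# candidate formats. (The real chart branches further.)
TAPE_TREE = {
    "open reel": {
        '2"': ["Quadruplex"],
        '1"': ["Type B", "Type C"],
    },
    "cassette": {
        '3/4"': ["U-matic"],
        '1/2"': ["VHS", "Betamax"],
    },
}


def identify(reel_style, tape_width):
    """Walk the two levels and return the candidate format family."""
    return TAPE_TREE.get(reel_style, {}).get(tape_width, [])
```

A newcomer holding a 1/2" cassette lands on the VHS/Betamax branch immediately, and only then needs finer distinctions to tell the two apart.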

An early version of my videotape classification


Classifications like this are not uncommon. Apart from the classifications of knowledge already put forward here, classification was used by Diderot and d’Alembert to create their Encyclopédie, first published in 1751. This Encyclopédie uses a custom classification of knowledge as its table of contents. While generalized to an extent (it does fit on one page), it could be expanded upon infinitely.

Encyclopédie contents


A contemporary way to organize knowledge arrives in a familiar area: the Dewey Decimal System. Though Dewey’s system has been adopted globally as the de facto method for organizing print media, can we apply this same system to our growing “library” of data? The short answer is no, not without some modification, though modifications have plagued Dewey’s system since its inception.

To understand how we can best organize our data, we must first understand the general concepts of the Dewey Decimal System. Within the system, different categories are defined by different numbers. 100 may be reserved for philosophy and psychology, while 300 may be used for social sciences, and 800 for literature. The numbering system here is intentional. Lower numbers are thought to cover the most important subjects, while higher numbers cover less important ones. These numbers are broken down further: 100 might be broken into 110 for metaphysics, 120 for epistemology, etc., with each of these being broken down again for more finite subjects.
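That nesting is mechanical enough to sketch in a few lines: each digit of a three-digit Dewey number peels off one ten-way level (the function name here is just for illustration):

```python
def dewey_levels(number):
    """Split a three-digit Dewey number into its class, division,
    and section -- the three ten-way levels of the system."""
    n = int(number)
    return (
        n // 100 * 100,  # class, e.g. 100 (philosophy and psychology)
        n // 10 * 10,    # division, e.g. 120 (epistemology)
        n,               # section
    )
```

So 121 resolves to (100, 120, 121): a section filed under epistemology, inside philosophy and psychology.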

This is just another classification, but it has its faults. The size of a section is finite, as the system is broken into 10 classes, which are broken down into 10 divisions, and finally 10 sections (hence decimal). However, the system never really accounted for the growth of new and expanding topics. As subjects like computer science emerge, which Dewey never could have imagined, we throw works like these into unused spaces. Computer science in particular is infamous, as it now occupies location 000, which in the system would make it seem more important than any other subject. Additionally, we see a loss in physical ties to the system, as libraries are intended to be organized along with it: lower numbers on the first floor, higher numbers on the higher floors. Dewey’s system is constantly being modified as new works emerge, and consistency between different libraries can come down to a single librarian choosing whether or not to implement a given change.

A simplified example of Dewey's system


While a modified version of Dewey’s system might make sense for data (as well as being somewhat familiar), we have to consider another problem which plagues the classification: titles that can occupy more than one section. Suppose that I have a book about WWII music. Do I put this book in music? Does it go in history? What other sections could it fall into? We have few provisions for this.

Data is no different in this sense. Whether I have a digital copy of a book as would be found in Dewey’s system or a podcast or anything else, there is always the potential for multiple areas a work can fall into. If you visit the “wrong” section where you might expect an object to be, you don’t have any indication that it would be somewhere else just as suitable.

What are we to do in this case? While I like to break my data down into type of media (video, audio, print, etc), I find the lower levels to get more fuzzy. Let us consider a subject which I am revisiting in my own projects: hacker/cyberpunk magazines. Even if we only focus on print magazines, we still have problems. We can see the concept of “hacking” coming from more traditional clever programming origins (such as in Dr. Dobb’s Journal), or evolved from phreaker culture (such as in TEL), or maybe from general yippie counterculture (such as in YIPL). Additionally, we can see that some of these magazines feature a large number of overlapping collaborators which make them feel somewhat similar. We also may observe that magazines produced in places like San Francisco or Austin also have a similar feel but might be much closer to other works that have no physical or personnel ties. Further, what about publications that started as print and then went over to online releases? More and more possible subgroups emerge.

At this point, we might consider work put forth by Wittgenstein based on the “family resemblance theory.” The basic idea behind this theory is that while many members of a family might have features that make them resemble one another, no single feature is shared by all the members who bear the family resemblance. Expanded, we can say that while we all know what something means, it can’t always be clearly defined and its boundaries cannot always be sharply drawn. Rosch, a psychology professor, took Wittgenstein’s concept further and hypothesized that “the task of categorization systems is to provide maximum information with the least cognitive effort.” She believes that basic-level objects should have “as many properties as possible predictable from knowing any one property.” This means that if something is part of a category, you can easily infer much more about it (if you know that 2600 is a hacking magazine, you’ll know there are likely articles in it about computers). However, superordinate categories (like furniture or vehicle) don’t share many attributes with each other. Rosch concluded that most categories do not have clear-cut boundaries and are difficult to classify. This illustrates the concept that “messiness begins within.” We get a contrast with Aristotelian “orderliness” because messiness shows that we can’t put things in their place; those places are just where things “sort of” belong. Everything belongs in more than one place, even if just a little bit. We see that order can be restrictive.

This raises the importance of metadata: data about data. While my media might be organized in a classification that doesn’t allow for “double dipping” (going against the concepts put forth by Rosch), we can utilize the different properties that pertain to each individual object. Consider the many popular torrent sites that utilize crowd-sourced tagging systems. Members can add tags to individual pieces of media (which can then be voted on as a way to weed out improper tags), allowing the media to show up in searches for each tag. We see a similar phenomenon on websites such as YouTube, which allows tagging of videos for content, though not in a crowd-sourced sense, or the Internet Archive, which supports general subject tags as well as more specific metadata fields.
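At its core, a tagging scheme like this is just an inverted index from tag to items. The class below is a toy sketch of the idea (no voting or weighting), using a couple of the magazines discussed earlier as sample entries:

```python
from collections import defaultdict


class TagIndex:
    """Map each tag to the set of items carrying it, so a single item
    can surface under any number of searches instead of living in
    exactly one section."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def tag(self, item, *tags):
        # Tags are case-folded so "Print" and "print" collide.
        for t in tags:
            self._by_tag[t.lower()].add(item)

    def search(self, tag):
        return sorted(self._by_tag[tag.lower()])


index = TagIndex()
index.tag("2600", "hacking", "print", "magazine")
index.tag("YIPL", "phreaking", "counterculture", "print")
```

A search on "print" returns both entries, while "phreaking" returns only YIPL: exactly the kind of double dipping a single-shelf classification forbids.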

Using this metadata method and my previous example, it’s easy to find magazines by location, authors, subject, contents, age, and a long list of other attributes. We can apply this to objects that aren’t the same format; there are examples of video, audio, and print that pertain to the same subjects, authors, etc. This isn’t an impossible implementation. Considering further the Internet Archive, we see thousands upon thousands of metadata-rich items which are easily searchable and identifiable. However, the Internet Archive also suffers from a lackluster interface. It might be easy to find issues of Byte magazine, but it is a lot more difficult to figure out what issues we are missing or see an organizational flow more akin to a wiki system (though both systems lend themselves well to items being in more than one place). A hybridized system like this would be an option worth exploring, but I haven’t seen an ideal execution of it yet.

While this concept of a metadata-based organizational system isn’t a foolproof solution, it is certainly a step in the right direction. We must also consider the credibility of those who contribute metadata, especially on a large-scale public system. Consider the chaos and political makeup of how Wikipedia governs editing and you’ll start to get an idea. While I’d like to implement a tagging system for my own personal media library (with my own tagging at first and the possibility of expansion), I am limited by my current conglomeration of hard drives scattered to different parts of the house, usually powered off. My next storage solution will take these ideas into planning and execution, making my data much easier to traverse. I will, however, have limitations, as I won’t have many people perpetually reviewing and tagging my data with relevant information.

That said, the idea of being able to make my data more accessible is an exciting one, and increases portability of the data as a whole if I ever need to pass it on to others. As my tastes evolve and grow, so will the collection of data I hold.

With any hope, my organized chaos will ultimately become a little more organized and a little less chaotic.

With any luck, you’ll be able to browse it one day.


Ghost in the Machine: Your Digital Afterlife

This article was originally written for and published at The New Tech on July 9th, 2013. It has been posted here for safe keeping.


On January 11th, Aaron Swartz passed away. If you’re not familiar with who he was and what he did, take a minute right now and look him up. A lot of focus was put on the circumstances of his death along with what he accomplished in life, and this seems to overshadow something that stood out to me: how to handle his legacy. Specifically, how he wanted his legacy to be handled.

Swartz created a simple web page in 2002 about how to handle things if he were to be “hit by a truck.” Who would take over his website? Where would his source code end up? He created an electronic will. The idea of a will is nothing new. Most people create a document outlining how their assets will be divided up when the time comes; it just makes things operate more smoothly. But what about the electronic world? Surely we mark who will get our house, but what about who gets our website? It sounds amusing to think about or even entertain the idea. We allocate our physical property, things that can be defined in dollars and cents, but hardly consider our intellectual property.

If you haven’t had a Facebook friend pass away yet, you’re likely in the minority. It’s sad, of course it’s sad, but it needs to be talked about. If you have ever had a Facebook friend pass, you may observe a cycle where his/her profile is used first as a memorial and then eventually deactivated altogether. These are my experiences. I understand and empathize with the feelings of the family in these circumstances, but to me this seems a little like burning all of a loved one’s possessions with the ease of a single mouse click.

These are the two ends of the spectrum.

It’s important to let go and move on, but it’s also important to remember and honor. In a basic sense, I apply the same fundamental ideas towards the death of a person as I do towards that of a technology. Most are quick to push the old out of thought, but the few make a move to preserve. I preserve. It’s just my nature.

Swartz’s situation resonated with my own beliefs. If I were to be hit by a truck tomorrow, what would happen with my stuff? My digital stuff. I run a fair number of websites, I rent a VPS and a dedicated server, and I have bills, Amazon S3, service subscriptions. If I go, they eventually do too.

I’d like to tell you that I have a contingency plan, but I don’t. I haven’t reflected fully on the logistics of it. Could I think of people to take over my digital stuff after I’m gone? Of course, but would they want to? When someone dies and it becomes your responsibility to handle their belongings, it’s not typically a drawn out process. You keep some stuff, you toss some stuff, but you don’t normally end up with something that needs to be maintained and worked through. Websites take a fair amount of time and money. Storage, while getting cheaper, is still expensive for the hobbyist. There’s unavoidable maintenance.

That said, I would hope that my online persona remains long after I do. Forum accounts, Facebook information, Twitter posts, etc. should survive as long as possible. I want everything to be available to anyone who needs it. Hand over my source code and pick apart my log files.

Open source my life.

If I’m not going to work on it anymore, I’d like to give that ability to anybody who is interested.

To have these things removed, stripped from the world, is just nonsensical to me. Someone’s interesting and original work disappearing because nobody knows how to, or wants to, handle it? Nothing upsets me more than something like that. It’s akin to tearing pages out of every history book. In our modern world, people are quick to think that things last forever. Digital artifacts can go missing overnight.

We shouldn’t be worried so much anymore about having something embarrassing stored online forever; we should be more worried about something important disappearing tomorrow.


What Administrating a BitTorrent Site Taught Me About Project Management

This article was originally written for and published at Medium on May 18th, 2013. It has been posted here for safe keeping.

In my sophomore year of college, I became an administrator of a BitTorrent website. It’s not nearly as shady as it sounds. In fact, it was a small and completely legal operation. Three administrators, one server, and hard drive after hard drive full of Creative Commons-licensed content.

Now, I’m lucky enough to attend an undergraduate school with a strong internship tie-in. We spend half of the year slaving away on our school work while the other half is spent in one of those real-world jobs. Lather, rinse, repeat for three years. On the academic side, we take our specialized engineering classes, our project management classes, our technical communication classes, our how-to-work-with people classes.

I hadn’t taken many of those yet. The first year and change at the university is mostly populated with weed-out classes and introductory curriculum akin to a secondary school elective class or two. At this point in my life, I didn’t know what makes a good project and I didn’t know what makes a project good. I didn’t know how to communicate effectively or work as part of a group. I didn’t know about Gantt charts, or deliverables, or development practices.

As I mentioned, there were three of us. One administrator I had met via online chat some months prior in a public channel. He was a decent guy, and the link between me and the mysterious third administrator, who I had never spoken with but who was providing us with a server. We all came together, communicating with each other in a strictly online format. We were geographically separated, but what did that matter with email and a few common hours when we all happened to be awake at the same time? We didn't have structure or a real thought-out plan. No documents or task lists or meetings to touch base. We carved out and constructed bits and pieces when we felt like it and waited for each other to catch up before charging forward again full steam.

It happened to be winter break, and I had plenty of free time to devote. After we eventually got the site up and operational, I spent days filling it with uploads and tutorials, configuring and reworking plugins and style sheets, setting up social networking accounts, and more or less doing my damnedest to make it ready for prime time. Then came the pay-off. A file-sharing blog picked up on the site and did a piece. Within 48 hours, news spread and we had some 3,000 members. We were being reblogged and discussed in forums. We were growing by the hour.

Sounds great, huh? It wasn’t.

While we had all been united in our quest to launch a fantastic niche torrent site, we quickly split at the seams. While I tried my best to keep a steady flow of content being uploaded to seed the site for new users, the other admins didn't seem as compelled. One simply disappeared for weeks at a time while the other decided it would be a good time to ask for donations and not do much else. Our chat sessions together got shorter and eventually vanished completely. The site stagnated except for a small group of hopefuls who kept uploading and contributing, but it amounted to too little. We fell apart. We were broken.

One day, I made a passing comment to a user about how I'd like to rebuild and relaunch the site, and then found myself stripped of my administrative permissions. I contacted the one administrator I had known prior to starting the project, and he just shrugged off the situation as weird before reinstating me into the ranks. It was too late, though. I ended up deleting my account of my own accord a week later.

I completely removed myself from the project, but that doesn’t mean I left empty-handed. I departed with lessons forged from mistakes and successes. What worked, and what didn’t. I learned the need for defining a project scope and keeping open the lines of communication. I learned the importance of meeting regularly and setting goals and being assertive. I learned sacrifice and when to cut your losses and move on.

Each one of these lessons followed me as I went from internship to internship and class project to class project. Academia can teach you a good amount about how to be a developer, but falls a little short when it comes to how to work with real people in the real world.

To learn that, well—you just need to experience it.


Documentaries I’d Like to See Before I Die (Or Everyone Forgets)

I'd like to think that I have something of a second nature for knowing whether a documentary has been made, or is in production, for any of my disjointed hobbies and interests. It's not one of those skills you showcase in a job interview, but I seem to have a knack for religiously crawling the web in search of films I think I'd enjoy. Surprisingly, and to my great pleasure, a lot of these fringe interests I possess already have films about them. Awesome. However, there are a few that simply do not, or have a film that doesn't satiate my particular appetite.

So, for my sanity, I made a list of the topics I’d personally like to see filmed. And, in some cases, some topics I’d probably find gratification in filming myself.

Written below is that very list. Think of this more as a way of me getting the thoughts from my head to paper as opposed to a list of full-bodied explanations and fleshed-out ideas.

Demoscene. There are already a few demoscene documentaries out there. For example, The Demoscene Documentary is about the demoscene in Finland, and Moleman 2 is a demoscene documentary focusing mainly on Hungary. While these are in fact good films, they each have a specific scope. From what I gather, the demoscene can be radically different from country to country, making it difficult to understand as a whole when only presented with a few of its parts. I'd propose an episodic piece showcasing the demoscene in a variety of countries – each country having its own segment. While these existing documentaries have touched on Finland and Hungary, there are still Germany, the USA, Denmark, and Norway to consider (and probably others).

Bitcoin & Digital Currency. We've all heard of Bitcoin by now, especially as it makes waves at its current high value. However, Bitcoin itself has an interesting past and makes an interesting statement. If you do any detective work into how Bitcoin came to be, you will be sucked up into a mysterious story about how nobody knows the identity of the creator or what happened to him. The conspiracy theories are vast and plenty. There is also the interesting issue of an unregulated worldwide currency: governments attempting regulation, bitcoin-mining malware botnets, attacks on exchanges, etc. How about how crazy some people go with their mining setups? Dozens of caseless computers filled with graphics cards – a cyberpunk daydream turned reality. How about using FPGAs and these new ASIC rigs? Now, that's just Bitcoin. There are numerous other digital currencies out there, such as the newer Litecoin, or even e-gold (created in 1996). Digital currency has been around longer than most people think.

Cypherpunk. The cypherpunk movement does for cryptography what the cyberpunk scene did for personal computing. While cypherpunks have been around for decades, interest within the scene has been renewed and pushed towards the mainstream more recently. Going back to "A Cypherpunk's Manifesto" and the cypherpunk mailing list, we see early discussions of online privacy and censorship, paving the way for Bitcoin, WikiLeaks, CryptoParty, Tor, 3D-printing of weaponry, etc.

Usenet. Started in 1980, Usenet is a system for users to read and post messages. Usenet can be seen as the precursor to internet forums, and is much like a Bulletin Board System in theory, except it is distributed among many servers instead of a central authority. As time goes on, Usenet continues to grow in bandwidth usage, now generating terabytes of traffic a day. This is mostly through binary file transfers as opposed to messages. Despite many major ISPs deciding to remove Usenet access from their internet services, many users still seek out paid access.

Pirate Radio UK. While Pirate Radio USA and Making Waves do a fantastic job of covering pirate radio in the US, I haven't seen much of an effort to show off pirate radio in the UK. From what I've gathered, there are an uncountable number of pirate radio stations across the pond, and it's a different game when compared to the US. At the peak of pirate radio's popularity, there were nearly 600 stations active in the UK, while there are presently around 150, mostly based in London. Here's a mini piece from Vice.

Darknet. Not in regards to file sharing. More covering the darknet as a blanket term for an independent or ad-hoc network with some sort of disconnection from the internet. Consider topics like Hyperboria and cjdns, Tor and the Deep Web, mesh networking for fun or necessity, Tin-Can, and so on. As the hardware becomes less expensive and more devices have networking abilities, creating a scalable network becomes a more achievable task.

Dyson. I feel that James Dyson doesn't get as much credit as he deserves as a revolutionary engineer. Dyson focuses on improvement: taking the wheel and making it better. No pun intended, but his first success was the creation of a fiberglass wheelbarrow that used a ball instead of a wheel. Afterwards, he famously created over 1,000 prototypes for a new vacuum cleaner using cyclone technology after noticing problems with his Hoover. Dyson repeatedly uses creative thinking and pulls inspiration from unlikely sources.

Raspberry Pi. While the Raspberry Pi was not necessarily a unique and new concept, it was certainly one of the most well executed. We have seen other incarnations of plug computers such as the BeagleBoard or the SheevaPlug, but the Raspberry Pi's addition of integrated video sets it apart. And at a price of $30, it's incredibly affordable. Many would argue that what makes the Pi so special is the community that has formed around it, and not necessarily the hardware that ties it together. Everyone stretches their imagination and expertise: if it can be on the Pi, it should. Aside from the community, the Raspberry Pi Foundation has done an incredible job of cultivating the technology and inspiring the next generation of young programmers and hardware hackers.

Kickstarter. There have been documentaries in the works that focus on crowdfunding, but I'm not as interested in the crowdfunding movement as much as I am in Kickstarter the company. While Indiegogo has been around for a longer time, it does not seem to be held together as tightly. Kickstarter seems like not only an interesting company, but one that holds itself, and those who utilize its services, to a high standard.

QUBE. Here's an odd one that likely nobody has heard of. QUBE was the first interactive TV station, started in 1977 in Columbus, Ohio. Residents who subscribed to the cable service received a device that looked something like a calculator, which allowed them to communicate back to the station during shows. Aside from the interactive feature, QUBE was on the forefront of pay-per-view programming and special interest content. QUBE soon went bankrupt and dissolved in the early 1980s. As a bit of an aside, I think I actually tried contacting the webmaster of that site a while back to ask if I could get a copy of the "QUBE DVD" for archiving, but didn't get a reply. Let's hope he/she runs Webalizer or Google Analytics and sees some referrer traffic. Maybe it'll be enough to spark a conversation.

So ends my list. While the majority of these ideas are feasible, I can't help but think a few might end up slipping too far and too fast into obscurity before their time. Other ideas on here might be too early in their lives. Doing something now, or even within the next decade, would only show a small part of the eventual picture.

Do I expect any of these to be made? Not particularly. But you never know.

Everyone gets lucky once in a while.