Bypass Your ISP’s DNS & Run A Private OpenNIC Server (2600 Article)

Now that the article has been printed in 2600 magazine, Volume 34, Issue 3 (2017-10-02), I’m able to republish it on the web. The article below is my submission to 2600 with some slight formatting changes and minor edits.

Bypass Your ISP’s DNS & Run A Private OpenNIC Server
By Mike Dank
Famicoman@gmail.com

Introduction

With recent U.S. legislation regarding Internet privacy, we see another example of control moving away from consumers and towards service providers. Following the news of this change, many have taken a renewed interest in methods that can take back some of the control and privacy that ISPs and other organizations have slowly been chipping away at.

One such service that consumers can liberate (and run) for themselves is DNS. The Domain Name System is responsible for retrieving IP addresses (like 123.45.67.89) from domain names (like 2600.com). For a simplified explanation, when you go to visit a website your machine hasn’t seen before, your machine will query a caching server that is usually owned by your ISP or a company like Google or OpenDNS. This server will return the proper IP address if it has it cached, or will query its way along a chain of DNS servers to the authoritative one controlling that domain. Once found, the IP address for the domain entered will trickle back to you and complete the initial request, allowing your machine to resolve it.

Companies that control these services have a direct look into the sites you are trying to visit. You can bet that more than just a few of them are logging queries and using them for marketing purposes or creating profiles based on who is sitting behind the keyboard at the address of origin. However, there are alternative DNS providers out there who can offer more privacy than others are willing to supply.

One such project, OpenNIC, has been operating a network of DNS servers for many years. Unlike traditional DNS providers, OpenNIC provides an alternate root to the ICANN system (which resolves traditional TLDs, top level domains like .com, .net, etc.) while maintaining backwards compatibility with them. Using OpenNIC, you can still resolve all of the same sites, but also get access to those run by OpenNIC operators, with TLDs such as .geek, .pirate, and .bbs. OpenNIC is made up of hobbyists, engineers, and tinkerers who not only want to explore the ins and outs of DNS, but also offer enhanced privacy and free domain registration for TLDs within their root! You may see OpenNIC as just-another-organization to query, but many operators are privacy-oriented, running their own servers devoid of logging and/or in countries that don’t poke around in your network traffic.

Aside from pointing your home traffic directly at an official OpenNIC DNS server, you can also set one up yourself. Using a modest VPS (512MB of RAM, 4GB of disk) hosted somewhere outside of the US (or outside the 14-eyes jurisdiction, if you prefer), you can subvert organizations who may be nefariously gathering information from your queries. While your server will still ultimately connect upstream to an OpenNIC server, any clients at home or on the go never will — they will only query your new DNS server directly.

Installation & Configuration

Setting up a DNS server is relatively easy to do with just a basic understanding of the shell. I’m running a Debian system, so some of the configuration may be different depending on the distribution you are running. Additionally, the steps below are for configuring a BIND server. There are many different DNS server packages out there to choose from, though BIND is arguably the most widespread on GNU/Linux hosts.

After logging into our server we will first want to switch to the root account to configure BIND.

$ su -

Next, we will install bind9 and DNS utilities using the package manager. This will automatically configure a (non-publicly accessible) DNS server for us to work with, and install various DNS tools that will aid in setting up the server (specifically, dig).

$ apt-get install bind9 dnsutils -y

Now, we will pull down the OpenNIC root hints file for BIND to use. The root hints file simply contains information about OpenNIC’s root DNS servers that control the alternative TLDs OpenNIC has to offer (as well as provide backwards compatibility to ICANN domains). On Debian, we save this information to ‘/etc/bind/db.root’ for BIND to access.

$ dig . NS @75.127.96.89 > /etc/bind/db.root

While the root hints information does not change often, new TLDs can be added to OpenNIC periodically. We will set up a cron job that updates this file once a month (you can specify this to be more frequent if you wish) at 12:00AM on the first of the month. Let’s edit the crontab to add this recurring job.

$ crontab -e

At the bottom of the file, paste the following and save, activating our job.

0 0 1 * * /usr/bin/dig . NS @75.127.96.89 > /etc/bind/db.root

Next, we will want to make some changes to the BIND configuration files. Specifically, we will allow recursive queries (so our BIND installation can query the OpenNIC root servers), enable DNSSEC validation (to verify the integrity of DNS data returned by OpenNIC servers), and whitelist our client’s IP address. Edit ‘/etc/bind/named.conf.options’ and replace the contents with the following options block, making any edits as needed to specify a client’s IP address.

options {
    directory "/var/cache/bind";

    // Allow localhost and a client IP of 1.2.3.4
    allow-query { localhost; 1.2.3.4; };
    recursion yes;

    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };  // Only use if your server has an IPv6 iface!
};

Now, we will also change the logging configuration so that no logs are kept for any queries to our server. This is beneficial in that we know our own queries (as well as queries from anyone else we might later authorize to use our server) will never be logged for any reason. To make this change, edit ‘/etc/bind/named.conf’ and add the following logging block to the bottom of the file.

logging {
    category default { null; };
};

Finally, restart BIND so it can use our new configuration.

$ /etc/init.d/bind9 restart

Now, make sure that our server is using itself for DNS by checking the ‘/etc/resolv.conf’ file. If the following line doesn’t already exist, place it above any other lines starting with “nameserver”.

nameserver 127.0.0.1

Testing resolution of both OpenNIC and ICANN TLDs can be done with a few simple ping commands.

$ ping -c4 2600.com 
$ ping -c4 opennic.glue

Conclusion & Next Steps

Now that the server is in place, you are free to configure your client machine(s), home router, etc. to make use of the new DNS server. Provided you have port 53 open for both UDP and TCP on the server’s firewall, you should be able to add a similar ‘nameserver’ line to the ‘/etc/resolv.conf’ file (as seen in the previous section) on any authorized client machine, using the server’s external IP address instead of the loopback ‘127.0.0.1’ address.
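As a sketch, if the server’s firewall is managed with iptables, rules like the following would open port 53 to a single authorized client. The address 1.2.3.4 matches the example client whitelisted in the BIND options block; substitute your own client addresses rather than allowing the whole Internet, or you’ll be running an open resolver ripe for abuse.

```
$ iptables -A INPUT -p udp -s 1.2.3.4 --dport 53 -j ACCEPT
$ iptables -A INPUT -p tcp -s 1.2.3.4 --dport 53 -j ACCEPT
```

Remember to persist these rules (for example, with the iptables-persistent package on Debian) so they survive a reboot.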
Instructions for DNS configuration on many different operating systems and devices are readily available from a myriad of sources online if you aren’t using a Linux-based client machine. Upon successful configuration, your client should be able to execute the two ping commands in the previous section, verifying a proper setup!
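For example, a Linux client’s ‘/etc/resolv.conf’ would contain a line like the one below, where 203.0.113.10 is only a placeholder; substitute your server’s actual external address.

```
nameserver 203.0.113.10
```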

As always, be sure to take precautions and secure your server if you have not done so already. With a functioning DNS server now configured, this project could be expanded upon (as a follow-up exercise/article) by implementing a tool such as DNSCrypt to authenticate and secure your DNS traffic.


Recollecting The Future: An Omni Retrospective

This article was originally written for and published at Neon Dystopia on June 28th, 2017. It has been posted here for safe keeping.

The first time I read anything out of Omni, I probably had a completely different experience than you did. The original run of Omni, the iconic science and science fiction magazine, ran in print from 1978 to 1995, ending when I was just four years old. The first time I eyed the pages was maybe only five years ago — on my computer screen, with issues batch-downloaded from the Internet Archive back when you could find them there. While I didn’t get that feeling that comes with curling, sticky-with-ink pages between my fingers, or the experience of artificial light reflecting off of the glossy artwork and into my retinas, I was able to ingest the rich content all the same. While there is something sterile and antiseptic about reading magazine scans from a computer, there is more to say about reading Omni, in particular, this way; what the readers have been dreaming about since the publication’s inception has become reality. I have a whole archive of Omni issues that I can take with me everywhere in my pocket. To reference an oft-used quote from Ben Bova, previous editor of Omni, “Omni is not a science magazine. It is a magazine about the future.” The future is now, and maybe some things did end up changing for the better.

The iconic Omni logo.

Omni (commonly stylized as OMNI) got its start in a rather interesting way back in the late 1970s. Publisher Kathy Keeton, who had previously founded Viva (1973), an adult magazine aimed at women, and held a high-ranking position at the parent company for Penthouse (1965), proposed an idea for a new type of scientific magazine to Penthouse founder and her future husband, Bob Guccione. Keeton and Guccione would develop the concept of a magazine focusing on science, science fiction, fantasy, parapsychology, and the paranormal. It was a departure from both the over-the-top, pulp science fiction magazines of the 1940s and stiff academic scientific journals of the time aimed at pipe-smoking professionals in three-piece suits. Omni was aimed more at the layperson in that its content was accessible yet serious — they bought into their own brand and expected you to as well.

Guccione and Keeton, image via dailymail.co.uk

A strange departure from the pornographic roots of both Keeton and Guccione, Omni was a completely new beast, eccentric and untried. While other science publications looked to ground their content in what was concrete, Omni focused on the future with wonder and a sense of possibility. An issue might contain irreverent, gonzo articles about alien abductions, chemical synthesis of food, what personality traits should be given to robots, thoughts on becoming a cyborg, or the computer-centric musings of lunatic/genius Ted Nelson. You could find articles on drugs back-to-back with discussions of high-tech surgical procedures or homebrew aeronautics. No topic was off limits, and with decent rates for writers, the weird had a chance to turn pro. Whether it was intentional or not, Omni adopted a laid-back, transgressive west coast culture that praised the strange and favored the “out there.” For a lot of readers in the late 1970s, this was the only place to get this type of content. People couldn’t pull out a cell phone and hop online to get the vast swathes of information that we can now – there was an ever-present undertone of information isolation, and Omni filled the void.

Omni‘s premier issue, October 1978.

While the magazine was well known for its articles and comprehensive interviews with the scientific elite such as Future Shock (1970) author Alvin Toffler or astrophysicist Carl Sagan, the most flirtatious quality of Omni was its illustrations. Slick and glossy, Omni never failed to draw a shy eye from a newsstand with bright colors and airbrushed art of feminine androids or lush mindscapes in high contrast. Many notable artists contributed their work to Omni, including John Berkey, H.R. Giger, and De Es Schwertberger. For each issue, the gold-chain-wearing Guccione was said to personally pick the featured art; each image was part of what he wanted to convey with the magazine, and it might be more than a coincidence that many of the covers featured women. Omni wasn’t the only publication of the time with intricate cover art, as Heavy Metal (1977) featured similarly detailed depictions, and even shared some of the same artists. Where Omni really set itself apart from others was through the use of artwork throughout the periodical as a whole. With sprawling, multi-page illustrations, the art didn’t take a backseat to the articles. Just flipping through, someone might think that they had picked up an art magazine; Omni was never one to skimp on the visuals.

H.R. Giger’s Dune concept art would grace the cover of Omni‘s November 1978 issue.

Above all of the articles and artwork, Omni may have been most celebrated for its short fiction. At its inception, nobody really knew how to react to Omni’s foray into the science fiction ecosphere. The SF community was tight-knit and amiable, but it wasn’t exactly used to sleek publications from groups that had no background in the subject showing up suddenly in bookstores and comic shops. Keeton and Guccione did their homework and hired notable editors, Ellen Datlow and Ben Bova (a six-time Hugo Award winner during his tenure as editor of Analog magazine (1930)), to seek out content for publishing. Like Bova, Datlow was a fan of the genre herself and wasn’t afraid to push the envelope when it came to buying far-out fiction. The magazine came to showcase science fiction for science fiction fans, and didn’t subdue or water down the content for the sake of a broader audience – the bottom line was never sacrificed. With contributors like Robert Heinlein, Orson Scott Card, Isaac Asimov, William Gibson, Bruce Sterling, George R.R. Martin, and William S. Burroughs, there were not only excerpts from larger works but complete short stories introduced to the world for the first time. William Gibson notably published “Burning Chrome” (1982), “New Rose Hotel” (1984), and “Johnny Mnemonic” (1981) through Omni, creating the foundation for the Sprawl that would later host some of his most celebrated novels: Neuromancer (1984), Count Zero (1986), and Mona Lisa Overdrive (1988). Omni helped launch the careers of many science fiction authors, and fostered exposure for countless others over its nearly two-decade run.

The first page of William Gibson’s “Burning Chrome,” featured in Omni‘s July 1982 issue.

Omni thrived for many years as it continued to dazzle readers with exciting and thought-provoking composition. Eventually, Omni would get its own short-lived television show, Omni: The New Frontier (1981), a webzine on Compuserve, and six international editions with varied amounts of both reprinted and original articles. Later in the publication’s life, the articles took a more paranormal slant that left many fans and contributors wondering about the magazine’s direction. Keeton and Guccione had always held a lot of interest in the paranormal and found a kindred spirit in their editor-in-chief, Keith Ferrell, who took up the position in Omni’s twilight years. While Ferrell wanted to shift Omni’s focus towards the vetting of the unexplained, the magazine couldn’t sustain itself for much longer. Omni published its last print issue in the Winter of 1995, citing rising production costs. At the time, Omni had a circulation of 700,000 subscribers, many of whom were left high and dry if they hadn’t already abandoned the publication with the recent shift in focus. While it is unknown whether or not production costs were to blame for Omni’s demise, many surmise Guccione’s strategy of funding Omni via Penthouse’s profits was starting to fall apart. While free Internet pornography flourished in the 1990’s, it put the older print industry in jeopardy. The ship was sinking, and something would need to be thrown overboard before the situation would get any better for the Penthouse empire.

“Some of Us May Never Die,” article by Kathleen Stein, October 1978.

Omni did not disappear completely, and successfully transitioned into an online-only magazine dubbed Omni Internet in 1996. At the time, there were few examples of magazines that could make the jump to digital. Omni embraced the new format which allowed them to play with how content was structured and draw in fans with interactive chat sessions. While a conventional magazine could only be published once a month, Omni could now report on scientific news as it happened, setting a standard in web-based journalism that we still see today. This was a “blog” before Jorn Barger coined the term the following year, and it holds a spot as an early example of the format. In 1997, Kathy Keeton died due to complications from surgery and Omni closed down completely just six months later. While Omni was the brainchild of both Keeton and Guccione, Keeton had always been the main driving force that held all of the moving parts together. The publication that always promised to bring the future would now become static, slowly fading into the past with each passing year. While no new content was published, the site remained online until 2003, when Guccione’s publishing company filed for bankruptcy.

AOL welcomes you to Omni Online, one of Omni‘s first forays into the web. Image via http://14forums.blogspot.com

While Omni may have ceased production, its strong legacy cannot be questioned. While fans wax rhapsodic about the old days, there are plenty of remnants left over that have diffused the iconic Omni spirit to new generations. Wired (1993) owes just as much to Omni as it does its precursor, Language Technology (1986), as well as Mondo 2000 (1989), bOING bOING (1988), and Whole Earth Review (1985). While Wired didn’t rely on science fiction, it still yearned for a techno-utopian future and even hired on some Omni expatriate short fiction writers to become reporters for the new digital revolution. Later, in 2005, a little-known omnimagazine.com was launched, claiming to be cared for by former staff and contributors of Omni, including Ellen Datlow, the fiction editor who worked at Omni for nearly its entire run. In 2008, the website io9 was created by Gawker Media (later a property of Gizmodo) specifically to cover science and science fiction, just as Omni had done decades before. io9’s slogan, “We come from the future,” echoes Ben Bova’s Omni quote, and helps cultivate an image of the site as a spiritual successor.

Wired issue #1 from March, 1993. The cover features an article by science fiction writer and Omni contributor Bruce Sterling.

In 2010, Bob Guccione passed away at the age of 79 following a battle with lung cancer. While Guccione’s death didn’t immediately influence the future of Omni, it set off an interesting chain of events that would ultimately lead to a rebirth of the publication. In 2012, businessman Jeremy Frommer bought a series of storage lockers in Arizona, one of which happened to contain a large amount of Guccione’s estate. Frommer was enamored with his discovery and immediately fell down the strange and multifaceted rabbit hole of Guccione’s mind. With the help of his childhood friend, producer Rick Schwartz, Frommer started building The Guccione Archive in a nondescript New Jersey building. Out of the entirety of Guccione’s life work, Frommer became fixated on Omni and explored the collection of production materials, pictures, 35-mm slides, and original notes generated from the life of the magazine. It wasn’t long before he brought in others to sort the collection and track down original artwork used for Omni that he could purchase and add to the archive. For Frommer, it became a passion to create the most complete collection of Omni-related materials, and he had to get his hands on every bit he could find.

Artwork like this Di Maccio piece, showcasing “Psychic Warriors” by Ronald M. McRae, was tracked down to add to the archive.

Organically, Frommer came to the conclusion that he needed to take the next logical step: he needed to reboot the magazine. When Claire Evans, pop singer and editor of Vice Media’s Motherboard blog, met with Frommer to cover the Omni collection, she was asked if she would be interested in working on a new Omni incarnation. In August 2013, Evans was named editor-in-chief of Omni Reboot and got to work. The new publication was off to a strong start as Evans was able to get submissions and interviews from former Omni alumni and science fiction icons like Ben Bova, Rudy Rucker, and Bruce Sterling. Free scans of Omni back issues were made available at the Internet Archive, and the blood once again flowed through Omni’s withered veins. Nostalgia was in full force as people rediscovered their favorite article or got caught up in the whole journey of the reboot. Quickly though, criticism came in about the quality of the content and what would become of the Omni legacy under the new owners. Not everyone thought the magazine should be rebooted.

An excerpt from “Hi Everyone. Welcome to the New Omni” by Claire Evans, which served as the introductory article to Omni Reboot (August 2013).

While Omni Reboot still conformed to the science and science fiction roots of the original, it did so with a higher dose of cynicism than the original ever had. When the old fantasies of science and technology started to become reality, it was only natural for people to consider how the world around them could grow to bite its master and send civilization into a negative spiral. In many cases, it already had. In contrast, the zeitgeist 40 years ago slanted more towards the hopeful idyllic; people used to have conversations about how an advancement would influence mankind and drive the world ahead. That’s not to say everything back then was so hunky dory – we always see the past through rose-colored glasses, smoothing over the corruptions and jagged edges in our memories. While the world aged and evolved, so did opinions towards its technocentric trajectory. Etherealism was no longer in vogue.

As the reboot grew, it started to come under fire from authors when it was discovered that submitted work would be owned by Jerrick Media, Omni Reboot’s new parent, for a year after publication. Further fueling controversy, the back issues of Omni on the Internet Archive were removed in 2015, much to the disappointment of readers. In 2016, Jerrick Media released Vocal, a publishing platform for freelance writers to make money based on article views. Omni Reboot (now just Omni) was repurposed as one of Vocal’s verticals, serving as an interface for science and science fiction related submissions from the Vocal content network. Browsing Omni now, there is an overwhelming, uneasy feeling of quantity over quality. A name that once stood proudly and represented a home for the weird and futuristic had become just a limb on a larger, alien body.

Omni’s current homepage, via omni.media

A few weeks ago in May of 2017, we got another chance to relive the rich history of Omni as back issues became available once again. Unlike the older Internet Archive scans, these new transfers are in high resolution, with each and every issue accounted for. Also unlike the older transfers, each issue costs $2.99 for a digital copy, though they are free if you happen to be a Kindle Unlimited subscriber. While it’s bittersweet that these issues are now available for pay, profits from their sales are going to a good cause. Omni announced a partnership with the Museum of Science Fiction, which receives a portion of the revenue from each purchase.

Cover of Omni‘s February 1987 issue. Just one of many back issues that are now available for purchase.

While Omni has changed many times over the years, it is still remembered as that wondrous and weird, crazy and cool publication delivered to mailboxes month after month. Omni worked best because it didn’t try to fit into an existing space, it pioneered its own; it became a destination that amalgamated the new, the scary, and the unknown. In the true Omni spirit, we have no idea what is coming for the publication as the clock ticks ahead. Will it thrive and continue on, or become another ghost in the machine, slowly obsolescing?

The future is ahead of us, and for now, we can only look to it with hope and wonder.

We can only look to it just like Omni would want us to.

 

‘More Than Just A Game’: Brainscan Review

This article was originally written for and published at Neon Dystopia on May 22nd, 2017. It has been posted here for safe keeping.

In the mid-1990s, we saw cinema saturated with cyberpunk movies. Some became staples of the genre, while others simply faded into obscurity. Brainscan (1994) combines elements of cyberpunk and horror to create a unique story about a video game gone wrong. While not a particularly successful film, Brainscan turns out to be an interesting extension to the classic “killer computer” trope. At a time when owning a computer had become normal for your average household, it was easy to ponder how things could go wrong, especially in the style of a Twilight Zone fever dream.

Upon watching, Brainscan instantly makes you feel uneasy with its instrumental score, and it is something you will notice over the course of the entire film. The electronic music quickly fills any scene with dread, reminiscent of 1978’s “Halloween Theme Main Title” (from the film of the same name) by John Carpenter. Coupled with a dark color palette, the mood is set and unchanging. While the film definitely exhibits a horror-themed, fanciful atmosphere, make no mistake about its ties to the cyberpunk aesthetic. Technology, and our inherent fear in embracing it (as a species), sits center stage for the full run of the story. Likely influenced by publications of the time such as Mondo 2000, we get a taste of the cyberdelic lunatic fringe.

When our protagonist Michael (Edward Furlong) is told about a horror-filled video game that interfaces with a player’s subconscious, he’s eager to call up the phone number printed in a Fangoria advertisement to place an order. His appetite for blood and gore is insatiable, but the game isn’t what he bargained for. Furlong gives an excellent performance, tapping into some of the same themes exhibited by his Terminator 2 character, John Connor, a few years earlier. Michael himself easily fills the role of the smart high school student. He has a voice-controlled computer, Igor, which he primarily uses to place and take phone calls. Red-light-drenched computer hardware spans a large section of his attic bedroom, including an interface to his video camera which he uses to spy on the neighbor-cum-love-interest, Kimberly (Amy Hargreaves). You may call him an introvert, or a misfit — he has few friends and even less family but seems close to his best friend, Kyle (Jamie Marsh).

The bedroom of my teenage dreams.

Upon calling Brainscan, Michael is told about the innovative, fully-customized game and gets electrically shocked before the mysterious voice on the other end of the line tells him that his game has been decided and the first disc will be mailed to him. At this point, I liken Michael to transgressive characters of other films, such as Thomas Anderson / Neo of The Matrix (1999) or David Lightman of WarGames (1984), who transcend connotations of the 1950’s idyllic and its three-piece psychology. In fact, I’d stress that there are a lot of parallels between this film and WarGames — the technophile wiz-kid finds a mysterious phone number which ultimately wreaks havoc on his life. When the first Brainscan disc shows up in the mail, things take a dark turn when Michael boots it up and is instantly transported into the game, seemingly immersed by the “subconscious interface” which creates something akin to an out-of-body experience. In a first-person view, Michael is instructed by a methodical, disembodied voice to commit a murder, which he dutifully complies with. When Michael sees the news the following day, he quickly realizes that the murder was real. The virtual reality has leaked over into the natural world, and the line between them starts to become fuzzy.

The first disc arrives.

At this point, Michael gets a visit from the Trickster (T. Ryder Smith), who materializes out of thin air in his bedroom. Trickster, resembling an undead hard rock front-man, seems to know everything about Brainscan and urges Michael to keep playing until he completes the entire game. If Michael has been a middling character up until now, toeing the line between rebellion and compliance, we see him reevaluate his morality as the Trickster begins to inject more and more chaos into his life. Michael isn’t only pitted against Trickster, but also against the game itself, which Trickster seems to be connected to. He’s lashing out against the technology that has betrayed him but is helpless in combating it.

The Trickster will threaten you with a CD.

*Spoilers Ahead*

Michael is worried about getting caught and reluctantly agrees to keep playing Brainscan in order to tie up any loose ends that may implicate him in his crime. Feeling like he has no other options, Michael continues to kill before standing up to Trickster and refusing to commit a final act. Disappointed by Michael’s act of defiance, Trickster allows the police to shoot and kill Michael. Michael has finally broken out from the game and immediately wakes up in his chair just as he did after playing for the first time. The whole experience was just part of the game, and none of it actually took place. Realizing this, Michael completely destroys all of his computer equipment, symbolically freeing himself of his technological imperative and escapist tendencies.

Complete destruction.

While Brainscan doesn’t incorporate many cyberpunk visuals, it certainly captures the essence of cyberpunk itself. Much like Black Mirror’s “Playtest” episode, we see a main character who seems to detach from reality and chooses to escape into technology to chase a high that he cannot derive from the physical world. While Michael may not fit seamlessly into a generation of young adults who have complete access to information and media, he exhibits traits that make him susceptible to escapist behavior and the dangers that go with it. Exploring his infatuation with high-tech entertainment, we become aware of what can happen when fantasies overlap with reality. Can we expect similar real-world stories to come forward as virtual reality worlds become more immersive and realistic? Maybe we don’t even have to go that far. Is it a stretch to consider that players of conventional video games have difficulty dissociating from them after putting down the controller? Little stops these experiences from bleeding over into our day-to-day lives, especially for the self-actualizing who may be hesitant or unable to make the distinction.

Are you ready to play?

While the premise of Brainscan certainly fits within the cyberpunk genre, the film is not without its shortcomings. Furlong and Smith give notable performances, though the supporting characters are more one-dimensional. More than two decades on, the physical technology we see is laughably dated at times, despite the topical concerns brought up by the story. Brainscan would never win any awards, but it’s a fun guilty pleasure that you want to keep watching. Just make sure you don’t order any video games out of a magazine anytime soon.

Brainscan – 6/10

 

Snoop Unto Them As They Snoop Unto Us

This article was originally written for and published at Exolymph on May 15th, 2017. It has been posted here for safe keeping.

Artwork by Matt Brown.

Abruptly returning to a previous topic, here’s a guest dispatch from Famicoman (AKA Mike Dank) on surveillance and privacy. Back to the new focus soon.


The letter sat innocently in a pile of mail on the kitchen table. A boring envelope, nondescript at a glance, that would become something of a Schrödinger’s cat before the inevitable unsealing. The front of it bore the name of the sender, in bright, black letters — “U.S. Department of Justice — Federal Bureau of Investigation.” This probably isn’t something that most people would ever want to find in their mailbox.

For me, the FBI still conjures up imagery straight out of movies, like the bumbling group in 1995’s Hackers, wrongfully pursuing Dade Murphy and his ragtag team of techno-misfits instead of the more sinister Plague. While this reference is dated, I still feel like there is a certain stigma placed upon the FBI, especially by the technophiles who understand there is more to computing than web browsers and document editing. As laws surrounding computers become more sophisticated, we can see them turn draconian. Pioneers, visionaries, and otherwise independent thinkers can be reduced to little more than a prisoner number.

Weeks earlier, I had submitted a Privacy Act inquiry through the FBI’s Freedom of Information Act service. For years, the FBI and other three-letter-agencies have allowed people to openly request information on a myriad of subjects. I was never particularly curious about the outcome of a specific court case or what information The New York Times has requested for articles; my interests were a bit more selfish.

Using the FBI’s eFOIA portal through their website, I filled out a few fields and requested my own FBI file. Creating a FOIA request is greatly simplified these days, and you can even use free services, such as getmyfbifile.com, to generate forms that can be sent to different agencies. I only opted to pursue the FBI at this time, but could always query other agencies in the future.

The whole online eFOIA process was painless, taking maybe two minutes to complete, but I had hesitations as my cursor hovered over the final “Submit” button. Whether or not I actually went through with this, I knew that the state of the information the FBI had on me already wouldn’t falter. They either have something, or they don’t, and I think I’m ready to find out. With this rationalization, I decided to submit — in more ways than one.

The following days went by slowly and my mind seemed to race. I had read anecdotes from people who had requested their FBI file, and knew the results could leave me with more questions than answers. I read one account of someone receiving a document with many redactions, large swathes of blacked-out text, giving a minute-by-minute report of his activities with a collegiate political group. A few more accounts mentioned documents of fully-redacted text, pages upon pages of black lines and nothing else.

What was in store for me? It truly astonishes me that a requester would get back anything at all, even a simple acknowledgement that some record exists. In today’s society, where almost everyone has a concern about their privacy, or at least an acknowledgement that they are likely being monitored in some way, the fact that I could send a basic request for information about myself seems like a nonsensical loophole in our current cyberpolitical climate. You would never see this bureaucratic process highlighted in the latest technothriller.

About two weeks after my initial request, there I was, staring at the letter sticking out from the mail stack on the kitchen table. All at once, it filled me with both gloom and solace. This was it, I was going to see what it spelled out, for better or worse. Until I opened it, the contents would remain both good and bad news. After slicing the envelope, I unfolded the two crisp pieces of paper inside, complete with FBI letterhead and a signature from the Record/Information Dissemination Section Chief. As I ingested the first paragraph, I found the line that I hoped I would, “We were unable to identify main records responsive to the FOIA.”

Relief washed over me, and any images I had of suited men arriving in black vans to take me away subsided (back down to the normal levels of paranoia, at least). It was the best information I could have received, but not at all what I had expected. For over ten years, I have been involved in several offbeat Internet subcultures and groups, and more than a few seem reason enough to land me on someone’s radar. I was involved with a popular Internet-based hacking video show, held a role in a physical hacking group/meeting, hosted a Tor relay, experimented openly with alternative, secure mesh networks, sysop’d a BitTorrent tracker, and dabbled in a few other nefarious things here and there.

I always tried to stay on the legal side of things, but that doesn’t mean that I don’t dabble with technologies that could be used for less than savory purposes. In some cases, just figuring out how something can be done was more rewarding than the thought of using it to commit an act or an exploit. Normal people (like friends and coworkers) might call me “suspicious” or tell me I was “likely on a list,” but I didn’t seem to be from what I could gather from the response in front of me.

When I turned back to read the second paragraph, I eyed an interesting passage, “By standard FBI practice and pursuant to FOIA exemption… and Privacy Act exemption… this response neither confirms or denies the existence of your subject’s name on any watch lists.” So maybe I was right to be worried. Maybe I am being watched. I would have no way of knowing. This “neither confirms or denies” response is called a Glomar, which means my information has the potential to be withheld as a matter of national security, or over privacy concerns.

Maybe they do have information on me after all. Even if I received a flat confirmation that there is nothing on me, would I believe it? What is to prevent a government organization from lying to me for “my own good”? How can I be expected to show any semblance of trust at face value? Now that all is said and done, I don’t know much more than I did when I started, and have little to show for the whole exchange besides an official request number and a few pieces of paper with boilerplate, cover-your-ass language.

If we look back at someone like Kevin Mitnick, the cunning social engineer who received a fateful knock on his hotel door right before being arrested in early 1995, we see a prime example of law enforcement pursuing someone not only for the actions they took, but for the skills and knowledge they possess. Echoing Operation Sundevil, only five years prior, government agencies wanted to make examples out of their targets, employing scare tactics to keep others in line.

I can’t help but think of “The Hacker Manifesto,” written by The Mentor (an alias used by Loyd Blankenship) in 1986. “We explore… and you call us criminals. We seek knowledge… and you call us criminals,” Blankenship writes shortly after being arrested himself. Even if I received a page of blacked-out text in the mail, would I be scared and change my habits? What if I awoke to a hammering on my door in the middle of the night? I still don’t know what to make of my response, but maybe I’ll submit another request again next year.

Knock, knock.

 

Running & Using A Finger Daemon

The finger application was written in the 1970s to allow users on a network to retrieve information about other users. Back before Twitter and other micro-blogging platforms, someone could use the finger command to retrieve public contact information, project notes, GPG keys, status reporting, etc. from a user on a local or remote machine.

Finger has mostly faded into obscurity due to many organizations viewing the availability of public contact information as a potential security hole. With great ease, attackers could learn a target’s full name, phone number, department, title, etc. Still, many embraced the reach that finger could provide. Notably, John Carmack of id Software maintained detailed notes outlining his work in game development.

These days, finger is usually found only on legacy systems or run for novelty purposes, as much of its functionality has been replaced by the more-usable HTTP.
 
Installing finger & fingerd

This guide assumes we are running a Debian-based operating system with a non-root, sudo user. To allow finger requests from other machines, make sure the server has port 79 open and available.

The first thing we will need to do is install the finger client, finger daemon, and inet daemon:
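On a Debian-based system, something like the following should pull in all three (the package names finger, fingerd, and inetutils-inetd are assumptions that can vary by distribution; inetutils-inetd matches the init script used to restart the daemon later in this guide):

```shell
# Install the finger client, the finger daemon, and the inet daemon
sudo apt-get update
sudo apt-get install finger fingerd inetutils-inetd
```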

The inet daemon is necessary to provide network access to the finger daemon. inetd will listen for requests from clients on port 79 (designated for finger) and spawn a process to run the finger daemon as needed. The finger daemon itself cannot listen for these connections and must instead rely on inetd to act as the translator between the sockets and standard input/output.

To ensure that we have IPv6 compatibility (as well as maintain IPv4 compatibility), we will edit the inetd.conf configuration file:

sudo nano /etc/inetd.conf

Find the section that is labeled INFO, and comment out the line under it defining the finger service:

#finger    stream    tcp    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd

Now below it we will add two lines that define the service for IPv4 and IPv6 explicitly:

finger    stream    tcp4    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd
finger    stream    tcp6    nowait        nobody    /usr/sbin/tcpd    /usr/sbin/in.fingerd

Then we will restart inetd to run the changes:

sudo /etc/init.d/inetutils-inetd restart

Now we can use the finger command against our machine:

finger @localhost
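The protocol behind that command is simple enough to sketch by hand: per RFC 1288, a finger client just opens TCP port 79, sends the username followed by CRLF, and prints whatever comes back until the server closes the connection. A minimal, hypothetical Python client:

```python
import socket

def finger_query(user, host, port=79, timeout=5):
    """Send a finger query (RFC 1288): connect over TCP, send the
    username followed by CRLF, then read until the server closes."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server signals end-of-response by closing
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# finger_query("famicoman", "localhost") mirrors `finger famicoman@localhost`
```

Because inetd hands the daemon its socket as standard input/output, the daemon never sees the network at all; this client talks to inetd, which relays the query line to in.fingerd.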

 
User Configuration

Each user will have some information displayed, such as real name, login, home directory, shell, home phone, office phone, and office room. Many of these fields are probably not set for the current user account, but most can easily be updated with new information.

The chfn utility is built specifically to change information that is retrieved by the finger commands. We can run it interactively by invoking it:

chfn

If we run through this once, we may not be able to edit our full name or wipe out the contents of certain fields. Thankfully, chfn takes several flags to modify these fields individually (and with empty strings accepted!):

$ chfn -f "full name"
$ chfn -o "office room number"
$ chfn -p "office phone number"
$ chfn -h "home phone number"

Now that our information is set, we can start creating files that will be served by finger.

The first file will be the .plan file. This is typically used to store updates on projects, but can be used for pretty much anything such as schedules, favorite quotes, or additional contact information.

nano ~/.plan

Next, we can create a .project file. This file is traditionally used to describe a current project, but can house any content provided it displays on a single line.

nano ~/.project

Next, if we have a GPG key, it can also be included via the .gnupg file.

gpg --armor --output ~/.gnupg --export "my name"

Depending on our machine’s configuration, we can also set up mail forwarding which will be shown when our user account is queried via a .forward file.

echo my@other.email.com > ~/.forward

Now that all the files are created, we need to change the permissions on them to allow them to properly be read by finger. This command will allow others to read and execute our new files:

chmod o+rx ~/.plan ~/.project ~/.gnupg ~/.forward

Afterwards, anyone with finger should be able to query the account provided the host is reachable and the port is exposed:

$ finger famicoman@peer0
Login: famicoman                        Name: mike dank
Directory: /home/famicoman              Shell: /bin/bash
Office: #phillymesh, famicoman@gmail    Home Phone: @famicoman
On since Wed Mar  1 18:28 (UTC) on pts/0 from ijk.xyz
   5 seconds idle
No mail.
PGP key:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBFhQteQBEADOQdY/JyNsRBcqNlHJ3L7XeMWWqG+vlGYjF5sOsKkWDrgRrAhE
gJthGdPSYLIn5/kRst5PkDGjZHFq1k4VAaUlMslCbLgj0xMzUWmhGRKtFjrnQzFi
UVaW/GcW682b5wKkEbSpwRrLHJ19cwYiQHRA6dahiCkWkdh7MluHKTwU1kaUrs3E
3satrSAlOJHH2bg5mDuQPTov/q6hot2pfq8jseQuwVflssqOt4tx3o0tcwbKJwQs
8qU3cVkfg+gzzogM5iMmKAjhFt9Ta7E2kp+iR8gOkH7CK2tu+WIpdpSYuIOe2nEa
AdYSGJRmIZxqwGPwZu893OiYTiLF2dGEWj4cSmZFJ9BYxw9b7dMePDs3l9T1DyJN
FF6JyqtiTpSOfw4+9oNL/+8kmRFMtQBGDH5Dqn4Bg+EYUUGWh3BQb1UGRGNRbls1
SYw4unPsMAaGd0tqafyEOE7kHcQwGMWyI4eFi0bkBRTCvjyqTqTImdr4+xRKn4rW
vS+O+gSSWnrKW67TF0vFuPKV4w4sdPhldcYZjiPNI1m6nmnih4756LW+W5fCKMyN
RN4yPmaF6awEM3fPf3BWhxDmsBvYLqLlE9b6DQ6DmIsC6x5S/jBFr/W0cgZKuMBR
wIHGIpCTgkVsvjTxTeDnZQzEuaZFOpHrYBYG0+JccE5ZZ2/aEupB1tLWzwARAQAB
tCtNaWtlIERhbmsgKEZhbWljb21hbikgPGZhbWljb21hbkBnbWFpbC5jb20+iQI+
BBMBAgAoBQJYULXkAhsDBQkJZgGABgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAK
CRAWGa5NfPKo9xBPEADGK7ol4nU1cDpYc6XPYb4w4s8G/9Ht3UGZvy4ClB7TvntR
HuixWISQyElK4pDntrpXfLmDgqNQqUtjev6w0uEEc7MQW0WBYPlJ9rmVuDJtPjOP
hr9wUnwrd6KypHGw1Y6qukf/w8gCDZyZ8BVOm3r6XT6VlpuVVBP16ax6Mh7sTf4i
/D8D1JH89zApfplqiiA+OAvunbm19jrczby5ILhevbwfpP+ob7FqevCQi/ppGncg
s2LZceIYPh09OzakBiZkIwzMsuWHYxrenUmg8kuVaVyaGXdCc+KZr7c4oGioLxOL
8p8PunjL+i/uGtvZ3tQEvHjB9r1Ghu1t5MUQC4xTvgLnvLOQA0gmknwRQ6nSWrQf
yDJPnaJcBKlfnCR7eKtotsTobUCWDMC9sEvLjLYMgxtEbu46/J57oDQ1LniejI9X
rTUN8cnRLpOQUy0eTBTtUWYqdqO9fAjvMIsnWR4IcMlTJazghygQ+zENvGlEfey2
NL4nC2Yus3EAeCEJC52vccSsf3b7HT/4GemNOVjh72kZ1FM6HL+UNaU7JLppNQk6
mFKC3/wIKxKBjm/vR21Efl+f2iwUAiLjrugaY6g4BXPX3p6iCHftEW7gA6s4UC0A
1HYScq9Duxv+AQpe/mfVA2SBrD3OLTknW2Z5EZSTHqyzRQtHviXTRkdtQscF7rkC
DQRYULXkARAAugwI5dpLpIYI4KZcHsEwyYqUL35ByGCuqklYGOSMkkX0/WFH2ugv
Vs8fqgrn/koXjKBPpxdElfTAGbD2MSTvAUzwMgvMaLZUxY2Qh1RipkNXvSAO+W6W
nJKyBvasboK8l61yta/RrPvUr+equxtBawD3ja9DPzfTYuVCewR6ztcdvqAho5D1
Ds5HxO6aLPsMW8Fj8kf+Ae5eypuNBAN0ivpkDfQyaungh2EjrVpJzWJ8ZqYah/cK
1R55rPhq/JtjcO8g2nnsi+L5EuMma9+50lVPHlHjO9Y0xaQMq2/rLJcDu8UbymX7
LSoxzixiLYtig3GAB+1XkIqMwaEkFF1zTlAZ7drAfhH1AJ0L7SQBur5J1EM3iRYO
XmelyxJPwKgL4K6U4NnnrVpf71GyBktXyuEiOWDFjINs2OzwL4z/l1oWuJuFW/vO
C/yM5Ed7yzLXm0exAnW8Y0u1hwbhRy31zIYbLeB+PuiAqrnvSr1xhAWalBM3dqF+
VIsuKlcoFOmX74/OCEFQ/cqrrkhQ/PrkZqQZCyjyUhkuIqasMZGhLakAIO226E1r
IsUX0jMliJ/A89ZyDmoZyyvRjyydCkOS5HXahFLCufPC8FgcAL6VcpVF2sHBmWcj
vOlfr7OXHzKvKVNy8qymzyfRCPHoYQvxW9UAhru/iIqKOTIDo3OwJ9EAEQEAAYkC
JQQYAQIADwUCWFC15AIbDAUJCWYBgAAKCRAWGa5NfPKo91+XD/0a+gcsXpKo2gy+
oQC/osyJhesx2CGzZqmSB3fpyq9D+jnsCzt/Bh/EtR+2sUWxokIVc5dLzyr0icNl
c0iJBO6It662Q9FnNemiGgv2PLYbjjDC/CP82QWoWrSzPpDKu0DgrF6+MPQgRleT
Z8g0+nLHYMgCTAPfwaaYHLvLLaOt0Ju7L2kt9TSKn2aU2NJReVC5mm3Jxg6Bz3Ae
iQ6iigKI7R/huVDVzBuJQNiToRQswbb/PidYdwyiI3GJC6m+q8HEWAl+cGahqUVg
IIDlmj6SeIsP0r+SGygX0PRWyW/NmQPWamG5e4TDL0pZpu1CzGqfN/A2KLXVO6ss
X2l2HWadBQgd0goFNX7PK210I5B8SfiJM5+cLgChcT9g3mKil8XUkKTSE+c0Q9e5
1AkUrUS69KZeqqtJOB350YP9ZGY7TbOXjXp0Y0/fo3/X+0sZzmP8jIyozLAecM0e
fPrIeA2mego6nWaSRp5FH8KlvHpUvFcKNV+SVSbrbzmSVSpgKmQRp2kd+tyDOPrt
2tXIoWMYdVtlluSYqR2lPv7WFyxCPX8DmxK6fYeVoqBf+7g4GZPdcSviDBlJEJfc
HAZZKsSZzgMugwqLidQ+W53eIDyIOVw0tvcHDJ1S5mpWqvROf7gfNXXFvLjECACN
wnqdjeFGPLlP5Q6tVPvp8j7prVlvZQ==
=xm3N
-----END PGP PUBLIC KEY BLOCK-----
Project:
Philly Mesh - http://mesh.philly2600.net - #phillymesh:tomesh.net
Plan:
%=============================================%
==2017-01-26===================================
%=============================================%
+ Installed fingerd

* Configuring SILC network
* Documentation for fingerd and silcd

By default, finger can display login and contact information for all accounts on a machine. Luckily, accounts can be individually configured so that finger will ignore their existence if there is a .nofinger file in their home directories:

sudo touch /home/someotheraccount/.nofinger && chmod o+rx /home/someotheraccount/.nofinger

 
Conclusion

You should now have finger and fingerd installed and configured on your server for each user to make use of. Keep in mind that the information you enter here will be public (provided the server is), and people around the world may be able to glean your contact information or even your last login time via the finger command.
 
 

The Best of 2016

See the 2015 post here!

Here is my second installment of the best things I’ve found, learned, read, etc. These things are listed in no particular order, and may not necessarily be new.

This annual “Best Of” series is inspired by @fogus and his blog, Send More Paramedics.

Favorite Blog Posts Read

Articles I’ve Written for Other Publications

I’ve continued to write for a few different outlets, and still find it a lot of fun. Here is the total list for 2016.

Favorite Technical Books Read

I haven’t read as much this year as I have previously.

  • The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary – Really cool book about early community software development practices (at least that’s what I got out of it). Also covers some interesting history on the start of time-sharing systems and move to open-source platforms.
  • Computer Lib – An absolute classic, the original how-to book for a new computer user, written by Ted Nelson. I managed to track down a copy for a *reasonable* price and read the Computer Lib portion. Still need to get through Dream Machines.

Favorite Non-Technical Books Read

Number of Books Read

5.5

Favorite Music Discovered

Favorite Television Shows

Black Mirror (2011), Game of Thrones (2011), Westworld (2016)

Programming Languages Used for Work/Personal

Java, JavaScript, Python, Perl, Objective-C.

Programming Languages I Want To Use Next Year

  • Common Lisp – A “generalized” Lisp dialect.
  • Clojure – A Lisp dialect that runs on the Java Virtual Machine.
  • Go – Really interested to see how this scales with concurrent network programming.
  • Crystal – Speedy like Go, with pretty syntax.

Still Need to Read

Dream Machines, Literary Machines, Design Patterns, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10

Life Events of 2016

  • Got married.
  • Became a homeowner.

Life Changing Technologies Discovered

  • Amazon Echo – Not revolutionary, but it has more potential than Siri or Google Now to change the way people interact with computers. The fact that I can keep this appliance around and work with it hands-free gives me a taste of how we may interact with the majority of our devices within the next decade.
  • IPFS – A distributed peer-to-peer hypermedia protocol. May one day replace torrents, but for now it is fun to play with.
  • Matrix – A distributed communication platform, works really well as an IRC bridge or replacement. Really interested to see where it will go. Anyone can set up a federated homeserver and join the network.

Favorite Subreddits

/r/cyberpunk, /r/sysadmin, /r/darknetplan

Completed in 2016

Plans for 2017

  • Write for stuff I’ve written for already (NODE, Lunchmeat, Exolymph, 2600)
  • Write for new stuff (Neon Dystopia, Active Wirehead, ???, [your project here])
  • Set up a public OpenNIC tier 2 server.
  • Participate in more public server projects (ntp pool, dn42, etc.)
  • Continue work for Philly Mesh.
  • Do some FPGA projects to get more in-depth with hardware.
  • Organization, organization, organization!
  • Documentation.
  • Reboot Raunchy Taco IRC.

See you in 2017!

 

Building DIY Community Mesh Networks (2600 Article)

Now that the article has been printed in 2600 magazine, Volume 33, Issue 3 (2016-10-10), I’m able to republish it on the web. The article below is my submission to 2600 with some slight formatting changes for hyperlinks.

Building DIY Community Mesh Networks
By Mike Dank
Famicoman@gmail.com

Today, we are faced with issues regarding our access to the Internet, as well as our freedoms on it. As governmental bodies fight to gain more control and influence over the flow of our information, some choose to look for alternatives to the traditional Internet and build their own networks as they see fit. These community networks can pop up in dense urban areas, remote locations with limited Internet access, and everywhere in between.

Whether you are politically fueled by issues of net neutrality, privacy, and censorship, fed up with an oligarchy of Internet service providers, or just like tinkering with hardware, a wireless mesh network (or “meshnet”) can be an invaluable project to work on. Numerous groups and organizations have popped up all over the world, creating robust mesh networks and refining the technologies that make them possible. While the overall task of building a wireless mesh network for your community may seem daunting, it is easy to get started and scale up as needed.

What Are Mesh Networks?

Think about your existing home network. Most people have a centralized router with several devices hooked up to it. Each device communicates directly with the central router and relies on it to relay traffic to and from other devices. This is called a hub/spoke topology, and you’ll notice that it has a single point of failure. With a mesh topology, many different routers (referred to as nodes) relay traffic to one another on the path to the target machine. Nodes in this network can be set up ad-hoc; if one node goes down, traffic can easily be rerouted to another node. If new nodes come online, they can be seamlessly integrated into the network. In the wireless space, distant users can be connected together with the help of directional antennas and share network access. As more nodes join a network, service only improves as various gaps are filled in and connections are made more redundant. Ultimately, a network is created that is both decentralized and distributed. There is no single point of failure, making it difficult to shut down.
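That rerouting behavior is easy to demonstrate in miniature. The sketch below (hypothetical node names, with a plain breadth-first search standing in for a real mesh routing protocol) finds a path through a four-node mesh, then knocks out a node and finds a path again:

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an adjacency map; returns a node list or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# A small mesh: each node keeps links to its neighbors.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

print(shortest_path(mesh, "A", "D"))  # ['A', 'B', 'D']

# Knock out node B; traffic reroutes through C instead of failing.
del mesh["B"]
for peers in mesh.values():
    if "B" in peers:
        peers.remove("B")
print(shortest_path(mesh, "A", "D"))  # ['A', 'C', 'D']
```

A hub/spoke network modeled the same way would lose the route entirely when the hub disappears; in the mesh, the redundant C link keeps A and D connected.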

When creating mesh networks, we are mostly concerned with how devices are routing to and linking with one another. This means that most services you are used to running like HTTP or IRC daemons should be able to operate without a hitch. Additionally, you are presented with the choice of whether or not to create a darknet (completely separated from the Internet) or host exit nodes to allow your traffic out of the mesh.

Existing Community Mesh Networking Projects

One of the most well-known grassroots community mesh networks is Freifunk, based out of Germany, encompassing over 150 local communities with over 25,000 access points. Guifi.net, based in Spain, boasts over 27,000 nodes spanning over 36,000 km. In North America, we see projects like Hyperboria, which connects smaller mesh networking communities together such as Seattle Meshnet, NYC Mesh, and Toronto Mesh. We also see standalone projects like PittMesh in Pittsburgh, WasabiNet in St. Louis, and People’s Open Network in Oakland, California.

While each of these mesh networks may run different software and have a different base of users, they all serve an important purpose within their communities. Additionally, many of these networks consistently give back to the greater mesh networking community and choose to share information about their hardware configurations, software stacks, and infrastructure. This only benefits those who want to start their own networks or improve existing ones.

Picking Your Hardware & OS

When I was first starting out with Philly Mesh, I was faced with the issue of acquiring hardware on a shoestring budget. Many will tell you that the best hardware is low-power computers with dedicated wireless cards. This, however, can incur a cost of several hundred dollars per node. Alternatively, many groups make use of SOHO routers purchased off-the-shelf, flashed with custom firmware. The most popular firmware used here is OpenWRT, an open source alternative that supports a large majority of consumer routers. If you have a relatively modern router in your house, there is a good chance it is already supported (if you are buying specifically for meshing, consider consulting OpenWRT’s wiki for compatibility). Based on Linux, OpenWRT really shines with its packaging system, allowing you to easily install and configure packages of networking software across several routers regardless of most hardware differences between nodes. With only a few commands, you can have mesh packages installed and ready for production.
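As an illustration of how few commands that takes, on an OpenWRT router a popular layer-2 meshing package can be pulled in with opkg. The package names below (the batman-adv kernel module and its control tool) are assumptions; check the package index for your OpenWRT release:

```shell
# Refresh the package index, then install the batman-adv kernel
# module and its userspace control tool on an OpenWRT router
opkg update
opkg install kmod-batman-adv batctl
```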

Other groups are turning towards credit-card-sized computers like the BeagleBone Black and Raspberry Pi, using multiple USB WiFi dongles to perform over-the-air communication. Here, we have many more options for an operating system as many prefer to use a flavor of Linux or BSD, though most of these platforms also have OpenWRT support.

There are no wrong answers when choosing your hardware, though some platforms may be better suited to different scenarios. For the sake of getting started, spec’ing out some inexpensive routers (aim for something with at least two radios and 8MB of flash) or repurposing some Raspberry Pis is perfectly adequate and will help you learn the fundamental concepts of mesh networking as well as develop a working prototype that can be upgraded or expanded as needed (hooray for portable configurations). Make sure you consider options like indoor vs. outdoor use, 2.4 GHz vs. 5 GHz band, etc.

Meshing Software

You have OpenWRT or another operating system installed, but how can you mesh your router with others wirelessly? Now, you have to pick out some software that will allow you to facilitate a mesh network. The first packages to look at operate at what is called the data link layer of the OSI model of computer networking (OSI layer 2). Software here establishes the protocol that controls how your packets get transferred from node A to node B. Common software in this space includes batman-adv (not to be confused with the layer 3 B.A.T.M.A.N. daemon) and open80211s, which are available for most operating systems. Each of these pieces of software has its own strengths and weaknesses; it might be best to install each package on a pair of routers and see which one works best for you. There is currently a lot of praise for batman-adv, as it has been integrated into the mainline Linux tree and was developed by Freifunk for use within their own mesh network.

Revisiting the OSI model again, you will also need some software to work at the network layer (OSI layer 3). This will control your IP routing, allowing each node to compute where to send traffic next on its forwarding path to the final destination on the network. There are many software packages here, such as OLSR (Optimized Link State Routing), B.A.T.M.A.N. (Better Approach To Mobile Adhoc Networking), Babel, BMX6, and CJDNS (Caleb James Delisle’s Networking Suite). Each of these addresses the task in its own way, making use of a proactive, reactive, or hybrid approach to determine routing. B.A.T.M.A.N. and OLSR are popular here, both developed by Freifunk. Though B.A.T.M.A.N. was designed as a replacement for OLSR, each is actively used, and OLSR is heavily utilized in the Commotion mesh networking firmware (a router firmware based on OpenWRT).

For my needs, I settled on CJDNS which boasts IPv6 addressing, secure communications, and some flexibility in auto-peering with local nodes. Additionally, CJDNS is agnostic to how its host connects to peers. It will work whether you want to connect to another access point over batman-adv, or even tunnel over the existing Internet (similar to Tor or a VPN)! This is useful for mesh networks starting out that may have nodes too distant to connect wirelessly until more nodes are set up in-between. This gives you a chance to lay infrastructure sooner rather than later, and simply swap-out for wireless linking when possible. You also get the interesting ability to link multiple meshnets together that may not be geographically close.

Putting It Together

At this point, you should have at least one node (though you will probably want two for testing) running the software stack that you have settled on. With wireless communications, you can generally say that the higher you place the antenna, the better. Many community mesh groups try to establish nodes on top of buildings with roof access, making use of both directional antennas (to connect to distant nodes within the line of sight) as well as omnidirectional antennas to connect to nearby nodes and/or peers. By arranging several distant nodes to connect to one another via line of sight, you can establish a networking backbone for your meshnet that other nodes in the city can easily connect to and branch off of.

Gathering Interest

Mesh networks can only grow so much when you are working by yourself. At some point, you are going to need help finding homes for more nodes and expanding the network. You can easily start with friends and family – see if they are willing to host a node (they probably wouldn’t even notice it after a while). Otherwise, you will want to meet with like-minded people who can help configure hardware and software, or plan out the infrastructure. You can start small online by setting up a website with a mission statement and making a post or two on Reddit (/r/darknetplan in particular) or Twitter. Do you have hackerspaces in your area? Linux or amateur radio groups? A 2600 meeting you frequent? All of these are great resources to meet people face-to-face and grow your network one node at a time.

Conclusion

Starting a mesh network is easier than many think, and is an incredible way to learn about networking, Linux, micro platforms, embedded systems, and wireless communication. With only a few off-the-shelf devices, one can get their own working network set up and scale it to accommodate more users. Community-run mesh networks not only aid in helping those fed up with or persecuted by traditional network providers, but also those who want to construct, experiment, and tinker. With mesh networks, we can build our own future of communication and free the network for everyone.

 

Site/Project Updates

You may have noticed that some of my sites are now sporting forced https and ipv6 support. Here’s a little rundown of upgrades and updates.

  • famicoman.com – Forced https and ipv6, software updated. Fixed some broken static sites I’ve had available. ChannelEM, Techtat, and other old projects are available through their own subdomains and indexed on this page, https://static.famicoman.com/
  • noobelodeon.org, elcycle.org – Forced https and ipv6. All subdomains have the same treatment.
  • anarchivism.org – Forced https and ipv6, software updated. Now has a static subdomain for sites I’ve mirrored, https://static.anarchivism.org/
  • raunchytaco.com – Forced https and ipv6. Temporarily disabled the quote database as it is not compatible with the latest PHP. I am looking into Chirpy as an alternative.
  • obsoleet.com – After being down for a while, I’ve restored the site. Forced https and ipv6, software updated.

 

I’m currently splitting my time between writing, doing a little for mesh.philly2600.net, server migrations, and rebuilding Raunchy Taco. Let me know if anything is broken!

 

 

I’m in 2600 Magazine

As of the Autumn 2016 issue, I now have an article appearing in 2600: The Hacker Quarterly! My article is titled “Building DIY Community Mesh Networks,” and covers topics in building and organizing local mesh networks.


The issue can be purchased in Barnes & Noble stores, as well as physically or digitally through the 2600 site and Amazon.com. I will shortly be making the article available online as well.

 

The Evolution of Digital Nomadics

This article was originally written for and published at N-O-D-E on October 18th, 2016. It has been posted here for safe keeping.

THE EVOLUTION OF DIGITAL NOMADICS

In the Autumn of 1983, Steven K. Roberts pedaled off on a recumbent bicycle and pioneered a new revolution in the way people worked.


Stuck in the drudgery of suburban Ohio, Steve was bored. He had many possessions, a house, and work as a technology consultant and freelance writer. Steve desired adventure and felt like taking a risk, so he sold off all of his possessions, put his house on the market, cut ties with friends and family, and gave up his steady employment. He sacrificed the security he had built up over the years and invested in a custom bicycle, the “Winnebiko,” which he would ride 10,000 miles across the U.S. over the next 18 months. “My world was no longer limited by the constraints of time and distance—or even responsibility. The thought was both delicious and unsettling, and I suddenly realized, alone in this unfamiliar city, that I was as close to ‘home’ as I would be for a long time,” Steve wrote in a book about his travels, Computing Across America, published in 1988.

The Winnebiko was not your ordinary bicycle. Apart from the custom frame and hand-picked parts, Steve outfitted his rig with solar panels, lights, radios, a security system, and most importantly a TRS-80 portable computer. Traveling the country from couch to hostel and everywhere in-between, Steve continued to work as a freelance writer, documenting his adventures. Jacking into borrowed phone lines for network access late at night or writing from the comfort of an abandoned chair on the side of a snowy mountain, Steve was working in a way that was unconventional for the time.

Steve coined a term for himself, the “technomad,” combining the concepts of high-technology with traditional nomadics (the latter possibly being influenced in-part by nomadics as they were presented in Stewart Brand’s Whole Earth Catalog, a counter-culture publication promoting self-sufficiency and the do-it-yourself attitude in 1968). Later, Steve would construct more complex and technologically-enhanced bicycles for future long-term journeys.

The concept of “telecommuting” was not new in 1983, as the term had been coined a decade earlier by Jack Nilles, a former NASA engineer, to describe remote work done via dumb terminal. By the 1990s, after Steve’s original adventure, telecommuting had taken the world by storm and continued to grow. By the early 2010s, almost half of the U.S. workforce reported working remotely at least part time. Remote work was starting to go mainstream.

But then there are people like Steve. What became of this movement to leave it all behind and work from the open road? By the late 1990s, the phrase “digital nomad” appeared as the title of Makimoto and Manners’ book of the same name, which explored the concept of digital nomadics and its sustainability. The infrastructure to support the lifestyle was improving as well. We saw the inclusion of Wi-Fi technology in laptop computers and the rise of payment systems such as PayPal to support a generation of online-only workers on-the-move.

As time progressed, we only saw more of the tech-savvy convert to the rambling lifestyle, with bolder individuals traveling all over the world, settling down for days, weeks, or months at a time before picking up and starting all over. Today, more companies are providing this opportunity to their employees, with some outfits never actually meeting their workers face-to-face. Employees enjoy the flexibility while employers enjoy cherry-picking applicants from a larger pool and cutting the overhead previously spent on office space. Various communities have popped up, such as /r/digitalnomad (https://www.reddit.com/r/digitalnomad) and /r/vandwellers (https://www.reddit.com/r/vandwellers), to offer support for the grizzled vagabonds and tips for the bright-eyed newcomers. Here, you may find advice on what to carry, how to travel on a shoestring budget, and lists of companies that are nomad-friendly.

In popular culture, we see the idea of the digital nomad becoming more prevalent. For example, Ernest Cline’s 2011 novel Ready Player One features the character Aech who lives in and works out of a recreational vehicle. As the future comes into view, we can only expect more people to work remotely and live simply, embracing the freedom of change and fighting to avoid complacency. The technology is only becoming more accommodating as equipment becomes smaller, faster, and reliably connected in even the most rugged of situations. We not only see a rise in letting employees work where they want, but also when they want. Now that a network connection can exist within a jacket pocket, we are on the verge of the 24/7 worker, always on call. When your office isn’t anywhere, it’s everywhere. Some day soon, we may see digital nomads living in self-driving vehicles that methodically navigate the city limits while the occupant eats, sleeps, and works. Similar to Don DeLillo’s Cosmopolis, wherein the protagonist spends most of his day conducting business out of his moving soundproof, bulletproof limousine—a rolling fortress filled with computers and television screens—we may see this concept coming to fruition without the human behind the wheel.

As for Steve, he is still living the technomadic life, but is more drawn to the offerings of the water than the open road. “I’m now immersed in nautical projects, as well as building some substrate-independent technomadic tools,” Steve writes to me after I purchased a handful of issues of The Journal of High-Tech Nomadness, Steve’s own long out-of-print paper periodical.

Whether you do most of your work in an office or a coffee shop, you cannot deny that things are changing for the modern employee as they become more entwined with technology. “I’m riding a multi-megabyte Winnebiko with dozens of communications options, and more wonders lie just ahead,” Steve writes after upgrading his bicycle for his second journey. “[I]t is no longer very difficult to be a deeply involved, productive citizen of the world while wandering endlessly. Because once you move to Dataspace, you can put your body just about anywhere you like.”

––
BY MIKE DANK