[WANTED] Language Technology / Electric Word Magazine

Language Technology / Electric Word was a technology magazine that ran from 1987 to 1990, edited by Louis Rossetto, who later went on to start Wired Magazine.

Unfortunately, I can’t find any issues of this publication, and little is available online beyond the Wikipedia page, which states:

Electric Word was a bimonthly, English-language magazine published in Amsterdam between 1987 and 1990 that offered eclectic reporting on the translation industry, linguistic technology, and computer culture. Its editor was Louis Rossetto.

The magazine was launched under the title Language Technology by a translation company in Amsterdam, INK International. It was later renamed Electric Word and sold to a small Dutch media company. The magazine was terminated in 1990 due to insufficient revenues.

Electric Word was one of the first magazines published using desktop publishing software. It featured avant-garde graphics by the Dutch graphic designer Max Kisman.

After the failure of Electric Word, Rossetto and his partner Jane Metcalfe moved to San Francisco, California and established Wired Magazine.

Luckily, there is a now-defunct website located at http://rynne.org/electricword. Though the site is dead, we can see some cached information with the help of the Wayback Machine. Looking at this cached version of the site, https://web.archive.org/web/20100308041827/http://www.rynne.org/electricword, we can see some information about the publication and also note that some issues were released for PDF download: #3, #5, #7, and #20.

The PDFs were originally hosted at:

http://rynne.org/electricword/pdfs/ltew3.pdf
http://rynne.org/electricword/pdfs/ltew5.pdf
http://rynne.org/electricword/pdfs/ltew7.pdf
http://rynne.org/electricword/pdfs/ltew20.pdf

These are now long gone; I can find no trace of the PDFs, or even any issues for sale online.

Any help or information about locating issues would be extremely appreciated! We’re looking at a lost prototype for Wired magazine.

 

[WANTED] Chromed Pork Radio

Recently, I’ve been on the hunt for cyberpunk podcasts. Between the sci-fi dramas and current news shows, I found a surprising number of references to Chromed Pork, an interesting podcast by a group of phone phreaks and hackers that ran for 22 episodes from early 2008 to early 2009.

Chromed Pork seems to have started out as a group of friends on IRC. They came together (whether originally or later, I don't know) on Binary Revolution, a hacking website which previously ran the popular Binary Revolution Radio show and published its own zine. The radio show has since been merged into Hacker Public Radio, though BinRev is kept alive through forums and IRC. I have been a member of the BinRev forums for 10 years now and missed this show when it first premiered. In 2008, the podcast scene was less mature but still well established. There was an explosion of content, and it proved hard to keep up.

Chromed Pork Radio Logo

The BinRev forums do have archival posts, and I can find accounts for three hosts of Chromed Pork Radio: Multi-Mode, tacomaster, and Inode. Getting in touch with them seems difficult. The most recent login date for any of these three accounts is 2010, and the email address on tacomaster's profile bounces as unreachable if you try to send to it.

I did a little more digging on Chromed Pork's old Blogspot site, which contains old number scans and podcast show notes. I found links to episodes that have since died, apparently hosted on a “mobile-node.net” domain. This domain now points to yet another domain, but I'm not sure of that domain owner's involvement, if any. I've reached out to him and still hope to hear back one way or the other, but no word yet. I also found an old Chromed Pork general email address, but this too is deactivated.

Later, I reached out to /u/r3dk1ng on Reddit, whom I saw posting about Chromed Pork, and he was able to get me a good number (15 of 22) of the episodes, which I have since put on the Internet Archive here.

For reference, here is a list of the files I am still looking for:

ChromedPork-0012-Assorted_Bullsh-t.mp3
ChromedPork-0013-Phreaking.mp3
ChromedPork-0014-Guest-Wesley_Mcgrew.mp3
ChromedPork-0015-Newscasting.mp3
ChromedPork-0018-Porktopia_Election-Night.mp3
ChromedPork-0019-MC-Colo_and_other_news.mp3
ChromedPork-0020-Killing_Time.mp3

And here is a description of the podcast from the defunct radio.chromedpork.net site:

Chromed Pork Radio is an open information security “podcast”, featuring a variety of security related topics, such as Info and Comms Sec, Telephony, Programming, Electronics and Amateur Radio. We do our best to work on an open contribution model, meaning any listener is a potential host. We do not censor our shows but do ask that contributors keep all contributions purely informational or hypothetical. Contributions should consist only of material the contributor is legally entitled to share.
Given our open uncensored model, the views and opinions expressed in this “podcast” are strictly those of the contributor. The contents of this “podcast” are not reviewed or approved by Chromed Pork Media.
If you would like to contribute, or provide feedback, please visit our contribute section for details.

My trail has gone cold, and I’m still on the lookout for the remaining episodes or anyone who may have them.

 

(Re)Hacking a Boxee Box

I recently purchased an Amazon Fire TV Stick and love that it lets me sideload applications like Kodi (I still hate that name, long live XBMC!) for media streaming. I mainly use Samba/SMB shares on my network for my media, with most of my content living on an old WDTV Live Hub. The WDTV Hub works great and is still pretty stable after all of these years (except for a few built-in apps like YouTube; I wish they had kept up with updates), and the Fire TV will gladly chug away, playing any video over the network. However, I needed to stream my media to a third television, and I didn't want to uproot an existing device and carry it from room to room.

So I needed a third device. I already have a second-generation Roku kicking around, but it doesn't appear to be able to run anything other than the stock software at this time. I also considered a Raspberry Pi and wifi dongle, but this puts the price up to around $50 (which is more than the Fire TV Stick, and I want something cheap). I looked for a less expensive option among older media streamers and found a lot of information about the Boxee Box appliance put out by D-Link in 2008 and discontinued in 2011. I first encountered this box around 2012 when I was tasked with doing some reverse engineering on it, but that's another story. In the time since, a Google TV hacking team figured out they could do simple shell command injection when setting the Box's hostname, which eventually evolved into a group developing Boxee+Hacks, a replacement operating system. Since Boxee+Hacks, other developers have been working on a port of Kodi which you can install onto the Boxee to give you more options and better compatibility than the operating system's built-in features.

After some eBaying, I was able to get a Boxee for around $15, shipping included (Make sure you get model DSM-380!). The item description said that the box already had Boxee+Hacks installed and upgraded to the latest version, so I figured I was on my way to a quick installation of Kodi and could get up and running in minutes.

When I first booted the Boxee and checked out the Boxee+Hacks settings, I noticed that the device only had version 1.4 installed while the latest available was 1.6. The built-in updater did not work anymore, so the box never reported that a Boxee+Hacks update was available. Navigating the Boxee+Hacks forums was a little cumbersome, but I eventually found the steps I needed to update and launch Kodi. I've outlined them below to help any other lost travelers out there.

First, though, go through your Boxee settings and clear any thumbnail caches, local file databases, etc. We need all the free space we can get, and there will be installation errors if you don't have enough. The installation script we will run later automatically clears the device's temp directory, but it doesn't remove these cached files.

On the Boxee, go to Settings –> Network –> Servers and enable Windows file sharing.

If you already have Boxee+Hacks, connect the box and your computer to your home network and find the box's IP address, either on the Boxee's settings page or by checking for a new device on your router's console.

To make things really easy, telnet to your Boxee on port 2323 using your box’s IP address (Mine is 192.168.1.100).

 telnet 192.168.1.100 2323

Once there, we need to download and run the installer script.

curl -L http://tinyurl.com/boxeehacks | sh

If you DO NOT have Boxee+Hacks installed already, never fear. On the same Settings –> Network –> Servers page on your Boxee, locate the Hostname field and enter the following into it.

boxeebox;sh -c 'curl -L tinyurl.com/boxeehacks | sh'

Then, navigate away from the Settings page.

After executing the command through telnet, or through the Boxee settings page, the logo on the front of the box should glow red and you should receive on-screen instructions to perform the installation.

Boxee+Hacks installation screen, from http://boxeed.in/forums/viewtopic.php?f=5&t=1216

The installation guide works pretty well. Here, you will be prompted to install Kodi in addition to Boxee+Hacks. At this point I chose NOT to install Kodi. From what I read, once you install it through the script, it can be difficult to remove, and I didn't want to deal with the possibility of a difficult upgrade.

Instead, I decided to install Kodi on a flash drive. I have a cheap 512MB drive that has been kicking around for close to ten years, and it is perfect for fitting Kodi. To set up the flash drive, I formatted it as FAT32 and labeled the drive MEDIA. I'm not sure if either of these matters, but this configuration worked for me. I downloaded the latest Kodi release built for Boxee from the boxeebox-xbmc repository (version KODI_14.2-Git-2015-10-20-880982d-hybrid at the time of this writing) and unzipped it onto my flash drive. Make sure that all of the Kodi files are in the root directory of the drive, and not within the KODI_14.2-Git-2015-10-20-880982d-hybrid directory you get from extracting the archive.

It might also help to label the drive
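
For reference, preparing the drive on a Linux machine might look something like the sketch below. This is just an illustration under my own assumptions: /dev/sdb1 stands in for whatever device your flash drive actually shows up as (check with lsblk or dmesg before formatting anything), and the archive name assumes the release downloads as a .zip.

# Assumes the flash drive is /dev/sdb1 -- verify this first, formatting the wrong device destroys its data
sudo mkfs.vfat -F 32 -n MEDIA /dev/sdb1                                # format as FAT32 with the label MEDIA
sudo mount /dev/sdb1 /mnt
unzip KODI_14.2-Git-2015-10-20-880982d-hybrid.zip -d /tmp/kodi
sudo cp -r /tmp/kodi/KODI_14.2-Git-2015-10-20-880982d-hybrid/* /mnt/   # Kodi files go in the drive's root, not a subdirectory
sudo umount /mnt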

That's all there is to it: just plug the flash drive into the back of the Boxee and it is good to go. If you leave the flash drive in, the Boxee will boot right into Kodi. Leave it out and it will boot to standard Boxee+Hacks. If you boot into Boxee+Hacks and then want to load up Kodi, just plug in the flash drive and it loads automatically.

This turns an unassuming, seemingly obsolete device into a pretty powerful media center, and it is a quick, inexpensive way to stream your content to yet another television.

 

rtmbot-archivebotjr – A Slack Bot for Archiving

I've been playing with the idea of archiving more things when I'm on the go. Sometimes I find myself with odd pockets of time, like 10 minutes on a train platform or a few minutes left over at lunch, that I tend to spend browsing online. Inevitably, I find something I want to download later and tuck the link away, usually forgetting all about it.

Recently, I’ve been using Slack for some team collaboration projects (Slack is sort of like IRC in a nice pretty package, integrating with helpful online services) and was wondering how I could leverage it for some on-the-go archiving needs.

Slack has released its own bot, python-rtmbot, on GitHub, which you can run on your own server and hook into your Slack team to do bot things. The bot includes a few sample plugins (written in Python), but I went about creating my own to get some remote archiving features and scratch my itch.

The fruit of my labor also lives on GitHub as rtmbot-archivebotjr. This is not to be confused with Archive Team's ArchiveBot (I just stink at unique names). archivebotjr will sit in your Slack channels waiting for you to give it a command. The most useful are likely !youtube-dl (for downloading YouTube videos in the highest quality), !wget (for downloading things through wget; great when I find a disk image and don't want to download it on my phone), and !torsocks-wget (like !wget, but over Tor). I have a few more in there for diagnostics (!ping and !uptime), but you can see the whole list on the GitHub page.
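
To give a sense of how these commands are wired up, here is a stripped-down sketch in the style of a python-rtmbot plugin (a module-level outputs list plus a process_message function). The !ping handler below is purely illustrative; the actual plugins in the rtmbot-archivebotjr repository are the real reference.

# ping.py -- illustrative rtmbot-style plugin sketch, not the actual archivebotjr code
import subprocess

outputs = []  # rtmbot posts each [channel, message] pair appended to this list

def process_message(data):
    text = data.get('text', '')
    if text.startswith('!ping '):
        host = text.split(' ', 1)[1].strip()
        # Shell out to ping once and report back to the channel that asked
        alive = subprocess.call(['ping', '-c', '1', host]) == 0
        outputs.append([data['channel'], ('pong from ' if alive else 'no reply from ') + host])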

Right now, the bot is basic and lacks a wide array of features. The possibilities for other tools that could link into this are endless, and I hope to add more periodically. Either way, you can download all sorts of files relatively easily, and the bot seems reasonably stable for an initial release.

If you can fit this bot into your archiving workflow, try it out and let me know how it goes. Can it better fit your needs? Is something broken? Do you want to add a feature?

I want to hear about it!

 

The Best of 2015

As a nod to @fogus and his blog, Send More Paramedics, I’ve opted to start the annual tradition of recapping the year with the best things I’ve found, learned, read, etc.

These things are listed in no particular order, and may not necessarily be new.

Favorite Blog Posts Read

Not a lot here that I can recall, but this handful stood out as good reads. Some of them I plan to refer back to in the future.

Articles I’ve Written for Other Publications

I've tried something different this past year and have worked to write more for others than just for myself. This has been really fun, but it has reduced the total number of entries I have written here in general. I hope to find more like-minded outlets to contribute to. I like working with small teams like this instead of bouncing ideas around with only myself.

  • Finding Forgotten Footage – An article I did for Lunchmeat Midnight Snack #4 (a print zine) about finding strange VHS tapes with home-recorded footage.
  • Automating Site Backups with Amazon S3 and PHP – An article I did for the now-defunct TechOats website (still sad about that one). As the title describes, I automated backups of my websites using Amazon S3 and a simple PHP script.
  • The New Wild West – An article for NODE about how the internet of things and the sort of always-connected culture opens things up again for a wide variety of attacks. I draw parallels to the 1980’s boom of hacker culture where a lot of stuff was just left wide open.
  • How to Run your Own Independent DNS with Custom TLDs – A tutorial I did for NODE after remembering the failure of the .p2p project and the success of OpenNIC.

Favorite Technical Books Read

I’ve been trying to read a lot more this year to cut through my growing pile of books. I’ve mainly focused on technical books, including books I’ve only been made aware of in 2015 as well as ones that have been on my shelf for years.

  • Garage Virtual Reality – An antiquated virtual reality book from the ’90s that touches on a lot of interesting technology from the time, including homemade projects and technological dead ends. The perfect amount of technical instruction and cyberpunk ideas.
  • Hacking the Xbox: An Introduction to Reverse Engineering – An amazing book on reverse engineering. I picked this up around a decade ago, and it was completely over my head. At the time I dismissed it because it was already outdated with the popularity of “softmods” for the Xbox, but picking it up again it is really just a good general book on getting into reverse engineering and the focus on the Xbox is a fun nostalgic little bonus.
  • Cybernetics – A dated and likely obscure text, this book deals with the early ideas of cybernetics and expands into theory on artificial intelligence and neural networks.

Favorite Non-Technical Books Read

  • Microserfs – A fun book that follows a group of ’90s Microsoft employees as they start their own company.
  • Crypto – An incredible look into the world of cryptography, following all of the pioneers and the cypherpunk movement.
  • Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age – My favorite book of the year, a wonderfully detailed look into the rise and fall of Xerox PARC and all of the completely fascinating things they invented.
  • The World Atlas of Coffee: From Beans to Brewing – I love coffee and this book lets you learn about all the varieties, proper brewing techniques, etc.
  • Ready Player One – A fun dystopian sci-fi book about a civilization obsessed with a treasure hunt and ’80s culture.

 

Number of Books Read

12

Favorite Musicians Discovered

  • King Tuff
  • Elle King
  • FFS – Franz Ferdinand and Sparks
  • Devo – Everyone knows “Whip It,” but I’ve been focusing on their first few albums.

Favorite Television Shows

Mr. Robot (2015), The X-Files (1993)

Programming Languages Used for Work/Personal

C, C++, Java, JavaScript, Objective-C, Python.

Programming Languages I Want To Use Next Year

  • Common Lisp – A “generalized” Lisp dialect.
  • Clojure – A Lisp dialect that runs on the Java Virtual Machine
  • Go – Really interested to see how this scales with concurrent network programming.

Still Need to Read

Computer Lib, Literary Machines, Design Patterns, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10

Life Events of 2015

I became engaged to be married.

Life Changing Technologies Discovered

  • Amazon Dash Button – I hacked a $5 button to email me when I press it.
  • Ethereum – An interesting decentralized software platform. Still not entirely sure what to make of it.
  • Microsoft Hololens – I want one after seeing this video. I’ve already supported Oculus for VR, but this is winning me over for AR.

Favorite Subreddits

/r/homelab, /r/retrobattlestations, /r/cyberpunk, /r/homeautomation.

Plans for 2016

  • Get married.
  • Write more for NODE (if possible!), Lunchmeat, or other publications I find out about.
  • Write an article for 2600.
  • Find my missing Leatherman.
  • Release a mobile app.
  • Do some FPGA projects to get more in-depth with hardware.
  • Continue to flesh out Anarchivism with videos/print.
  • Organization, organization, organization!

 

See you in 2016!

 

How to Run your Own Independent DNS with Custom TLDs

This article was originally written for and published at N-O-D-E on September 9th, 2015. It has been posted here for safe keeping.

HOW TO RUN YOUR OWN INDEPENDENT DNS WITH CUSTOM TLDS

BACKGROUND

After reading what feels like yet another article about a BitTorrent tracker losing its domain name, I started to think about how trackers could have an easier time keeping a stable domain if they didn't have to register it through conventional methods. Among its many roles, the Internet Corporation for Assigned Names and Numbers (ICANN) controls domain names on the Internet and is well known for its work with the Domain Name System (DNS), specifically the operation of root name servers and governance over top-level domains (TLDs).

If you ever register a domain name, you pick a name you like and head over to an ICANN-approved registrar. Let's say I want my domain to be “n-o-d-e.net”. I see if I can get a domain with “n-o-d-e” affixed to the TLD “.net”, and after I register it, I'm presented with an easy-to-remember identification string which anyone in the world can use to access my website. After I map my server's IP address to the domain, I wait for the new entry to propagate, meaning the records for my domain are added or updated in my registrar's records. When someone wants to visit my website, they type “n-o-d-e.net” into the address bar of their browser and hit the enter key. In the background, their configured name server (usually belonging to their ISP) checks to see who controls the records for this domain, then works its way through the DNS infrastructure to retrieve the IP address matching the domain name and returns it to them.
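
If you want to watch this process happen, dig can walk the chain for you. Running something like the following from any machine with dig installed shows each referral, starting at the root servers and ending with the authoritative answer for the domain:

dig +trace n-o-d-e.net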

It's a reliable, structured system, but it is still controlled by an organization that has been known to retract domains from whomever it likes. What if you could resolve domains without going through this central system? What if there was a way to keep sites readily accessible without some sort of governing organization being in control?

I’m not the first to think of answers to these questions. Years ago, there was a project called Dot-P2P which aimed to offer “.p2p” TLDs to peer-to-peer websites as a way of protecting them against losing their domains. While the project had notable backing by Peter Sunde of The Pirate Bay, it eventually stagnated and dissolved into obscurity.

The organization that would have handled the “.p2p” domain registrations, OpenNIC, is still active and working on an incredible project itself. OpenNIC believes that DNS should be neutral, free, protective of your privacy, and devoid of government intervention. OpenNIC also offers new custom TLDs such as “.geek” and “.free” which you won’t find offered through ICANN. Anyone can apply for a domain and anyone can visit one of the domains registered through OpenNIC provided they use an OpenNIC DNS server, which is also backwards-compatible with existing ICANN-controlled TLDs. No need to say goodbye to your favorite .com or .net sites.

If you have the technical know-how to run your own root name server and submit a request to OpenNIC’s democratic body, you too could manage your own TLD within their established infrastructure.

Other projects like NameCoin aim to solve the issue of revoked domains by storing domain data for its flagship “.bit” TLD within its blockchain. The potential use cases for NameCoin take a radical shift from simple domain registrations when you consider what developers have already implemented for storing assets like user data in the blockchain alongside domain registrations.

But what if I wanted to run my own TLD without anyone’s involvement or support, and still be completely free of ICANN control? Just how easy is it to run your own TLD on your own root name server and make it accessible to others around the world?

INTRODUCTION

It turns out that running your own DNS server and offering custom TLDs is not as difficult as it first appears. Before I set out to work on this project, I listed some key points that I wanted to make sure I hit:

– Must be able to run my own top level domain
– Must be able to have the root server be accessible by other machines
– Must be backwards compatible with existing DNS

Essentially, I wanted my own TLD so I didn’t conflict with any existing domains, the ability for others to resolve domains using my TLD, and the ability for anyone using my DNS to get to all the other sites they would normally want to visit (like n-o-d-e.net).

REQUIRED

For this guide, you are going to need a Linux machine (a virtual machine or Raspberry Pi will work fine). My Linux machine is running Debian. Any Linux distribution should be fine for the job; if you use something other than Debian, you may have to change certain commands. You will also want a secondary machine to test your DNS server. I am using a laptop running Windows 7.

Knowledge of networking and the Linux command line may aid you, but is not necessarily required.

CHOOSING A DNS PACKAGE

I needed DNS software to run on my Linux machine, and decided upon an old piece of software called BIND. BIND has been under criticism lately because of various vulnerabilities, so make sure that you read up on any issues BIND may be experiencing and understand the precautions as you would with any other software you may want to expose publicly. I am not responsible if you put an insecure piece of software facing the internet and get exploited.

It is important to note that I will be testing everything for this project on my local network. A similar configuration should work perfectly for any internet-facing server.

Other DNS software exists out there, but I chose BIND because it is something of a standard with thousands of servers running it daily in a production environment. Don’t discount other DNS packages! They may be more robust or secure and are definitely something to consider for a similar project.

HOW-TO GUIDE:

Step 1. Initial Configuration

Connect your Linux machine to the network and check the network interface status.

ifconfig

The response to the command should look similar to this:

eth0      Link encap:Ethernet  HWaddr f0:0d:de:ad:be:ef
          inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8209495 errors:0 dropped:386 overruns:0 frame:0
          TX packets:9097071 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2124485459 (1.9 GiB)  TX bytes:1695684733 (1.5 GiB)

Make sure your system is up-to-date before we install anything.

sudo apt-get update
sudo apt-get upgrade

Step 2. Installing & Configuring BIND

Change to the root user and install BIND version 9. Then stop the service.

su -
apt-get install bind9
/etc/init.d/bind9 stop

Now that BIND is installed and not running, let’s create a new zone file for our custom TLD. For this example, I will be using “.node” as my TLD but feel free to use any TLD of your choosing.

cd /etc/bind
nano node.zone

Paste the following into the file and edit any values you see fit, including adding any domains with corresponding IP addresses. For a full explanation of these options, visit http://www.zytrax.com/books/dns/ch6/mydomain.html, which has a nice write-up on the format of a zone file. I did find that I needed to specify an NS record with a corresponding A record or BIND would not start.

As you can see below, a lot of this zone file is boilerplate, but I did specify a record for “google”, which means that “google.node” will point to the IP address 8.8.8.8.

When you are done editing, save the file with CTRL-X.

       ;
       ; BIND data file for TLD “.node”
       ;
       $TTL    604800  ; (1 week)
       @       IN      SOA     node. root.node. (
       2015091220      ; serial (timestamp)
       604800          ; refresh (1 week)
       86400           ; retry (1 day)
       2419200         ; expire (28 days)
       604800 )        ; minimum (1 week)
       ;
       @         IN    NS    ns1.node.    ; this is required
       ;@        IN    A       0.0.0.0         ; unused right now, semicolon comments out the line
       google  IN    A       8.8.8.8
       ns1       IN    A       0.0.0.0         ; this is also required
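
If the named-checkzone utility is available on your system (on Debian it ships in the bind9utils package, usually pulled in alongside BIND), you can sanity-check the zone file before wiring it into the configuration. It should report the zone loading with your serial number, followed by OK:

named-checkzone node. /etc/bind/node.zone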

Now, we need to edit the default zones configuration file to include our new zone.

nano named.conf.default-zones

At the bottom, paste the following block to add our new zone to the configuration.

zone "node." {
        type master;
        file "/etc/bind/node.zone";
        allow-transfer { any; };
        allow-query { any; };
};

Now find the block in the file similar to the below:

zone "." {
        type hint;
        file "/etc/bind/db.root";
};

Replace this block with the following to make our root server a slave to master root server 75.127.96.89. This is one of OpenNIC’s public DNS servers and by marking it as a master, we can also resolve OpenNIC TLDs as well as any TLDs under control of ICANN.

zone "." in {
        type slave;
        file "/etc/bind/db.root";
        masters { 75.127.96.89; };
        notify no;
};

After saving the file, we want to generate a new root hints file by querying OpenNIC. This can be done with the dig command.

dig . NS @75.127.96.89 > /etc/bind/db.root
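
Before restarting, it can also be worth checking that the configuration files parse cleanly. The named-checkconf utility (installed alongside named-checkzone) prints nothing at all if everything is in order:

named-checkconf /etc/bind/named.conf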

Finally, restart BIND.

/etc/init.d/bind9 restart

You should see:

[ ok ] Starting domain name service…: bind9.

Configuration of the server on your Linux machine is now done!

Step 3. Configure Other Machines to Use Your Server

On your Windows machine (on the same local network), visit the Network Connections panel by going to Control Panel -> Network and Internet -> Network Connections.

Right-click on your current network connection and select Properties. On the resulting Network Connection Properties dialog, select Internet Protocol Version 4 (TCP/IPv4) if you are using IPv4 for your local network or Internet Protocol Version 6 (TCP/IPv6). Since I am using IPv4, I will be selecting the former.

Next, click the Properties button. On the resulting Internet Protocol Properties dialog, select the radio button for “Use the following DNS server addresses.” Enter the IP address of your Linux machine in the Preferred DNS server box (192.168.1.12 from my example, but make sure you use the IP address of your Linux machine) and then click the OK button. Back on the Network Connection Properties dialog, click the Close button.

Now, load up a command shell and ping one of our defined domains.

ping google.node

You should see the following:

Pinging google.node [8.8.8.8] with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=15ms TTL=55
Reply from 8.8.8.8: bytes=32 time=17ms TTL=55
Reply from 8.8.8.8: bytes=32 time=16ms TTL=55

Congratulations, you now have a DNS server which will not only resolve your custom TLD but is also accessible to other machines.
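
If you would rather test resolution without touching a machine's DNS settings, you can also query the new server directly. From any machine with dig installed (or with nslookup on Windows), point the query at your Linux machine's IP address:

dig @192.168.1.12 google.node A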

NEXT STEPS

This is just a proof of concept, and it could easily be expanded upon for future projects. If you are wondering where to go from here, you could move on to make your DNS server publicly accessible and expand its offerings. Further, you could construct multiple DNS nodes to act as slaves or links to your root server as a way of distributing the network, making it more reliable and geographically accessible.

While I don’t think many BitTorrent trackers will be quick to adopt a system such as this, it still shows that you can create and resolve custom TLDs which may be useful for constructing alternative networks.

SOURCES

http://wiki.opennicproject.org/Tier2ConfigBindHint
http://timg.ws/2008/07/31/how-to-run-your-own-top-level-domain/
http://www.unixmen.com/setup-dns-server-debian-7-wheezy/

––
BY MIKE DANK (@FAMICOMAN)

 

[WANTED] Videostatic (1989)

Through a strange series of links, I have become aware of a 1989 film called Videostatic. Distributed independently for $10/tape, Videostatic looks like some sort of insane hodgepodge of clips and video effects that I am strangely drawn to.

Here is a synopsis written around 1998 from Gareth Branwyn’s Street Tech,

This is a 60-minute audio-visual journey to the edges of alternative art-making and experimental video. The tape is divided up into four sections: “Poems” (intuitive, non-narrative, alogical), “Paintings” (video equivalent to the conventional canvas), “Stories” (Event-based sequences), and “Messages” (rhetorical stances, public “service” announcements). The most impressive pieces here are “Sex with the Dead,” a video memory-jog of our morbidly nostalgic culture by Joe Schwind, “Of Thee I Sing/Sing/Sing,” a musique concrete video by Linda Morgan-Brown and the Tape-Beatles, and “Glossolalia,” (Steve Harp) an absolutely mind-fucking excursion into language, synaesthetic experience, and the structuring of human thought and perception. Surrounded by the curious, the kooky, and the just plain boring (as kook-tech artist Douglass Craft likes to say: “Not every experiment was a success.”) At only $10, this is an insane bargain.

Videostatic compilers John Heck and Lloyd Dunn (of Tape-Beatles’ fame) plan on putting out a series of these tapes. As far as we know, 1989 is the latest release. Write for more info (or to submit material).

ACCESS:
Videostatic
911 North Dodge St.
Iowa City, IA 52245
$10/ 60-minute VHS cassette

I’m looking for this in any format, digital or physical.

I’m not quite sure what I’m in store for.

EDIT: Here is some additional information from the PhotoStatic Archive,

VideoStatic 1989 was released in June, 1989. It is a video compilation along much the same lines as the PhonoStatic cassettes. It contains roughly an hour of video and film work by both networking artists and Iowa City locals. It was edited by John Heck and Lloyd Dunn. At the time of this writing (6/90) VideoStatic 1990 was not yet begun, but plans are underway. It will be edited by Linda-Morgan Brown and Lloyd Dunn.

 

The New Wild West

This article was originally written for and published at N-O-D-E on August 3rd, 2015. It has been posted here for safe keeping.

THE NEW WILD WEST

A few years ago, I was fortunate enough to work professionally with low energy RF devices under a fairly large corporation. We concerned ourselves with wireless mesh networking and were responsible for tying together smart devices, like light bulbs or door locks installed in your home, into an information-driven digital conglomerate. You know those commercials you see on TV where the father remotely unlocks the door for his child or the businesswoman checks to make sure she left the patio light on? That was us. At the touch of a button on your tablet, miles away, you can open the garage door or flip on the air conditioner. These are products that are designed to make life easier.

In research and development, we view things differently than the stressed-out, on-the-go homeowner might. We don’t necessarily think about what the user might want to buy, but ask the question, “when we roll these things out, how will people try to exploit and break them?” In the confines of a tall, mirror-glass office building, my packet sniffer lights up like a Christmas tree. Devices communicate in short bursts through the airwaves, chirping to one another for all to hear. Anyone with the curiosity and some inexpensive hardware can pick up this kind of traffic. Anyone can see what is traveling over the air. Anyone can intervene.

EXPLORATION

Things weren’t so different a few decades ago. Back in the ‘70s we saw the rise of the phone phreak. Explorers of the telephone system, these pioneers figured out how to expertly maneuver through the lines, routing their own calls and inching further into the realm of technological discovery. We saw innovators like John Draper and even Steve Wozniak & Steve Jobs peeking into the phone system to see how it ticks and what secrets they could unlock. It wasn’t long before people started connecting their personal microcomputers to the phone line, lovingly pre-installed in their houses for voice communication, and explored computerized telephone switches, VAXen, and other obscure machines — not to mention systems controlled by third parties outside the grasp of good old Ma Bell.

This was the wild west, flooded by console cowboys out to make names for themselves. The systems out there were profoundly unprotected. And why not? Only people who knew about these machines were supposed to be accessing them, no use wasting time to think about keeping things secure. Many machines were simply out there for the taking, with nobody even contemplating how bored teenagers or hobbyist engineers might stumble across them and randomly throw commands over the wire. If you had a computer, a modem, and some time on your hands, you could track down and access these mysterious systems. Entire communities were built around sharing information to get into computers that weren’t your own, and more of these unsecured systems popped up every week. It seemed like the possibilities were endless for the types of machines you would be able to connect to and explore.

Today, many will argue that we focus much more on security. We know that there are those who are going to probe our systems and see what’s open, so we put up countermeasures: concrete walls that we think and hope can keep these minds out. But what about newer technologies? How do we handle the cutting edge? The Internet of Things is still a relatively new concept to most people — an infant in the long-running area of computing. We have hundreds if not thousands of networked devices that we blindly incorporate into our own technological ecosystems. We keep these devices in our homes and on our loved ones. There are bound to be vulnerabilities, insecurities, cracks in the armor.

UBICOMP

Maybe you don’t like the idea of outlets that know what is plugged into them or refrigerators that know when they’re out of food. Maybe you’re a technological hold-out, a neo-luddite, a cautious person who needs to observe and understand before trusting absolutely. This may feel like the ultimate exercise of security and self-preservation, but how much is happening outside of your control?

When the concept of ubiquitous computing was first developed by Mark Weiser at Xerox PARC in the late ‘80s, few knew just how prominent these concepts would be in 25 years. Ubiquitous computing pioneered the general idea of “computing everywhere” through the possibility of small networked devices distributed through day-to-day life. If you have a cellular telephone, GPS, smart watch, or RFID-tagged badge to get into the office, you’re living in a world where ubiquitous computing thrives.

We've seen a shift from centralized systems like mainframes and minicomputers to these smaller, decentralized personal devices. We now have machines, traditional personal computers and smart-phones included, that can act independently of a centralized monolithic engine. These devices are only getting smaller, more inexpensive, and more available to the public. We see hobby applications for moisture sensors and home automation systems using off-the-shelf hardware like Arduinos and Raspberry Pis. The technology we play with is becoming more independent and increasingly capable of autonomous communication. Little intervention is needed from an operator, if any is needed at all.

For all of the benefits we see from ubiquitous computing, there are negatives. While having a lot of information at our fingertips and an intuitive process to carry out tasks is inviting, the intrusive nature of the technology can leave many slow to adopt. As technology becomes more ubiquitous, it may also become more pervasive. We like the idea of a smart card to get us on the metro, but don’t take so kindly to knowing we are tracked and filed with every swipe. Our habits have become public record. In the current landscape of the “open data” movement, everything from our cell phone usage to parking ticket history can become one entry in a pool of data that anyone can access. We are monitored whether we realize it or not.

FUTURE

We have entered uncharted territory. As more devices make their way to market, the more possibilities there are for people to explore and exploit them. Sure, some vendors take security into consideration, but nobody ever thinks their system is vulnerable until it is broken. Consider common attacks we see today and how they might ultimately evolve to infect other platforms. How interesting would it be if we saw a DDoS attack that originated from malware found on smart dishwashers? We have these devices that we never consider to be a potential threat to us, but they are just as vulnerable as any other entity on the web.

Consider the hobbyists out there working on drones, or even military applications. Can you imagine a drone flying around, delivering malware to other drones? Maybe the future of botnets is an actual network of infected flying robots. It is likely only a matter of time before we have a portfolio of exploits which can hijack these machines and overthrow control.

Many attacks taken on computer systems in the present day can trace their roots back over decades. We see a lot of the same concepts growing and evolving, changing with the times to be more efficient antagonists. We could eventually see throwbacks to the days of more destructive viruses appear on our modern devices. Instead of popping “arf arf, gotcha!” on the screen and erasing your hard drive, what if we witnessed a Stuxnet-esque exploit that penetrates your washing machine and shrinks your clothes by turning the water temperature up?

I summon images from the first volume of the dystopian Transmetropolitan. Our protagonist Spider Jerusalem returns to his apartment only to find that his household appliance is on drugs. What does this say about our own future? Consider Amazon's Echo or even Apple's Siri. Is it only a matter of time before we see modifications and hacks that can cause these machines to feel? Will our computers hallucinate and spout junk? Maybe my coffee maker will only brew half a pot before it decides to no longer be subservient in my morning ritual. This could be a far-off concept, but as we incorporate more smart devices into our lives, we may one day find ourselves incorporated into theirs.

CONCLUSION

Just as we saw 30 years ago, there is now an explosion of new devices ready to be accessed and analyzed by a ragtag generation of tinkerers and experimenters. If you know where to look, there is fruit ripe for the picking. We’ve come around again to a point where the cowboys make their names, walls are broken down, and information is shared openly between those who are willing to find it. I don’t know what the future holds for us as our lives become more intertwined with technology, but I can only expect that people will continue to innovate and explore the systems that compose the world around them.

And with any hope, they’ll leave my coffee maker alone.

––
BY MIKE DANK (@FAMICOMAN)

 

Automating Site Backups with Amazon S3 and PHP

This article was originally written for and published at TechOats on June 24th, 2015. It has been posted here for safe keeping.

I host quite a few websites. Not a lot, but enough that the thought of manually backing them up at any regular interval fills me with dread. If you're going to do something more than three times, it is worth the effort of scripting it. A while back I got a free trial of Amazon's Web Services, and decided to give S3 a try. Amazon S3 (standing for Simple Storage Service) allows users to store data and pay only for the space used, as opposed to a flat rate for an arbitrary amount of disk space. S3 is also scalable; you never have to worry about running out of a storage allotment, since you get more space automatically.

S3 also has a web services interface, making it an ideal tool for system administrators who want to set it and forget it in an environment they are already comfortable with. As a Linux user, there were already a myriad of tools out there for integrating with S3, and I was able to find one to aid me with my simple backup automation.

First things first, I run my web stack on a CentOS installation. Different Linux distributions may have slightly different utilities (such as package managers), so these instructions may differ on your system. If you see anything along the way that isn’t exactly how you have things set up, take the time and research how to adapt the processes I have outlined.

In Amazon S3, before you back up anything, you need to create a bucket. A bucket is simply a container that you use to store data objects within S3. After logging into the Amazon Web Services Console, you can configure it using the S3 panel and create a new bucket using the button provided. Buckets can have different price points, naming conventions, or physical locations around the world. It is best to read the documentation provided by Amazon to figure out what works best for you, and then create your bucket. For our purposes, any bucket you create is treated the same and shouldn't cause any problems regardless of the configuration you choose.

After I created my bucket, I stumbled across a tool called s3cmd which allows me to interface directly with my bucket within S3.

To install s3cmd, it was as easy as bringing up my console and entering:

sudo yum install s3cmd

The application will install, easy as that.

Now, we need a secret key and an access key from AWS. To get this, visit https://console.aws.amazon.com/iam/home#security_credential and click the plus icon next to Access Keys (Access Key ID and Secret Access Key). Now, you can click the button that states Create New Access Key to generate your keys. They should display in a pop-up on the page. Leave this pop-up open for the time being.

Back to your console, we need to edit s3cmd’s configuration file using your text editor of choice, located in your user’s home directory:

nano ~/.s3cfg

The file you are editing (.s3cfg) needs both the access key and the secret key from that pop-up you saw earlier on the AWS site. Edit the lines beginning with:

access_key = XXXXXXXXXXXX
secret_key = XXXXXXXXXXXX

Replace each string of “XXXXXXXXXXXX” with your respective access and secret keys from earlier. Then, save the file (CTRL+X in nano, if you are using it).
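
At this point it is worth confirming that s3cmd can talk to your account. Listing your buckets is a quick test, and if you would rather create the bucket from the command line than through the web console, s3cmd can do that too (U62 is just the example bucket name used later in this article):

s3cmd ls              # should list the buckets on your account
s3cmd mb s3://U62     # optionally, create a new bucket from the command line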

Now we are ready to write the script to do the backups. For the sake of playing with different languages, I chose to write my script using PHP. You could accomplish the same behavior using Python, Bash, Perl, or other languages, though the syntax will differ substantially. First, our script needs a home, so I created a backup directory within my home directory to house the script and any local backup files I create. Then, I changed into that directory and started editing my script using the commands below:

mkdir backup
cd backup/
nano backup.php

Now, we’re going to add some code to our script. I’ll show an example for backing up one site, though you can easily duplicate and modify the code for multiple site backups. Let’s take things a few lines at a time. The first line starts the file. Anything after <?php is recognized as PHP code. The second line sets our time zone. You should use the time zone of your server’s location. It’ll help us in the next few steps.

<?php
date_default_timezone_set('America/New_York');

So now we dump our site's database by executing the command mysqldump through PHP. If you don't run MySQL, you'll have to modify this line to use your database solution. Replace the username, password, and database name on this line as well. This will allow you to successfully back up the database and timestamp it for reference. The following line will archive and compress your database dump using gzip compression. Feel free to use your favorite compression in place of gzip. The last line will delete the original .sql file using PHP's unlink (note that PHP does not expand ~, so the path is built with getenv('HOME')), since we only need the compressed one.

exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv('HOME')."/backup/sitex.com-".date('Y-m-d').".sql");

The next line will archive and gzip your site’s web directory. Make sure you check the directory path for your site, you need to know where the site lives on your server.

exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");

Now, an optional line. I didn't want to keep any web directory backups older than three months. This deletes the web directory archive created three months ago; since the script will run monthly (see the cron job below), that leaves a rolling three months of backups. You can also duplicate and modify this line to remove the database archives, but mine don't take up too much space, so I keep them around for easy access.

@unlink(getenv('HOME')."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");

Now the fun part. These commands will push the backups of your database and web directory to your S3 bucket. Be sure to replace U62 with your bucket name.

exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");

Finally, end the file, closing that initial <?php tag.

?>

Here it is all put together (in only ten lines!):

<?php
date_default_timezone_set('America/New_York');
exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv('HOME')."/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");
@unlink(getenv('HOME')."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");
?>

Okay, our script is finalized. Save it and run it with the command below in your console to test it out!

php backup.php

Provided you edited all the paths and values properly, your script should push the two files to S3! Give yourself a pat on the back, but don’t celebrate just yet. We don’t want to have to run this on demand every time we want a backup. Luckily we can automate the backup process. Back in your console, run the following command:

crontab -e

This will load up your crontab, allowing you to add jobs to Cron: a time-based scheduler. The syntax of Cron commands is out of the scope of this article, but the information is abundant online. You can add the line below to your crontab (be sure to edit the path to your script) and save it so it will run on the first of every month.

0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php

… 

 

[WANTED] How to Build A Red Box VHS

I was looking through old issues of Blacklisted! 411 and found an advertisement in a 1995 issue for a 60-minute VHS tape about how to build a red box using a Radio Shack pocket tone dialer. For those who don't know, red boxes were popular in the '90s and used by phreakers, scammers, and those who just wanted free payphone calls. By modifying pocket dialers (or even just recording the sounds that coins made as they were dropped into a phone), anyone could make a red box which would mimic the tones produced when coins were inserted into a payphone. This meant that anywhere you took your red box, you could play back the tones and get free phone calls.

Anyway, this video was made and sold in 1995 by East America Company in Englewood, New Jersey. It retailed for $39 (plus $5 shipping) and I would love a copy. See the image below for a review of the tape and the original advertisement.

redbox