I Wrote An App

I’ve been putting off this post for a while. Not for any reason in particular; I just like to have things arranged a certain way before I push them out to people.

This is analogous to the mobile app project this post is about. In 2013, working from a friend’s idea, I created a mobile application that lets a user send a random insulting text to someone on their contacts list. It was for fun of course, and we called it BitchyTexts. It was (and still is) Android-only, and was developed over the course of a few weeks in the little time I had between classes. I distributed it to my friends, who distributed it to their friends, and the results were mostly positive. It was crude and thrown together, but it worked and did its job well.

The next logical step of course was a Play Store release. However, I needed to clean my code up, get things under version control, and brave the submission process. I worked a little here and there, but ultimately getting the app out the door fell to the bottom of my priority list. In late 2015, two years after I decided I wanted to do a Play Store release, I picked development back up and started knocking out little pieces here and there to reach my desired outcome.

This became one of my 2016 goals, and I was chomping at the bit to release something. There was no use sitting on it; store releases are an iterative process, and I could always improve here and there after the application was live.

So, I submitted it. It was approved, and it’s out there for anyone to download and use. There are changes I want to make, and there are other things I want to work on for it (an improved website, back-end services, etc.), but those can come at any time. There is a lot of planning to do, but nothing too crazy.

BitchyTexts in action!

Check it out here: https://play.google.com/store/apps/details?id=com.bt.bitchytexts

Let me know what you think!

 

I2P 101 – Inside the Invisible Internet

This article was originally written for and published at N-O-D-E on May 1st, 2016. It has been posted here for safe keeping.

I2P 101 – INSIDE THE INVISIBLE INTERNET

The Invisible Internet Project (more commonly known as I2P) is an older, traditional darknet built from the ground up with privacy and security in mind. As with all darknets, accessing an I2P site or service is not as simple as firing a request off from your web browser as you would with any site on the traditional Internet (the clearnet). I2P is only accessible if you are running software built to access it. If you try to access an I2P service without doing your homework, you won’t be able to get anywhere. Instead of creating all new physical networking infrastructure, I2P builds upon the existing Internet to take care of physical connections between machines, creating what is known as an overlay network. This is similar to the concept of a virtual private network (VPN) wherein computers can communicate with one another comfortably, as though they were on a local area network, even though they may be thousands of miles apart.

INTRODUCTION

I2P was first released in early 2003 (only a few months after the initial release of Tor) and was designed as a communication layer for existing Internet services such as HTTP, IRC, and email. Unlike the clearnet, I2P focuses on anonymity and peer-to-peer communication, relying on a distributed architecture model. And unlike Tor, which is built around navigating the clearnet anonymously through the Tor network, I2P’s goal from the start was to create a destination network, and it was developed as such. Here, the focus is on community and anonymity within the network, as opposed to anonymity when using the clearnet.

ROUTERS, INPROXIES & OUTPROXIES

When you connect to I2P, you are automatically set up to be a router. If you are a router, you exist as a node on the network and participate in directing or relaying the flow of data. As long as you are on the network, you are always playing a part in keeping the traffic flowing. Other users may choose to configure their nodes as inproxies. Think of an inproxy as a way to get to an I2P service from the clearnet. For example, if you wanted to visit an eepsite (an anonymous site hosted on I2P, designated by a .i2p TLD) but were not on I2P, you could visit an inproxy through the clearnet to provide you access. Other users may choose to operate outproxies. An outproxy is essentially an exit node. If you are on I2P and want to visit a clearnet site or service, your traffic is routed through an outproxy to get out of the network.

ADVANTAGES

There are numerous advantages to using I2P over another darknet such as Tor, depending upon the needs of the user. With I2P, we see a strong focus on the anonymity of connections, as all I2P tunnels are unidirectional: separate lines of communication are opened for sending and receiving data. Further, tunnels are short-lived, decreasing the amount of information an attacker or eavesdropper could have access to. We also see differences in routing, as I2P uses packet switching as opposed to circuit switching. With packet switching, messages are load-balanced among multiple peers on the way to the destination instead of following the single route typical of circuit switching, and all peers participate in routing. I2P also implements distributed dissemination of network information: peer information is dynamically and automatically shared across nodes instead of living on a centralized server. Additionally, the overhead of running a router is low, because every node is a router instead of routing falling to the small percentage of users who choose to set one up.

GARLIC ROUTING

I2P implements garlic routing as opposed to the more well-known onion routing. Both garlic routing and onion routing rely on the technique of layered encryption. On the network, traffic flows through a series of peers on the way to its final destination. Messages are encrypted multiple times by the originator using the peers’ public keys. As the message travels along the path and is decrypted by the proper corresponding peer in the sequence, only enough information to pass the message to the next node is exposed, until the message reaches its destination, where the original message and routing instructions are revealed. The initial encrypted message is layered and resembles an onion that has its layers peeled back in transit.

Garlic routing extends this concept by grouping messages together. Multiple messages, referred to as “cloves,” are bound together, each with its own routing instructions. This bundle is then layered just as with onion routing and sent off to peers on the way to the destination. There is no set size for how many messages are included in one bundle, providing another level of complexity in message delivery.
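To make the bundling and layering concrete, here is a minimal toy sketch in Python. It is purely illustrative: a throwaway XOR keystream stands in for I2P’s real cryptography, and the hop keys, clove fields, and destinations are all invented for the example.

import hashlib
import json

def xor_crypt(key, data):
    # Toy cipher: XOR against a SHA-256-based keystream. Not secure;
    # it only demonstrates that layers can be added and peeled off.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# A "garlic" bundle: several cloves, each with its own routing instructions.
cloves = [
    {"deliver_to": "somesite.i2p", "payload": "GET /"},
    {"deliver_to": "othersite.i2p", "payload": "hello"},
]
bundle = json.dumps(cloves).encode()

# Layer the bundle once per hop; the innermost layer is for the last hop.
hop_keys = [b"key-peer-1", b"key-peer-2", b"key-peer-3"]
message = bundle
for key in reversed(hop_keys):
    message = xor_crypt(key, message)

# Each peer along the path peels exactly one layer, in order.
for key in hop_keys:
    message = xor_crypt(key, message)

print(json.loads(message.decode()))  # the original cloves come back out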

INSIDE THE NETWORK

Hundreds of sites and services exist for use within the I2P network, completely operated by the community. For example, Irc2P is the premier IRC network for chat. We see search engines like eepSites & Epsilon, and torrent trackers like PaTracker. Social networks like Id3nt (for microblogging) and Visibility (for publishing) are also abundant. If you can think of a service that can run on the network, it may already be operational.

FUTURE

I2P remains in active development, with multiple releases per year, and continues to be popular within its community. While I2P is not as popular as other darknets such as Tor, it remains a staple of alternative networks and is often praised for its innovative concepts. Though I2P does not focus on anonymous use of the clearnet, it sees active use for both peer-to-peer communication and file-sharing services.

CONCLUSION

While many may view I2P as just another darknet, it has many interesting features that aren’t readily available or implemented on other networks. Due to the community and regular updates, there is no reason to think that I2P will be going anywhere anytime soon and will only continue to grow with more awareness and support.

Over time, more and more people have embraced alternative networks, and we are bound to see more usage on the horizon. However, one of the points I2P maintainers make is that the network’s small size and limited adoption may be helpful at this point in time. I2P is not as prominent in the public’s field of view, possibly protecting it from negative publicity and potential attackers.

Whether or not I2P will keep hold of its core community or expand and change with time is unknown, but for now it proves to be a unique darknet implementation with a lot of activity.

SOURCES

https://geti2p.net/en/comparison/tor
https://www.ivpn.net/privacy-guides/an-introduction-to-tor-vs-i2p
https://geti2p.net/en/about/intro
https://geti2p.net/en/docs/how/garlic-routing

––
BY MIKE DANK (@FAMICOMAN)

 

Hyperboria 101 – Moving Through The Mesh

This article was originally written for and published at N-O-D-E on February 14th, 2016. It has been posted here for safe keeping.

HYPERBORIA 101 – MOVING THROUGH THE MESH

Hyperboria is a network built as an alternative to the traditional Internet. In simple terms, Hyperboria can be thought of as a darknet, meaning it runs on top of, or hidden from, the existing Internet (the clearnet). If you have ever used Tor or I2P, it is a similar concept. Unlike the Internet, with thousands of servers you may interact with on a day-to-day basis, access to Hyperboria is restricted in the sense that you need specific software, as well as someone already on the network, to access it. After configuring the client, you connect into the network, which provides you with access to each node therein.

INTRODUCTION

Hyperboria isn’t just any alternative network; it’s decentralized. There is no central point of authority, no financial barrier to entry, and no government regulation. Instead, there is a meshnet: peer-to-peer connections between user-controlled nodes. Commonly, mesh networks are seen in wireless communication. Access points are configured to link directly with other access points, creating multiple connections to support the longevity of the network infrastructure and the traffic traveling over it. Here, more connections between nodes are better than fewer. With this topology, all nodes are treated equally, which allows networks to be set up inexpensively, without the infrastructure needed to run a typical ISP, where user traffic usually travels up through several gateways or routers owned by other companies.

But what is the goal of the Hyperboria network? With roots in Reddit’s /r/darknetplan, we see that the existing Internet has issues with censorship, government control, anonymity, security, and accessibility. /r/darknetplan has a lofty goal of creating a decentralized alternative to the Internet as we know it through a scalable stack of commodity hardware and open source software. This shifts the infrastructure away from physical devices owned by internet service providers, and instead puts hardware in the hands of the individual. This in itself is a large undertaking, especially considering the physical distance between those interested in joining the network, and the complexities of linking them together.

While the ultimate idea is a worldwide wireless mesh connecting everyone, it won’t happen overnight. In the meantime, physical infrastructure can be put in place by linking peers together over the existing Internet through an overlay network. In time with more participation, wireless coverage between peers will improve to the point where more traffic can flow over direct peer-to-peer wireless connections.

ADVANTAGES

The Hyperboria network relies upon a piece of software called cjdns to connect nodes and route traffic. Cjdns’ project page boasts that it implements “an encrypted IPv6 network using public-key cryptography for address allocation and a distributed hash table for routing.” Essentially, the application creates a tunnel interface on a host computer that acts like any other network interface (such as an ethernet or wifi adapter). This is powerful in that it allows any existing service you might want to expose to the network (an HTTP server, a BitTorrent tracker, etc.) to run as long as that service is already compatible with IPv6. Additionally, cjdns is what is known as a layer 3 protocol, and is agnostic about how the host connects to peers. It doesn’t matter much whether the peer we need to connect to is across the Internet or at a physical access point across the street.

All traffic over Hyperboria is encrypted end-to-end, stopping eavesdroppers operating rogue nodes. Every node on the network receives a unique IPv6 address, which is derived from that node’s public key after the public/private keypair is generated. This eliminates the need for additional encryption configuration and creates an environment with enough IP addresses for substantial network expansion. As the network grows in size, the quality of routing also improves. With more active nodes, the number of potential routes increases to both mitigate failure and optimize the quickest path from sender to receiver.
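As a rough sketch of that derivation (this follows my reading of the cjdns documentation, so treat the specifics as an assumption rather than a reference): the address is taken from a double SHA-512 of the node’s public key, and keys are regenerated until the result lands in fc00::/8, the block cjdns claims.

import hashlib
import os

def derive_address(pubkey):
    # cjdns-style derivation: first 16 bytes of SHA-512(SHA-512(pubkey)).
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
    return digest[:16]

# Keep generating keys until the address starts with 0xfc (fc00::/8).
# os.urandom stands in for a real Curve25519 public key here.
while True:
    pubkey = os.urandom(32)
    addr = derive_address(pubkey)
    if addr[0] == 0xfc:
        break

print(":".join(addr.hex()[i:i + 4] for i in range(0, 32, 4)))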

Additionally, there are no authorities such as the Internet Assigned Numbers Authority (IANA), which on the traditional Internet controls features like address allocation and top-level domains. Censorship can easily be diminished. Suppose someone is operating a node hosting content that neighboring nodes find offensive, so they refuse to provide access. As long as that node operator can find at least one person somewhere on the network to peer with, he can continue making his content accessible to the whole network.

PEERING

One of the main differences between Hyperboria and networks like Tor is how connection to the network is made. Out of the box, running the cjdns client alone will not provide access to anything.

To be able to connect to the network, everyone must find someone to peer with: someone already on Hyperboria. This peer provides the new user with clearnet credentials for his node (an IP address, port number, key, and password), and the new user enters them into his configuration file. If all goes to plan, restarting the client will result in a successful connection to the peer, providing the user access to the network.
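In cjdns, that peering information lives in the router’s configuration file, cjdroute.conf. The excerpt below shows roughly what such an entry looks like; the address, port, password, and public key are placeholders invented for illustration, and the real file contains much more than this block.

"connectTo": {
    "192.0.2.10:34567": {
        "password": "shared-secret-from-your-peer",
        "publicKey": "placeholderplaceholderplaceholderplaceholder0000000.k"
    }
}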

However, having just one connection to Hyperboria doesn’t create a strong link. Consider what would happen if this node was experiencing an outage or was taken offline completely. The user and anyone connecting to him as an uplink into the network would lose access. Because of this, users are encouraged to find multiple peers near them to connect to.

In theory, everyone on the network should be running their node perpetually. If a user only launches cjdns occasionally, other nodes on the network will not be able to take advantage of routing through the user’s node as needed.

With the peering system, there is no central repository of node information. Nobody has to know anyone’s true identity, or see who is behind a particular node. All of the connections are made through user-to-user trust when establishing a new link. If for any reason a node operator were to become abusive to other nodes on the network, there is nothing stopping neighboring nodes from invalidating the credentials of the abuser, essentially kicking them off of the network. If any potential new node operator seemed malicious, other operators have the right to turn him away.

MESHLOCALS

The most important aspect of growing the Hyperboria network is building meshlocals in geographically close communities. Consider how people would join Hyperboria without knowing about their local peers. Maybe someone in New York City would connect to someone in Germany, or someone in San Francisco to someone in Philadelphia. This creates suboptimal links, as the two nodes in each example are geographically distant from each other.

The concept of a meshlocal hopes to combat this problem. Users physically close together are encouraged to form working groups and link their nodes together. Additionally, these users work together to seek new node operators with local outreach to grow the network. Further, meshlocals themselves can coordinate with one another to link together, strengthening regional areas.

Meshlocals can also offer more in-person communication, making it easier to configure wireless infrastructure between local nodes, or organize actions via a meetup. Many meshlocals have gone on to gain active followings in their regions, for example NYC Mesh and Seattle Meshnet.

INSIDE THE NETWORK

After connecting to Hyperboria, a user may be at a loss as to what he is able to do. All of these nodes are connected and working together, but what services are offered by the Hyperboria community, for the Hyperboria community? Unlike the traditional Internet, most services on Hyperboria are run non-commercially as a hobby.

For example, Hyperboria hosts Uppit, a Reddit clone; Social Node, a Twitter-like site; and HypeIRC, an IRC network. Some of these services may additionally be available on the clearnet, making access easy for those without a connection to Hyperboria. Others are Hyperboria-only, made specifically and only for the network.

As the network grows, more services are added, while others fall into disrepair or fade away in favor of new ones. This is all community-coordinated, after all; there is nothing to keep a node operator from revoking access to his node on a whim, for any reason.

FUTURE

As previously mentioned, the ultimate goal of Hyperboria is to offer a replacement for the traditional Internet, built by the users. As it stands now, Hyperboria has established a core following and will see more widespread adoption as meshlocals continue to grow and support users.

Additionally, we see new strides in the development of cjdns with each passing year. As time has gone on, setup and configuration have become simpler for the end user, while compatibility has also improved. The more robust the software becomes, the easier it will be to run and keep running.

We also see the maturation of other related technologies. Wireless routers are becoming less expensive while gaining more memory and processing power, making them suitable for running cjdns directly. We also see the rise of inexpensive, small form factor microcomputers like the Raspberry Pi and BeagleBone Black, allowing anyone to buy a functional, dedicated computer for the price of a small household appliance like an iron or coffee maker. Layer 2 technologies like B.A.T.M.A.N. Advanced are also growing, making easily-configurable wireless mesh networks simple to set up and able to work cooperatively with the layer 3 cjdns.

CONCLUSION

Hyperboria is an interesting exercise in mesh networking with an important end goal and exciting construction appealing to network professionals, computer hobbyists, digital activists, and developers alike.

It’ll be interesting to see how Hyperboria grows over the next few years, and if it is indeed able to offer a robust Internet-alternative for all. Until then, we ourselves can get our hands dirty setting up hardware, developing software, and helping others do the same. With any luck, we will be able to watch it grow. One node at a time.

SOURCES

https://github.com/cjdelisle/cjdns
https://docs.meshwith.me
https://www.reddit.com/r/darknetplan/comments/1vq87d/project_meshnet_for_everyone_a_complete/
https://www.reddit.com/r/dorknet/comments/xry23/this_is_my_first_time_hearing_about_darknet_i/

––
BY MIKE DANK (@FAMICOMAN)

 

irssi-hilighttxt.pl – An irssi Plugin That SMS Messages You On Hilight

A few months ago, after configuring irssi with all the IRC channels I wanted, I ran into the problem of being late to a conversation. Every few days I would check my channels only to see people reaching out to me when I wasn’t around. Sometimes I was able to ping someone to talk; other times the person had left and never came back.

I had been using the faithful hilightwin.pl plugin to put all my hilights in a separate window I could monitor. I figured that with my limited knowledge of Perl, I could rig up something to send me an SMS text message instead of writing the hilight line to a different window in irssi where I might not get to it in time.

Using TextBelt’s free API, I was able to call a curl command from inside Perl to send the message that triggered my hilight to my mobile phone. It isn’t perfect, as there is some garbled text at the front of the message, but I get the message quickly and I can see not only who sent it but also the channel they are in.
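The underlying call is simple. At the time of writing, TextBelt’s free endpoint accepted a phone number and a message via a plain HTTP POST, something like the following (the number is a placeholder; check TextBelt’s documentation before relying on this, as the API may change):

curl -X POST http://textbelt.com/text -d number=5555555555 -d "message=famicoman: hello from #channel"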

Sensible text messages delivered!

I’ve put the code up on GitHub for anyone to use or improve upon. TextBelt’s API is a little limited in how many messages you can receive in a short period of time (as it should be to prevent abuse) and doesn’t support many carriers outside of the USA, so there is definitely room for improvement if another suitable API was found.

Check it out and let me know what you think!

 

(Re)Hacking a Boxee Box

I recently purchased an Amazon Fire TV Stick and love that it allows you to sideload applications like Kodi (I still hate that name, long live XBMC!) for media streaming. I mainly use Samba/SMB shares on my network for my media, with most of my content living on an old WDTV Live Hub. The WDTV Hub works great and is still pretty stable after all of these years (except for a few built-in apps like YouTube; I wish they had kept going with updates), and the Fire TV will gladly chug away, playing any video over the network. However, I needed to stream my media to a third television, and I didn’t want to uproot an existing device and carry it from room to room.

So I needed a third device. I already have a second-generation Roku kicking around, but it doesn’t appear to be able to run anything other than the stock software at this time. I also considered a Raspberry Pi and wifi dongle, but this puts the price up to around $50 (which is more than the Fire TV Stick; I do want something cheap). I looked for a less expensive option among older media streamers and found a lot of information about the Boxee Box, an appliance put out by D-Link in 2010 and discontinued in 2011. I first encountered this box around 2012 when I was tasked to do some reverse engineering on it, but that’s another story. In the time since, a Google TV hacking team figured out they could do simple shell command injection when setting the Box’s host name, which eventually evolved into a group developing Boxee+Hacks, a replacement operating system. Since Boxee+Hacks, other developers have been working on a port of Kodi which you can install onto the Boxee to give you more options and better compatibility over the operating system’s built-in features.

After some eBaying, I was able to get a Boxee for around $15, shipping included (Make sure you get model DSM-380!). The item description said that the box already had Boxee+Hacks installed and upgraded to the latest version, so I figured I was on my way to a quick installation of Kodi and could get up and running in minutes.

When I first booted the Boxee and checked out the Boxee+Hacks settings, I noticed that the device only had version 1.4 installed while the latest available was 1.6. The built-in updater did not work anymore, so the box never reported that there was an available Boxee+Hacks update. Navigating the Boxee+Hacks forums was a little cumbersome, but I eventually found the steps I needed to get updated and launch Kodi. I’ve outlined them below to help any other lost travelers out there.

First, though, go through your Boxee settings and clear any thumbnail caches, local file databases, etc. We need all the free space we can get, and there will be installation errors if there isn’t enough. The installation script we will run later automatically clears the device’s temp directory, but doesn’t remove these cached files.

On the Boxee, go to Settings –> Network –> Servers and enable Windows file sharing.

If you already have Boxee+Hacks, connect the box and your computer to your home network and check the IP address for the box on either the Boxee’s settings page or by checking for a new device on your router’s console.

To make things really easy, telnet to your Boxee on port 2323 using your box’s IP address (mine is 192.168.1.100).

 telnet 192.168.1.100 2323

Once there, we need to download and run the installer script.

curl -L http://tinyurl.com/boxeehacks | sh

If you DO NOT have Boxee+Hacks installed already, never fear. On the same Settings –> Network –> Servers page on your Boxee, locate the Hostname field and enter the following into it.

boxeebox;sh -c 'curl -L tinyurl.com/boxeehacks | sh'

Then, navigate away from the Settings page.

After you execute the command, either through telnet or through the Boxee settings page, the logo on the front of the box should glow red and you should receive on-screen instructions to perform the installation.

Boxee+Hacks installation screen, from http://boxeed.in/forums/viewtopic.php?f=5&t=1216

The installation guide works pretty well. Here, you will be prompted to install Kodi in addition to Boxee+Hacks. At this point I chose NOT to install Kodi. From what I read, once you install it through the script, it can be difficult to remove, and I didn’t want to deal with the possibility of a difficult upgrade.

Instead, I decided to install Kodi on a flash drive. I’ve had a cheap 512MB drive kicking around for close to ten years, and it is perfect for fitting Kodi. To set up the flash drive, I formatted it as FAT32 and labeled the drive as MEDIA. I’m not sure if either of these matters, but this configuration worked for me. I downloaded the latest Kodi release built for Boxee from the boxeebox-xbmc repository (version KODI_14.2-Git-2015-10-20-880982d-hybrid at the time of this writing) and unzipped it onto my flash drive. Make sure that all of the Kodi files are in the root directory of the drive, and not within the KODI_14.2-Git-2015-10-20-880982d-hybrid directory you get from extracting the archive.

It might also help to label the drive

That’s all there is to it, just plug the flash drive into the back of the Boxee and it is good to go. If you leave the flash drive in, whenever you boot the Boxee it will go right into Kodi. Leave it out and it will boot to standard Boxee+Hacks. If you boot into Boxee+Hacks and then want to load up Kodi, just plug in the flash drive and it loads automatically.

This turns a seemingly unassuming and thought-obsolete device into a pretty powerful media center, and is a quick, inexpensive solution for streaming your content to yet another television.

 

rtmbot-archivebotjr – A Slack Bot for Archiving

I’ve been working with the idea of trying to archive more things when I’m on the go. Sometimes I find myself with odd pockets of time like 10 minutes on a train platform or a few minutes leftover at lunch that I tend to spend browsing online. Inevitably, I find something I want to download later and tuck the link away, usually forgetting all about it.

Recently, I’ve been using Slack for some team collaboration projects (Slack is sort of like IRC in a nice pretty package, integrating with helpful online services) and was wondering how I could leverage it for some on-the-go archiving needs.

Slack has released its own bot, python-rtmbot, on GitHub, which you can run on your own server and pull into your Slack site to do bot things. The bot includes a few sample plugins (written in Python), but I went about creating my own to get some remote archiving features and scratch my itch.

The fruit of my labor also lives on GitHub as rtmbot-archivebotjr. This is not to be confused with Archive Team’s ArchiveBot (I just stink at unique names). archivebotjr will sit in your Slack channels waiting for you to give it a command. The most useful are likely !youtube-dl (for downloading YouTube videos in the highest quality), !wget (for downloading things through wget, great for when I find a disk image and don’t want to download it on my phone), and !torsocks-wget (like !wget, but over Tor). I have a few more in there for diagnostics (!ping and !uptime), but you can see the whole list on the GitHub page.
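Usage is as simple as dropping a command into a channel the bot has joined; for example (the URL here is just a placeholder):

!wget https://example.com/some-disk-image.iso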

Right now, the bot is basic and lacks a wide array of features. The possibilities for other tools that could link into this are endless, and I hope to add more periodically. Either way, you can download all sorts of files relatively easily, and the bot seems reasonably stable for an initial release.

If you can fit this bot into your archiving workflow, try it out and let me know how it goes. Can it better fit your needs? Is something broken? Do you want to add a feature?

I want to hear about it!

 

How to Run your Own Independent DNS with Custom TLDs

This article was originally written for and published at N-O-D-E on September 9th, 2015. It has been posted here for safe keeping.

HOW TO RUN YOUR OWN INDEPENDENT DNS WITH CUSTOM TLDS

BACKGROUND

After reading what feels like yet another article about a BitTorrent tracker losing its domain name, I started to think about how trackers could have an easier time keeping a stable domain if they didn’t have to register their domains through conventional methods. Among its many roles, the Internet Corporation for Assigned Names and Numbers (ICANN) controls domain names on the Internet and is well known for its work with the Domain Name System (DNS), specifically the operation of root name servers and governance over top-level domains (TLDs).

If you ever register a domain name, you pick a name you like and head over to an ICANN-approved registrar. Let’s say I want my domain to be “n-o-d-e.net”. I see if I can get a domain with “n-o-d-e” affixed to the TLD “.net”, and after I register it, I’m presented with an easy-to-remember identification string which can be used by anyone in the world to access my website. After I map my server’s IP address to the domain, I wait for the new entry to propagate, meaning that the records for my domain are added or updated in my registrar’s records. When someone wants to visit my website, they type “n-o-d-e.net” into the address bar of their browser and hit the enter key. In the background, their configured name server (usually belonging to their ISP) checks to see who controls records for this domain, and then works its way through the DNS infrastructure to retrieve the IP address matching this domain name and return it back to them.
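You can watch this process from a Linux console with dig’s trace mode, which starts at the root servers and follows each referral down to the name server responsible for the domain:

dig +trace n-o-d-e.net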

It’s a reliable, structured system, but it is still controlled by an organization that has been known to revoke domains from whomever it likes. What if you could resolve domains without going through this central system? What if there was a way to keep sites readily accessible without some sort of governing organization being in control?

I’m not the first to think of answers to these questions. Years ago, there was a project called Dot-P2P which aimed to offer “.p2p” TLDs to peer-to-peer websites as a way of protecting them against losing their domains. While the project had notable backing by Peter Sunde of The Pirate Bay, it eventually stagnated and dissolved into obscurity.

The organization that would have handled the “.p2p” domain registrations, OpenNIC, is still active and working on an incredible project itself. OpenNIC believes that DNS should be neutral, free, protective of your privacy, and devoid of government intervention. OpenNIC also offers new custom TLDs such as “.geek” and “.free” which you won’t find offered through ICANN. Anyone can apply for a domain and anyone can visit one of the domains registered through OpenNIC provided they use an OpenNIC DNS server, which is also backwards-compatible with existing ICANN-controlled TLDs. No need to say goodbye to your favorite .com or .net sites.

If you have the technical know-how to run your own root name server and submit a request to OpenNIC’s democratic body, you too could manage your own TLD within their established infrastructure.

Other projects like NameCoin aim to solve the issue of revoked domains by storing domain data for its flagship “.bit” TLD within its blockchain. The potential use cases for NameCoin take a radical shift from simple domain registration when you consider what developers have already implemented, such as storing assets like user data in the blockchain alongside the domain records.

But what if I wanted to run my own TLD without anyone’s involvement or support, and still be completely free of ICANN control? Just how easy is it to run your own TLD on your own root name server and make it accessible to others around the world?

INTRODUCTION

It turns out that running your own DNS server and offering custom TLDs is not as difficult as it first appears. Before I set out to work on this project, I listed some key points that I wanted to make sure I hit:

– Must be able to run my own top level domain
– Must be able to have the root server be accessible by other machines
– Must be backwards compatible with existing DNS

Essentially, I wanted my own TLD so I wouldn’t conflict with any existing domains, the ability for others to resolve domains using my TLD, and the ability for anyone using my DNS server to get to all the other sites they would normally want to visit (like n-o-d-e.net).

REQUIRED

For this guide, you are going to need a Linux machine (a virtual machine or Raspberry Pi will work fine). My Linux machine is running Debian. Any Linux distribution should be fine for the job, if you use something other than Debian you may have to change certain commands. You will also want a secondary machine to test your DNS server. I am using a laptop running Windows 7.

Knowledge of networking and the Linux command line may aid you, but is not necessarily required.

CHOOSING A DNS PACKAGE

I needed DNS software to run on my Linux machine, and decided upon an old piece of software called BIND. BIND has been under criticism lately because of various vulnerabilities, so make sure that you read up on any issues BIND may be experiencing and understand the precautions as you would with any other software you may want to expose publicly. I am not responsible if you put an insecure piece of software facing the internet and get exploited.

It is important to note that I will be testing everything for this project on my local network. A similar configuration should work perfectly for any internet-facing server.

Other DNS software exists out there, but I chose BIND because it is something of a standard with thousands of servers running it daily in a production environment. Don’t discount other DNS packages! They may be more robust or secure and are definitely something to consider for a similar project.

HOW-TO GUIDE:

Step 1. Initial Configuration

Connect your Linux machine to the network and check the network interface status.

ifconfig

The response to the command should look similar to this:

eth0      Link encap:Ethernet  HWaddr f0:0d:de:ad:be:ef
          inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8209495 errors:0 dropped:386 overruns:0 frame:0
          TX packets:9097071 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2124485459 (1.9 GiB)  TX bytes:1695684733 (1.5 GiB)

Make sure your system is up-to-date before we install anything.

sudo apt-get update
sudo apt-get upgrade

Step 2. Installing & Configuring BIND

Change to the root user and install BIND version 9. Then stop the service.

su -
apt-get install bind9
/etc/init.d/bind9 stop

Now that BIND is installed and not running, let’s create a new zone file for our custom TLD. For this example, I will be using “.node” as my TLD but feel free to use any TLD of your choosing.

cd /etc/bind
nano node.zone

Paste the following into the file and edit any values you see fit, including adding any domains with corresponding IP addresses. For a full explanation of these options, visit http://www.zytrax.com/books/dns/ch6/mydomain.html, which has a nice write-up on the format of a zone file. I did find that I needed to specify an NS record with a corresponding A record, or BIND would not start.

As you see below, a lot of this zone file is boilerplate but I did specify a record for “google” which signifies that “google.node” will point to the IP address “8.8.8.8.”

When you are done editing, save the file with CTRL-X.

;
; BIND data file for TLD ".node"
;
$TTL    604800  ; (1 week)
@       IN      SOA     node. root.node. (
        2015091220      ; serial (timestamp)
        604800          ; refresh (1 week)
        86400           ; retry (1 day)
        2419200         ; expire (28 days)
        604800 )        ; minimum (1 week)
;
@       IN      NS      ns1.node.       ; this is required
;@      IN      A       0.0.0.0         ; unused right now, semicolon comments out the line
google  IN      A       8.8.8.8
ns1     IN      A       0.0.0.0         ; this is also required

Now, we need to edit the default zones configuration file to include our new zone.

nano named.conf.default-zones

At the bottom, paste the following block to add our new zone to the configuration.

zone "node." {
        type master;
        file "/etc/bind/node.zone";
        allow-transfer { any; };
        allow-query { any; };
};

Now find the block in the file similar to the below:

zone "." {
        type hint;
        file "/etc/bind/db.root";
};

Replace this block with the following to make our root server a slave to master root server 75.127.96.89. This is one of OpenNIC’s public DNS servers and by marking it as a master, we can also resolve OpenNIC TLDs as well as any TLDs under control of ICANN.

zone "." in {
        type slave;
        file "/etc/bind/db.root";
        masters { 75.127.96.89; };
        notify no;
};

After saving the file, we want to generate a new root hints file which queries OpenNIC. This can be done with the dig command.

dig . NS @75.127.96.89 > /etc/bind/db.root

Finally, restart BIND.

/etc/init.d/bind9 restart

You should see:

[ ok ] Starting domain name service…: bind9.

Configuration of the server on your Linux machine is now done!

Step 3. Configure Other Machines to Use Your Server

On your Windows machine (on the same local network), visit the Network Connections panel by going to Control Panel -> Network and Internet -> Network Connections.

Right-click on your current network connection and select Properties. On the resulting Network Connection Properties dialog, select Internet Protocol Version 4 (TCP/IPv4) if you are using IPv4 for your local network or Internet Protocol Version 6 (TCP/IPv6). Since I am using IPv4, I will be selecting the former.

Next, click the Properties button. On the resulting Internet Protocol Properties dialog, select the radio button for “Use the following DNS server addresses.” Enter the IP address of your Linux machine in the Preferred DNS server box (192.168.1.12 from my example, but make sure you use the IP address of your Linux machine) and then click the OK button. Back on the Network Connection Properties dialog, click the Close button.

Now, load up a command shell and ping one of our defined domains.

ping google.node

You should see the following:

Pinging google.node [8.8.8.8] with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=15ms TTL=55
Reply from 8.8.8.8: bytes=32 time=17ms TTL=55
Reply from 8.8.8.8: bytes=32 time=16ms TTL=55

Congratulations, you now have a DNS server which will not only resolve your custom TLD but be accessible to other machines.
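You can also query the server directly to confirm which record it returns. On Windows, nslookup accepts the name server to use as a second argument, and the output should look something like this:

nslookup google.node 192.168.1.12

Name:    google.node
Address:  8.8.8.8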

NEXT STEPS

This is just a proof of concept, and could easily be expanded upon for future projects. If you are wondering where to go from here, you could easily move on to make your DNS publicly accessible and expand the offerings. Further, you could construct multiple DNS nodes to act as slaves or links to your root server as a method of distributing the network, making it more reliable and geographically accessible.

While I don’t think many BitTorrent trackers will be quick to adopt a system such as this, it still shows that you can create and resolve custom TLDs which may be useful for constructing alternative networks.

SOURCES

http://wiki.opennicproject.org/Tier2ConfigBindHint
http://timg.ws/2008/07/31/how-to-run-your-own-top-level-domain/
http://www.unixmen.com/setup-dns-server-debian-7-wheezy/

––
BY MIKE DANK (@FAMICOMAN)

 

Automating Site Backups with Amazon S3 and PHP

This article was originally written for and published at TechOats on June 24th, 2015. It has been posted here for safe keeping.

I host quite a few websites. Not a lot, but enough that the thought of manually backing them up at any regular interval fills me with dread. If you’re going to do something more than three times, it is worth the effort of scripting it. A while back I got a free trial of Amazon’s Web Services and decided to give S3 a try. Amazon S3 (standing for Simple Storage Service) allows users to store data and pay only for the space used, as opposed to a flat rate for an arbitrary amount of disk space. S3 is also scalable; you never have to worry about running out of a storage allotment, as you get more space automatically.

S3 also has a web services interface, making it an ideal tool for system administrators who want to set it and forget it in an environment they are already comfortable with. As a Linux user, I found there were already a myriad of tools out there for integrating with S3, and I was able to find one to aid me with my simple backup automation.

First things first, I run my web stack on a CentOS installation. Different Linux distributions may have slightly different utilities (such as package managers), so these instructions may differ on your system. If you see anything along the way that isn’t exactly how you have things set up, take the time and research how to adapt the processes I have outlined.

In Amazon S3, before you back up anything, you need to create a bucket. A bucket is simply a container that you use to store data objects within S3. After logging into the Amazon Web Services Console, you can configure it using the S3 panel and create a new bucket using the button provided. Buckets can have different price points, naming conventions, or physical locations around the world. It is best to read the documentation provided by Amazon to figure out what works best for you, and then create your bucket. For our purposes, any bucket you create is treated the same and should work with the configuration outlined below.

After I created my bucket, I stumbled across a tool called s3cmd which allows me to interface directly with my bucket within S3.

To install s3cmd, it was as easy as bringing up my console and entering:

sudo yum install s3cmd

The application will install, easy as that.

Now, we need a secret key and an access key from AWS. To get this, visit https://console.aws.amazon.com/iam/home#security_credential and click the plus icon next to Access Keys (Access Key ID and Secret Access Key). Now, you can click the button that states Create New Access Key to generate your keys. They should display in a pop-up on the page. Leave this pop-up open for the time being.

Back to your console, we need to edit s3cmd’s configuration file using your text editor of choice, located in your user’s home directory:

nano ~/.s3cfg

The file you are editing (.s3cfg) needs both the access key and the secret key from that pop-up you saw earlier on the AWS site. Edit the lines beginning with:

access_key = XXXXXXXXXXXX
secret_key = XXXXXXXXXXXX

Replacing each string of “XXXXXXXXXXXX” with your respective access and secret keys from earlier. Then, save the file (CTRL+X in nano, if you are using it).
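Before moving on, you can make sure the credentials work by asking s3cmd to list the buckets on your account:

s3cmd ls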

Now we are ready to write the script to do the backups. For the sake of playing with different languages, I chose to write my script in PHP. You could accomplish the same behavior using Python, Bash, Perl, or other languages, though the syntax will differ substantially. First, our script needs a home, so I created a backup directory within my home directory to house the script and any local backup files I create. Then, I changed into that directory and started editing my script using the commands below:

mkdir backup
cd backup/
nano backup.php

Now, we’re going to add some code to our script. I’ll show an example for backing up one site, though you can easily duplicate and modify the code for multiple site backups. Let’s take things a few lines at a time. The first line starts the file. Anything after <?php is recognized as PHP code. The second line sets our time zone. You should use the time zone of your server’s location. It’ll help us in the next few steps.

<?php
date_default_timezone_set('America/New_York');

So now we dump our site’s database by executing the command mysqldump through PHP. If you don’t run MySQL, you’ll have to modify this line to use your database solution. Replace the username, password, and database name on this line as well. This will allow you to successfully back up the database and timestamp it for reference. The following line will archive and compress your database dump using gzip compression. Feel free to use your favorite compression in place of gzip. The last line will delete the original .sql file using PHP’s unlink, since we only need the compressed one. Note that unlink does not expand “~” the way the shell does, so the path is built from the HOME environment variable instead.

exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink("~/backup/sitex.com-".date('Y-m-d').".sql");

The next line will archive and gzip your site’s web directory. Make sure you check the directory path for your site, you need to know where the site lives on your server.

exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");

Now, an optional line. I didn’t want to keep any web directory backups older than three months. Run monthly, this line deletes the web directory archive created exactly three months ago, maintaining a rolling three-month window. You can also duplicate and modify this line to remove the database archives, but mine don’t take up too much space, so I keep them around for easy access.

@unlink("~/backup/sitex.com-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");

Now the fun part. These commands will push the backups of your database and web directory to your S3 bucket. Be sure to replace U62 with your bucket name.

exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");

Finally, end the file, closing that initial <?php tag.

?>

Here it is all put together (in only ten lines!):

<?php
date_default_timezone_set('America/New_York');
exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv("HOME")."/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");
@unlink(getenv("HOME")."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");
?>

Okay, now our script is finalized. You should now save it and run it with the command below in your console to test it out!

php backup.php

Provided you edited all the paths and values properly, your script should push the two files to S3! Give yourself a pat on the back, but don’t celebrate just yet. We don’t want to have to run this on demand every time we want a backup. Luckily we can automate the backup process. Back in your console, run the following command:

crontab -e

This will load up your crontab, allowing you to add jobs to Cron: a time based scheduler. The syntax of Cron commands is out of the scope of this article, but the information is abundant online. You can add the line below to your crontab (pay attention to edit the path of your script) and save it so it will run on the first of every month.

0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php
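Whenever you want to confirm that backups are actually landing in your bucket, you can list its contents from the console (again substituting your own bucket name for U62):

s3cmd ls s3://U62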

… 

 

[WANTED] Let’s Find All The TechTV VHS Tapes

TechTV, the 24-hour technology-oriented cable channel, was a never-ending source of inspiration to me when I was growing up. Back then, TechTV was only available in my area on digital cable, a newfangled platform that people didn’t want to pay the extra money for. By the time I had ditched analog cable, TechTV was long gone, absorbed into G4, with any programming carried over reduced to a shell of its former self.

Back in the heyday (2001-2002 for this example), TechTV decided to release direct-to-video VHS tapes of various one-off programs and specials, designed as something of an informational/instructional reference. I remember these being advertised, and the concept excited me, as it was a way to get TechTV content without needing the cable service. That said, I never got to view a single one of these tapes. Unfortunately, they were priced just a little too high. It was a big financial investment for an hour of content.

Over a decade later, a few of these tapes have made their way online. From my research, I can find that TechTV produced six (maybe more?) VHS tapes, three of which some kind souls have digitized and put on YouTube or The Internet Archive. But that means there are three other tapes out there which I or anyone else will have a hard time getting at unless we want to spend a bunch of money working our way through Amazon resellers. Not something I want to do. To add to this, the digitized videos available online are not the best quality. Again, thanks to the kind souls who went through the trouble, but I would really like to see the maximum quality squeezed out of these bad boys. Over the years, there have been many efforts to find old TechTV recordings from over-the-air broadcasts, but these tapes remain mostly lost.

Let’s try to fix that.

I’ve created a project page to track as much information as I can on these tapes and am looking for any people who can create digital rips themselves or send these tapes my way to rip. I can’t offer any money to buy them, but you’ll be doing a good service getting these videos out there. Once again, I would like to find fresh sources for each of the tapes listed if possible.

The titles I can find existing are as follows; let me know if I missed any:

  • TechTV’s Digital Audio for the Desktop (2001)
  • TechTV’s How to Build Your Own PC (2001)
  • TechTV’s Digital Video for the Desktop (2002)
  • TechTV Solves Your Computer Problems (2002)
  • TechTV’s How to Build a Website (2002)

 

ChannelEM and TechTat are now Archived

Since both TechTat and ChannelEM are essentially no longer updated, I didn’t want to have to worry about maintaining them on the server. I’ve backed up their installations and created static HTML versions of each website, which are now up at the URLs below.

http://techtat.static.famicoman.com/

http://channelem.static.famicoman.com/

The static sites are not perfect, and may have some missing thumbnails, background images, or pages that were created on-the-fly by applications. Regardless, all the pages and their information should be intact. The original domains should work until I let them expire, but for now they will offer up a redirect to the new static sites. Neither TechTat nor ChannelEM ever got the traction I hoped they would, though they proved to be interesting projects when they were active. ChannelEM in particular worked very well when it worked, and I would love to apply the online television station approach developed there to another project down the road.

Until then, I’ll narrow my focus a bit and continue my “Spring Cleaning” as best I can.