I Wrote An App

I’ve been putting off this post for a while. Not for any reason in particular; I just like to have things arranged in a certain way before I push them out to people.

This is analogous to the mobile app project this post refers to as a whole. In 2013, building on a friend’s idea, I created a mobile application that allows a user to send a random insulting text to someone on their contacts list. It was for fun of course, and we called it BitchyTexts. It was (and still is) Android-only, and was developed over the course of a few weeks in the little time I had between classes. I distributed it to my friends, who distributed it to their friends, and the results were mostly positive. It was crude and thrown together, but it worked and did its job well.

The next logical step of course was a Play Store release. However, I needed to clean my code up, get things under version control, and brave the submission process. I worked a little here and there, but ultimately getting the app out the door fell to the bottom of my priority list. In late 2015, two years after I decided I wanted to do a Play Store release, I picked development back up again and started knocking out little pieces here and there to reach my desired outcome.

This became one of my 2016 goals, and I was champing at the bit to release something. There was no use sitting on it; store releases are an iterative process, and I could always improve here and there after the application was live.

So, I submitted it. It was approved, and it’s out there for anyone to download and use. There are changes I want to make, and there are other things I want to work on for it (an improved website, back-end services, etc.), but those can come at any time. There is a lot of planning to do, but nothing too crazy.

BitchyTexts in action!

Check it out here: https://play.google.com/store/apps/details?id=com.bt.bitchytexts

Let me know what you think!

 

irssi-hilighttxt.pl – An irssi Plugin That Sends You an SMS on Hilight

A few months ago, after configuring irssi with all the IRC channels I wanted, I ran into the problem of being late to conversations. Every few days I would check my channels only to see people had reached out to me when I wasn’t around. Sometimes I was able to ping someone to talk; other times the person left and never came back.

I had been using the faithful hilightwin.pl plugin to put all my hilights in a separate window I could monitor. I figured that with my limited knowledge of Perl I could rig up something to send me an SMS text message instead of writing the hilight line to a different window in irssi, where I may not get to it in time.

Using TextBelt’s free API, I was able to call a curl command from inside Perl to send the message that triggered my hilight to my mobile phone. It isn’t perfect, as there is some garbled text at the front of the message, but I get the message quickly and I can see not only who sent it but also the channel they are in.
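
The core of it is just string formatting plus an HTTP POST. Here’s a rough sketch of the idea in Python rather than Perl (the plugin itself is Perl); the endpoint and field names reflect TextBelt’s old free API and may have changed, so treat them as assumptions:

```python
import urllib.parse
import urllib.request

def format_hilight(nick, channel, message):
    """Build the SMS body so the text shows who pinged you and in which channel."""
    return "<{}/{}> {}".format(nick, channel, message)

def send_sms(number, body):
    """POST the message to TextBelt's free endpoint.
    NOTE: endpoint and parameter names are from the old free API and may differ today."""
    data = urllib.parse.urlencode({"number": number, "message": body}).encode()
    return urllib.request.urlopen("http://textbelt.com/text", data=data)

# Example: a hilight from "alice" in "#irssi" (no network call here)
body = format_hilight("alice", "#irssi", "famicoman: you around?")
```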

Sensible text messages delivered!

I’ve put the code up on GitHub for anyone to use or improve upon. TextBelt’s API is a little limited in how many messages you can receive in a short period of time (as it should be, to prevent abuse) and doesn’t support many carriers outside of the USA, so there is definitely room for improvement if another suitable API is found.

Check it out and let me know what you think!

 

rtmbot-archivebotjr – A Slack Bot for Archiving

I’ve been toying with the idea of archiving more things when I’m on the go. Sometimes I find myself with odd pockets of time, like 10 minutes on a train platform or a few minutes left over at lunch, that I tend to spend browsing online. Inevitably, I find something I want to download later and tuck the link away, usually forgetting all about it.

Recently, I’ve been using Slack for some team collaboration projects (Slack is sort of like IRC in a nice pretty package, integrating with helpful online services) and was wondering how I could leverage it for some on-the-go archiving needs.

Slack has released their own bot, python-rtmbot, on GitHub, which you can run on your own server and pull into your Slack site to do bot things. The bot includes a few sample plugins (written in Python), but I went about creating my own to get some remote archiving features and scratch my itch.

The fruit of my labor also lives on GitHub as rtmbot-archivebotjr. This is not to be confused with Archive Team’s ArchiveBot (I just stink at unique names). archivebotjr will sit in your Slack channels waiting for you to give it a command. The most useful are likely !youtube-dl (for downloading YouTube videos in the highest quality), !wget (for downloading things through wget, great when I find a disk image and don’t want to download it on my phone), and !torsocks-wget (like !wget, but over Tor). I have a few more in there for diagnostics (!ping and !uptime), but you can see the whole list on the GitHub page.
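
At its heart, each command is just a chat prefix mapped to a shell invocation. A simplified Python sketch of that dispatch idea (this is not the plugin’s actual code or python-rtmbot’s plugin interface, and the download directory and flags here are hypothetical):

```python
import shlex

# Map of chat commands to the argv they run.
# The archive directory and flags are illustrative, not the plugin's exact ones.
COMMANDS = {
    "!youtube-dl": ["youtube-dl", "-o", "/srv/archive/%(title)s.%(ext)s"],
    "!wget": ["wget", "-P", "/srv/archive"],
    "!torsocks-wget": ["torsocks", "wget", "-P", "/srv/archive"],
}

def dispatch(text):
    """Turn a chat message like '!wget http://example.com/disk.img' into an
    argv list ready for subprocess, or None if it isn't a known command."""
    parts = shlex.split(text)
    if not parts or parts[0] not in COMMANDS:
        return None
    return COMMANDS[parts[0]] + parts[1:]
```

The nice part of keeping a whitelist of commands like this is that the bot never passes arbitrary chat text straight to a shell.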


Right now, the bot is basic and lacks a wide array of features. The possibilities for other tools that can link into this are endless, and I hope to link more in periodically. Either way, you can download all sorts of files with little effort, and the bot seems reasonably stable for an initial release.

If you can fit this bot into your archiving workflow, try it out and let me know how it goes. Can it better fit your needs? Is something broken? Do you want to add a feature?

I want to hear about it!

 

Automating Site Backups with Amazon S3 and PHP

This article was originally written for and published at TechOats on June 24th, 2015. It has been posted here for safe keeping.


I host quite a few websites. Not a lot, but enough that the thought of manually backing them up at any regular interval fills me with dread. If you’re going to do something more than three times, it is worth the effort of scripting it. A while back I got a free trial of Amazon’s Web Services and decided to give S3 a try. Amazon S3 (standing for Simple Storage Service) allows users to store data and pay only for the space used, as opposed to a flat rate for an arbitrary amount of disk space. S3 is also scalable; you never have to worry about running out of a storage allotment, since you get more space automatically.

S3 also has a web services interface, making it an ideal tool for system administrators who want to set it and forget it in an environment they are already comfortable with. As a Linux user, I found a myriad of tools out there already for integrating with S3, and I was able to find one to aid me with my simple backup automation.

First things first, I run my web stack on a CentOS installation. Different Linux distributions may have slightly different utilities (such as package managers), so these instructions may differ on your system. If you see anything along the way that isn’t exactly how you have things set up, take the time and research how to adapt the processes I have outlined.

In Amazon S3, before you back up anything, you need to create a bucket. A bucket is simply a container that you use to store data objects within S3. After logging into the Amazon Web Services Console, you can configure it using the S3 panel and create a new bucket using the button provided. Buckets can have different price points, naming conventions, or physical locations around the world. It is best to read the documentation provided through Amazon to figure out what works best for you, and then create your bucket. For our purposes, any bucket is treated the same, so whatever configuration you choose shouldn’t cause any problems.

After I created my bucket, I stumbled across a tool called s3cmd which allows me to interface directly with my bucket within S3.

To install s3cmd, it was as easy as bringing up my console and entering:

sudo yum install s3cmd

The application will install, easy as that.

Now, we need a secret key and an access key from AWS. To get this, visit https://console.aws.amazon.com/iam/home#security_credential and click the plus icon next to Access Keys (Access Key ID and Secret Access Key). Now, you can click the button that states Create New Access Key to generate your keys. They should display in a pop-up on the page. Leave this pop-up open for the time being.

Back in your console, we need to edit s3cmd’s configuration file, located in your user’s home directory, using your text editor of choice:

nano ~/.s3cfg

The file you are editing (.s3cfg) needs both the access key and the secret key from that pop-up you saw earlier on the AWS site. Edit the lines beginning with:

access_key = XXXXXXXXXXXX
secret_key = XXXXXXXXXXXX

Replace each string of “XXXXXXXXXXXX” with your respective access and secret keys from earlier, then save the file (CTRL+X in nano, if you are using it).

Now we are ready to write the script to do the backups. For the sake of playing with different languages, I chose to write my script in PHP. You could accomplish the same behavior using Python, Bash, Perl, or other languages, though the syntax will differ substantially. First, our script needs a home, so I created a backup directory to house the script and any local backup files I create within my home directory. Then, I changed into that directory and started editing my script using the commands below:

mkdir backup
cd backup/
nano backup.php

Now, we’re going to add some code to our script. I’ll show an example for backing up one site, though you can easily duplicate and modify the code for multiple site backups. Let’s take things a few lines at a time. The first line starts the file. Anything after <?php is recognized as PHP code. The second line sets our time zone. You should use the time zone of your server’s location. It’ll help us in the next few steps.

<?php
date_default_timezone_set('America/New_York');

So now we dump our site’s database by executing the command mysqldump through PHP. If you don’t run MySQL, you’ll have to modify this line to use your database solution. Replace the username, password, and database name on this line as well. This will allow you to successfully back up the database and timestamp it for reference. The following line will archive and compress your database dump using gzip compression. Feel free to use your favorite compression in place of gzip. The last line will delete the original .sql file using PHP’s unlink, since we only need the compressed one. Note that unlink doesn’t go through a shell, so ~ isn’t expanded; we build the path with getenv('HOME') instead.

exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv('HOME')."/backup/sitex.com-".date('Y-m-d').".sql");

The next line will archive and gzip your site’s web directory. Make sure you check the directory path for your site; you need to know where the site lives on your server.

exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");

Now, an optional line. I didn’t want to keep any web directory backups older than three months, so this deletes the web directory backup made three months ago (run on a schedule, that keeps a rolling three months of archives). You can also duplicate and modify this line to remove the database archives, but mine don’t take up too much space, so I keep them around for easy access. Note that the file name needs the -dir tag to match the archive we created above, and again we build the path with getenv('HOME') since unlink won’t expand ~.

@unlink(getenv('HOME')."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");

Now the fun part. These commands will push the backups of your database and web directory to your S3 bucket. Be sure to replace U62 with your bucket name.

exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");

Finally, end the file, closing that initial <?php tag.

?>

Here it is all put together (in only ten lines!):

<?php
date_default_timezone_set('America/New_York');
exec("mysqldump -uUSERNAMEHERE -pPASSWORDHERE DATABASENAMEHERE > ~/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz ~/backup/sitex.com-".date('Y-m-d').".sql");
unlink(getenv('HOME')."/backup/sitex.com-".date('Y-m-d').".sql");
exec("tar -zcvf ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz /var/www/public_html/sitex.com");
@unlink(getenv('HOME')."/backup/sitex.com-dir-".date('Y-m-d', strtotime("now -3 month")).".tar.gz");
exec("s3cmd -v put ~/backup/sitex.com-db-".date('Y-m-d').".tar.gz s3://U62");
exec("s3cmd -v put ~/backup/sitex.com-dir-".date('Y-m-d').".tar.gz s3://U62");
?>

Okay, now our script is finalized. You should now save it and run it with the command below in your console to test it out!

php backup.php

Provided you edited all the paths and values properly, your script should push the two files to S3! Give yourself a pat on the back, but don’t celebrate just yet. We don’t want to have to run this on demand every time we want a backup. Luckily, we can automate the backup process. Back in your console, run the following command:

crontab -e

This will load up your crontab, allowing you to add jobs to cron, a time-based scheduler. The full syntax of cron commands is out of the scope of this article, but in short, the five leading fields specify the minute, hour, day of month, month, and day of week a job runs. You can add the line below to your crontab (be sure to edit the path to your script) and save it so the backup runs at midnight on the first of every month.

0 0 1 * * /usr/bin/php /home/famicoman/backup/backup.php

… 

 

Programs from High School

I’ve taken some time over the past two days to dig through some of my old flash drives for programs I wrote in high school. I found most of my flash drives, and while a few had been re-purposed over the years, I ended up finding a lot of the content I created over the course of my pre-college schooling.

I didn’t find everything. When I started taking electives in high school, I first enrolled in a web design class. This was basic Photoshop, HTML, and Dreamweaver development. I can’t really find any of this stuff anymore. I also took a CAD class where I used AutoCAD, Inventor, and Revit. These files look to be gone as well. More notably, I took a series of programming-heavy classes: Introduction to Programming (C++), Advanced Programming (C++), AP Computer Science (Java), Advanced Programming (Java), and Introduction to Video Game Design (Games Factory and DarkBasic).

Even when I took these classes a little over five years ago, the curriculum was outdated. DarkBasic was never really a mainstream programming language, and Games Factory was more event mapping than it ever was programming. We learned C++ using a 1996 version of Visual Studio and only learned the concepts of object-oriented design later in the advanced class. The Java curriculum was brand new to the school when I took it, but still felt outdated.

That said, I learned a hell of a lot here that would lay a foundation for my future education and career path.

I took the time to copy these files from my flash drives and push them to a GitHub group I created for people who took these same classes as I did. The hope here is that the people I had class with will ultimately share and submit their creations, and we can browse these early samples of our programming. Unfortunately, I couldn’t find any of my Java programs yet, but I might be able to come up with more places to look.

Just digging through my source code, I’ve already found a lot of weird errors and places where dozens of lines of code could be replaced with one or two.

It’s interesting to look back on these programs, and maybe they’ll give you a laugh if you care to check them out. I know they gave me one.