Automatic Server Backups using SpiderOak

Posted by Jesse on July 29, 2013

I must admit, I'm a bit paranoid when it comes to data loss.  When I was a kid, my hard drive crashed and I lost everything I ever had up until that point.  I've been a Mozy subscriber for years on all of our desktops and laptops.  On the server side of things, however, I've been completely lacking -- until now.

For years, I've been crossing my fingers for an easy Mozy solution, but that still hasn't panned out.  I gave Dropbox and its clones a try, which tend to work, but I hate having all my backup data synced across all computers.  I'm also not a fan of Dropbox's lack of encryption.

I decided I'd give SpiderOak a try with the sole purpose of backing up my Debian servers, and wow, I'm impressed and I'm going to stick with it.  

Here's how I did it:

First, I ignored the install instructions on the SpiderOak website for Debian.  I found their deb package didn't do everything it was supposed to (like adding the repo to the sources.list file).

Instead, I added the line below myself to /etc/apt/sources.list:

deb stable non-free

I also had to trust the repository's signing key:

gpg --keyserver --recv-key 5D654504F1A41D5E
gpg -a --export 5D654504F1A41D5E | sudo apt-key add -
Next, install SpiderOak as you would any Debian package:

apt-get update
apt-get install spideroak

And run the setup wizard as root (since backups will be run as root):

SpiderOak --setup=-
rm -rf SpiderOak\ Hive

Notice that I removed the SpiderOak Hive directory.  Hive is SpiderOak's implementation of multi-computer syncing, but I don't really care about that.
After that, I put the following script (modified with the proper directories, of course) into /etc/cron.daily:

#!/bin/bash

# Where to log backup activity (adjust to taste)
log_file=/var/log/spideroak-backup.log

# Files and directories to back up
backups=(
  # directories
  #     to
  #   backup
)

# Cleanup
SpiderOak --empty-garbage-bin

echo "Starting at $(date)" >> "$log_file"

for backup_loc in "${backups[@]}"; do
  if [ -d "$backup_loc" ]; then
    echo "Backing up directory $backup_loc ..." >> "$log_file"
    SpiderOak --backup="$backup_loc" >> "$log_file"
  elif [ -f "$backup_loc" ]; then
    echo "Backing up file $backup_loc ..." >> "$log_file"
    SpiderOak --backup="$backup_loc" >> "$log_file"
  else
    echo "Error! $backup_loc could not be found." >> "$log_file"
  fi
done

echo "Done" >> "$log_file"
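One gotcha worth checking here (a general Debian behavior, not something from my setup specifically): scripts in /etc/cron.daily are executed by run-parts, which silently skips files whose names contain a dot, and the script must be executable.  A quick way to confirm a job will actually fire, sketched against a temporary directory:

```shell
# Sketch: verify run-parts would pick up the script. run-parts (used by
# Debian's cron for /etc/cron.daily) skips names containing dots, so
# "spideroak-backup" works while "spideroak-backup.sh" would be ignored.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho backup ran\n' > "$tmp/spideroak-backup"
chmod 755 "$tmp/spideroak-backup"
run-parts --test "$tmp"   # lists the jobs run-parts would execute
```

On a real install, `run-parts --test /etc/cron.daily` shows every job that will run.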

That's it!  Now I've got daily automated backups of my servers.


Software in the Public Domain

Posted by Jesse on February 09, 2013

I've been working on a project and have decided to make an effort to release some common utilities for the world to use.  When I've done this in the past, I picked an Open Source license almost randomly and just went with it.  This time, I wanted to do it right.

My goal is for the license (or lack of license) to allow my code to be used anywhere.  I don't care if someone copies and pastes it into their own file without crediting me, nor do I care if it gets packaged in some proprietary program never to come out.  It would also be nice to have some protection against liability.

I primarily narrowed it down to three options:

  1. Expat (MIT)

  2. CC0 (Public Domain)

  3. WTFPL

Of the three listed, Expat, more commonly known as the MIT license, is by far the most common.  It's the only one on the list that is approved by Codehaus for hosting, and most developers will see a project licensed under it and not think twice about using it.

WTFPL is an interesting one.  It's short, sweet, and to the point.  I'm not a lawyer, but it sounds like the substance is similar to the Expat license without the liability protection.  I do, however, see it as more restrictive in use, even if it is not a legal restriction, due to its wording.  Personally, I know of a few people who would choose not to use code licensed under it just due to the language.

That leaves CC0.  The way CC0 works is that it attempts to dedicate the work to the public domain.  If the dedication fails (which can happen in various jurisdictions due to Public Domain laws), it in turn becomes a license for unrestricted use.  It also provides a liability waiver, and has a very easy to read summary.

Of the three I found, CC0 offers the best match to what I'm looking for.  So what's the problem?  After scouring the internet for advice, I came across an Oggcast talking about releasing code in the public domain and the problems associated with doing so, mostly stemming from roadblocks with the public domain.  An example provided was a project under the Apache license turning down CC0 contributions because the project team wasn't sure if the licenses were compatible (really!?).

The other issue is liability.  The Oggcast mentioned that there is no legal precedent for a liability waiver on something in the public domain.  Since the work is in the public domain, there is no requirement for associated documentation to be included, and therefore the liability waiver could be lost along the way.

There are a few other concerns that come up with project hosting as well.  Hosting code on Google Code or SourceForge requires an OSI-approved license, of which the MIT license is the only one of the options presented; neither site has support for projects in the Public Domain.  Codehaus (mentioned above) specifically prohibits Public Domain projects.

For the reasons above, I feel I am forced to release under the Expat/MIT license, at least for the time being. My personal requirements align with CC0 the best, but the legal and social issues surrounding the Public Domain are just too numerous.



Posted by Jesse on December 19, 2012

I just realized my project links aren't working.  I'm not sure if this is an artifact of a CMS update or my recent migration to HostGator, but I will try to get to the bottom of it.  In the meantime, check out


Qwest (CenturyLink) Q1000 Hidden Settings

Posted by Jesse on November 27, 2011

I recently upgraded my DSL service at home, and (un)fortunately, it's faster than my old Linksys router could handle.  I really miss the Tomato firmware I had flashed on that, but I have managed to come across a hidden administration page on the new Q1000 that covers a large majority of the things I thought I'd lost.

Check out the (or whatever your modem's IP is).

Most important to me are the DHCP reservations and Dynamic DNS.  There's also a QoS page from here, but I haven't spent any time playing with it.


Projects Updated

Posted by Jesse on November 27, 2011

The project page has been updated so projects now point to my hosting on  Additionally, there is a new link for "Jenkins - CI" - This is my (currently empty) continuous integration server.


Project Pages

Posted by Jesse on March 14, 2011

The project pages for my Trac hosted projects are broken.  I migrated to a different issue tracking system and will be updating their corresponding pages over the next few weeks.



Posted by Jesse on January 22, 2011

Welcome to the new design, now hosted with Concrete5 CMS.  Everything has been migrated over from Joomla, so enjoy!


New Domain

Posted by Jesse on January 14, 2011

I got a new domain for my home server -


Server Watchdog

Posted by Jesse on December 14, 2010

This is a follow-up to this post.  It seems there's a more graceful option to reboot the server when there are problems.

These instructions are for Debian.
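The full instructions are behind the cut, but as a sketch of the usual approach on Debian (my assumption, not necessarily the exact steps from the post), the watchdog daemon can reboot the machine when a network interface or ping target stops responding:

```shell
# Hypothetical sketch: install the watchdog daemon and have it reboot
# when the network dies. The interface name and ping target below are
# assumptions; adjust them for your own network.
apt-get install watchdog
# Then enable checks in /etc/watchdog.conf, e.g.:
#   interface = eth0
#   ping      = 192.168.0.1
```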


Bash Restart on Network Failure

Posted by Jesse on December 13, 2010


Here's a bash script I'm using on my server to restart it in the event the network goes down.  I set this up because, during a large file transfer through Samba, the server's network stopped responding.  To be on the safe side, I'm doing a full restart.
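The script itself is behind the cut, but the shape of such a check can be sketched like this (the ping target and the use of shutdown are my assumptions, not necessarily what the original script does):

```shell
# Hypothetical network watchdog sketch: ping a target; if it fails,
# reboot. Swap 127.0.0.1 for your real gateway and uncomment the
# shutdown line before using this for real.
network_ok() {
  ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

if network_ok 127.0.0.1; then
  echo "network up"
else
  echo "network down; would reboot here"
  # /sbin/shutdown -r now "Network unreachable, rebooting"
fi
```

Run from cron every few minutes, a check like this restarts the box shortly after the network stops answering.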
