I always have several different projects on the go at any one time, and recently this has meant that I have been setting up and tearing down remote virtual machines frequently in order to try out different system configurations and run experiments. As you might imagine, I got well and truly sick of slogging through a growing list of manual operations to get each fresh remote system installed and set up before I could even start the real work. So I decided to look into the technologies available for remote automated system management.
I started by simply gathering my hand-written instructions into an executable script. For some parts of the setup (running apt-get update is hardly rocket science, for example) this was easy, but I quickly found that many of the commands I would type manually required user input, or complex edits to config files, or might only be needed in some situations. I toyed with the idea of pushing forward with my scripting approach and trying to program all these special operations, but decided that this would probably take a lot longer than the projects I was supposed to be working on.
Looking a bit further I found a cluster of remote administration tools, including Chef, Puppet, SaltStack, CFEngine, Ansible and so on. I spent several days trying to get things working with some of these, following all sorts of different tutorials. In pretty much every case something either did not work, or finding out how to accomplish what I wanted took far too long. It didn’t help that typical tutorials for these systems make all sorts of assumptions about the system being used to run the tool – particular operating system versions with particular software installed, and so on.
Eventually I did get something working using Chef, but only by abandoning the official Chef tutorials (which seem more concerned with getting you to register with the company than actually helping with real tasks) and trying other things until something worked. The most useful tutorial turned out to be Daniel O’Connor’s blog post on Managing Raspberry Pi with Chef & Bitbucket. This gave me what I needed: how to get started with an absolute minimum of dependencies.
I had had so many problems and aborted attempts that I could no longer trust the state of my development machine, so I went through the tedious process of setting up a completely clean virtual machine to use. In this case it was a VirtualBox machine, installed from a CD image of Linux Mint 15. Once I had worked through all the installation screens and could log in to the machine, I was able to install a few things (ruby, bundler, git and so on) and follow Daniel’s steps to get the remote machine up and running. Flushed with success, I checked my work into my git repo and went home.
The next day I was somewhat disappointed. I’m guessing that I did not shut down the virtual machine properly, as it would not start up and I had to go back to an earlier snapshot. Luckily I was able to get things back in operation relatively quickly, but it did make me think. Perhaps setting up a development box in a virtual machine on a desktop computer is both overkill for this task, and not very portable. Inspired by the Xtravirt vPi, I decided instead to set up a Raspberry Pi image with the tools I need, so I can manage my remote machines from anywhere with an ethernet cable and some USB power, and without any worries about incompatibilities or missing software.
The installation process is as follows:
Start with a fresh Raspbian image, and write it to an SD card as usual. When it is written, stick it in a Raspberry Pi, plug in an ethernet cable to a nearby router, and apply some power. Don’t bother with a screen, keyboard and so on: this is designed to be a “headless” device controlled from another computer. You will need something which can make an ssh connection. If you don’t already have one, then I can highly recommend MobaXterm. Depending on how your local network is configured you can probably ssh to something like email@example.com, but you may have to use the IP address instead. Once you are connected, the real stuff begins.
First, make sure we have the basic software needed:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install ruby1.9.3 ruby1.9.1-dev rubygems bundler rake
Now we can create a skeleton installation. Note that although this is based on Daniel’s instructions, it’s not quite the same, as in this case we are using the Pi to manage a remote system, rather than using a local machine to manage the Pi.
mkdir skeleton
cd skeleton
# Set up bundler to manage Ruby dependencies
bundle init
echo 'gem "knife-solo"' >> Gemfile
echo 'gem "berkshelf"' >> Gemfile
bundle
# Initialise the Chef tools "Knife" and "Berkshelf"
knife solo init .
berks init
(select "y" to overwrite ".gitignore"; select "n" to overwrite "Gemfile")
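For orientation, the skeleton directory should now look roughly like this – I am sketching it from memory, and the exact contents may vary with your knife-solo and berkshelf versions:

```
skeleton/
├── Berksfile        # Berkshelf's list of cookbook dependencies
├── Gemfile          # the two gems we added above
├── cookbooks/       # cookbooks fetched from elsewhere land here
├── data_bags/
├── environments/
├── nodes/           # per-host configuration, read by "knife solo cook"
├── roles/
└── site-cookbooks/  # your own hand-written cookbooks
```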
At this point we have an empty (but hopefully working) Chef installation. Using it requires a few extra steps, but I have found that putting them into a simple script makes things a lot easier to remember. I saved the following as rebuild.sh:
#!/bin/bash
# usage: rebuild.sh <ip-address>
dest=$1
# remove any old key for that ip
ssh-keygen -f "$HOME/.ssh/known_hosts" -R $dest
# copy my credentials to allow smooth login
ssh-copy-id root@$dest   # and enter root password, just this once
# install chef on target
knife solo prepare root@$dest
# run chef to install the configured recipes from nodes/$dest.json
bundle exec knife solo cook -VV root@$dest
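The final cook step assumes a node definition exists for the target: knife solo cook reads nodes/<host>.json, whose run_list tells Chef which recipes to apply. A minimal example, saved as something like nodes/192.0.2.10.json – the apt cookbook here is just a placeholder, not something the setup above has installed; use whatever cookbooks you actually need:

```json
{
  "run_list": [
    "recipe[apt]"
  ]
}
```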
Running this script (with the IP address of the server to be configured as a parameter) should connect to the remote system, install ssh credentials, then install the server end of Chef. You will probably need to enter the server’s root password the first time you do this.
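One rough edge worth mentioning: as written, the script will happily run with no argument at all, scrubbing known_hosts with an empty string. A small sanity check at the top guards against that. This is my own addition, not part of the script above, and the IPv4 pattern is deliberately loose:

```shell
#!/bin/bash
# Hypothetical guard for the top of rebuild.sh: make sure the argument
# at least looks like an IPv4 address before touching ssh configuration.
valid_ip() {
  # loose check: four dot-separated groups of one to three digits
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

check_args() {
  if ! valid_ip "${1:-}"; then
    echo "usage: rebuild.sh <ip-address>" >&2
    return 1
  fi
}
```

With check_args "$1" called before anything else, a bad invocation fails fast with a usage message instead of half-running.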
That’s about it for this post. Next time, I’ll explore how to configure the Chef installation on the Raspberry Pi so that it will deploy the set of services you need to the remote machine.