There is currently no way to build a server on one account from an image on another account, at least not directly. This article will show you how to do it manually using a Cloud Files image.
This only covers Linux servers; Windows users get no love! (Once I’ve had a chance to test on Windows servers, I’ll update this article as to whether it’s possible and what steps are required.)
Getting your API authentication token
Note that we’ll be using the wget utility in this article as it is installed on most distributions out of the box. All the cool kids use curl, but we’re just not cool enough.
For this step, you’ll need the API key from your Rackspace control panel, which can be found under Your Account->API Access. Note: if you are building to a different account, use the username and API key from the account that has the server images, not from the account you want to build the new server on.
Run the following command, substituting in your username and API key:
wget -O- -q --no-check-certificate -S \
  --header "X-Auth-User: USERNAME" \
  --header "X-Auth-Key: APIKEY" \
  https://auth.api.rackspacecloud.com/v1.0
This should output some headers. The three you will need for the next steps are X-Auth-Token, X-Storage-Url, and X-Server-Management-Url.
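For reference, the relevant part of the output will look something like this (the values here are made up for illustration; yours will differ):

HTTP/1.1 204 No Content
X-Auth-Token: aaaabbbbcccc111122223333
X-Storage-Url: https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb-cccc
X-Server-Management-Url: https://servers.api.rackspacecloud.com/v1.0/123456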
Finding your backup image
All of your images are stored in a container called “cloudservers”. Finding the image you want to build from is relatively easy: simply list the objects in the container and look for the one that matches the name of the image you saved. You’ll need to substitute the X-Auth-Token and X-Storage-Url values in the following command:
wget -O- -q --no-check-certificate \
  --header "X-Auth-Token: TOKEN" \
  https://STORAGEURL/cloudservers
This can get a bit confusing if you have images with the same name, or scheduled images that are simply named “daily” and “weekly”. The second part of the image name is always the date it was created on, and the fourth part is the Cloud Server ID.
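For illustration, a listing might contain object names like these (made up, but following the pattern just described: name, creation date, image ID, Cloud Server ID):

daily_20101010_112405_654321.yml
daily_20101010_112405_654321.tar.gz.0
weekly_20101003_109876_654321.yml
weekly_20101003_109876_654321.tar.gz.0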
What if you have two images prefixed with “daily” and created on the same date? You can do a listing of your Cloud Servers to determine which one has the ID matching the image. This time you need to substitute the X-Auth-Token and the X-Server-Management-Url.
wget -O- -q --no-check-certificate \
  --header "X-Auth-Token: TOKEN" \
  https://SERVERURL/servers | \
awk -F ':\\[' '{gsub("},{", "}\n{", $2); \
  gsub(/[{}"\]]/, "", $2); \
  gsub(",", ", ", $2); print $2}'
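That should print one line per server, something like (again, made-up values):

id:654321, name:web01
id:654322, name:db01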
If this still isn’t enough to distinguish the images, the third part of the image name is the image ID, which is sequential, so larger image IDs correspond to images created later in time.
Sidebar: Cloud Servers saves images as a set of objects on Cloud Files. There are at least two files: a .yml (said: yamel, like camel) and a .tar.gz.0. The .yml is a Ruby serialization format that holds metadata about the image. The .tar.gz.n files are the actual image data, split on 5GB boundaries. The whole reason for this article is that you cannot simply create a new server on one account, create an image, and then overwrite the image with one from another account: both the .yml and .tar.gz.n files are checksummed for each image, and if the contents don’t match the checksum, the build will fail (trust me, I’ve tried).
Preparing to build a new server from the image
We’re going to cheat a little bit here and assume the new server has enough disk space to hold both the downloaded image from Cloud Files and the uncompressed data from the image. There are two reasons for this assumption:
- Cloud Servers gives you a decent amount of disk space for each server size, and most people don’t use more than half of it.
- I’m lazy, and unless someone requests it, I don’t want to write the guide on how to spin up a new server simply for extracting the image and then rsyncing it over to the target server.
But how do you determine how large the image is? Shell hackery of course! You could just look in your control panel under Hosting->Cloud Files, but we’re Linux users, darn it! As usual, substitute the needed values from the headers.
files=`wget -O- -q --no-check-certificate \
  --header "X-Auth-Token: TOKEN" \
  https://STORAGEURL/cloudservers?prefix=UNIQUEPREFIX | \
  grep tar.gz`
Aha! Throw you for a loop? What’s that “prefix” query string? You’ll want to substitute the longest unique prefix for the image you’re after. For example, “prefix=archdaily20101010T062504_20101010_112405”.
Now that you have a list of the data files for the image you just need to loop through and do a HEAD request on each object, tallying the results from the Content-Length headers. Again, substitute the needed values from the headers.
for file in $files ; do wget -O- -q -S --spider \
  --no-check-certificate --header "X-Auth-Token: TOKEN" \
  https://STORAGEURL/cloudservers/$file 2>&1 | \
  awk '/Content-Length/ {print $2}' ; done | \
awk '{total+=$1} END {print total/2**30}'
This will give you the total size, in gigabytes, of the image data. You’ll need a server with approximately twice this much free space to download and unpack the image.
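Once you have a target server, df will show whether it actually has that headroom (the image lands on the root filesystem, so that’s the one to watch):

# show free space on the root filesystem in human-readable units
df -h /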
Building a new server from the image
Now we’re down to brass tacks. Go ahead and build a new server of the same flavor (distribution) and with enough disk space to hold and extract the image. You can do this using the API directly, but that’s for another article; for now, just use the control panel on the target account to create the server. Once you’ve created the server, log in as the root user via ssh or PuTTY. The rest of this article consists of commands that must be run on the target server.
First you need to create an archive backup of the /etc directory. This is crucial. If you skip this step, things will break.
cp -a /etc /etc.bak
Now you need to download the image files. Again, substitute the needed values from the headers acquired earlier.
files=`wget -O- -q --no-check-certificate \
  --header "X-Auth-Token: TOKEN" \
  https://STORAGEURL/cloudservers?prefix=UNIQUEPREFIX | \
  grep tar.gz`
for file in $files ; do wget --no-check-certificate \
  --header "X-Auth-Token: TOKEN" \
  https://STORAGEURL/cloudservers/$file ; done
If your image has more than one .tar.gz file, you’ll need to merge them back into one file. This can be accomplished via the cat command. For the sake of uniformity, we’ll do this even if there is only one file in the image.
cat UNIQUEPREFIX* > image.tar.gz
rm UNIQUEPREFIX*
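One caveat: if an image ever has ten or more parts, a bare glob expands .tar.gz.10 before .tar.gz.2, which would corrupt the merged file. A sketch that concatenates the parts in numeric order instead (assuming UNIQUEPREFIX itself contains no dots):

# sort the parts numerically on the suffix after ".tar.gz." before concatenating
ls UNIQUEPREFIX*.tar.gz.* | sort -t . -k 4 -n | xargs cat > image.tar.gz
rm UNIQUEPREFIX*.tar.gz.*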
Now it’s just a matter of extracting the image to the root filesystem.
tar --strip-components=2 --hard-dereference -xpf image.tar.gz -C /
Important: If your tar doesn’t support --hard-dereference, you will need to build tar from source or download a statically-compiled tar binary. You can try it without the flag, but tar may complain and the extraction may fail.
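Before running the full extraction, you can check whether your tar knows the flag (a quick sketch; GNU tar lists its supported options in its help output):

tar --help 2>&1 | grep -q -- --hard-dereference \
  && echo "supported" || echo "not supported"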
Note that the --strip-components=2 is needed, because the image is stored in the archive under ./image, so we need to strip the “.” and the “image” components.
Now we need to restore the archived /etc directory and remove the image tarball.
cp -a /etc.bak/* /etc/
rm -rf /etc.bak image.tar.gz
Now reboot the target server, and that’s it. Congratulations! You’ve just built a new Rackspace Cloud Server from a backup image!
Hi, I’ve been trying to get past the point of untarring the image into the system root, but keep hitting this:
gzip: stdin: decompression OK, trailing garbage ignored
tar: Child returned status 2
tar: Exiting with failure status due to previous errors
Any suggestions for troubleshooting? It seems like gzip is treating the yaml file as garbage because it’s not compressed.
Thanks for the great post. I have been using this method to set up some images, and I discovered that the full copy of the /etc directory overwrote too many system settings that I depended upon. After untarring the image, I run these commands, which make for a truer re-image (some of these may be specific to my system configuration and CentOS 5.5):
mv /etc /etc.image
cp -a /etc.bak /etc
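# the leading backslash on \cp below bypasses any shell alias (e.g. "cp -i" on CentOS),
# so the -f flag really does force the overwrite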
\cp -f /etc.image/group* /etc
\cp -f /etc.image/gshadow* /etc
\cp -f /etc.image/shadow /etc
\cp -f /etc.image/sudoers /etc
\cp -f /etc.image/passwd* /etc
Here’s a script that may help with extracting images: http://pastie.org/2299771. You give it your image’s YML filename as an argument and it extracts the tar into the CWD. You may also be able to go over ServiceNet in the same datacenter if you s/storage101/snet-storage101 on line #27, avoiding bandwidth charges from both Cloud Files and Cloud Servers.
Hey guys, really nice post, and useful for me at the moment. We’ve contacted the Rackspace team about this, and they told us to look into this blog or use rsync.
So, you’re getting an image from one server and extracting that image onto a new one. In the end it’s pretty much the same as using rsync (with the difference that with rsync you’re copying from the live server).
Is that the only reason to prefer this method over rsync? I’m really curious about that before the team and I move forward on this.
Martin,
This method will generally be faster than rsyncing (downloading the tar and extracting vs. a file-level transfer), and it’s cleaner either way. Speed is the main factor: you’ll need to put the target server in rescue mode in order to overwrite some of the files, and since rescue mode has a time limit, downloading and extracting the tar will generally be the better option.
@Riuujinx Thanks!