A Ceph+radosgw for Storage for Dev/Test/QA in a Few Minutes

While adding Swift (object storage) support to a project, I realized I also wanted Ceph support via radosgw, another object storage system.

It turns out that deploying Ceph is not trivial or easy. However, Juju helps manage the complexity of Big Software.

There are a couple of tricks to running Ceph on LXD.

  1. Specify a full path which does not start with /dev for the osd-devices option.
  2. Set use-direct-io: false if the LXD host's storage is ZFS.

You can put these options in a bundle file and deploy everything with a single command, like this:

$ cat > ceph-bundle.yaml
services:
  ceph-mon:
    charm: cs:~openstack-charmers-next/xenial/ceph-mon
    num_units: 3
    to:
      - '1'
      - '2'
      - '3'
  ceph-osd:
    charm: cs:~openstack-charmers-next/xenial/ceph-osd
    num_units: 3
    options:
      osd-devices: /srv/ceph-osd
      osd-reformat: 'yes'
      use-direct-io: false
    to:
      - '1'
      - '2'
      - '3'
  ceph-radosgw:
    charm: cs:~openstack-charmers-next/xenial/ceph-radosgw
    num_units: 1
    options:
      use-embedded-webserver: true
    to:
      - 1
relations:
  - - ceph-osd:mon
    - ceph-mon:osd
  - - ceph-radosgw:mon
    - ceph-mon:radosgw
machines:
  '1':
    constraints: arch=amd64
    series: xenial
  '2':
    constraints: arch=amd64
    series: xenial
  '3':
    constraints: arch=amd64
    series: xenial
^D
$ juju deploy ceph-bundle.yaml

Wait a while and watch juju status. Then see if radosgw is up and try s3cmd:
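
To find the radosgw unit's address (the 10.0.5.117 used below), you can pull it out of status. This is a minimal sketch; the exact field names can vary a little between Juju versions:

$ watch juju status
$ juju status ceph-radosgw --format=yaml | grep public-address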

$ curl 10.0.5.117
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user create --uid="ubuntu" --display-name="Ubuntu Ceph"'
{
    "user_id": "ubuntu",
    "display_name": "Ubuntu Ceph",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "ubuntu",
            "access_key": "O5W6PMIQZ83ODYCGVIGJ",
            "secret_key": "6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 mb s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 put ceph.yaml s3://testb/ceph.yaml
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls
2016-11-16 20:23  s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls s3://testb
2016-11-16 20:24 2581 s3://testb/ceph.yaml

Dev and Test as you wish.

It should go without saying, but I will write it anyway: do not use Ceph on LXD like this in production. Ceph must be scaled out across separate physical machines.

A Swift for Storage for Dev/Test/QA in 2 Minutes

I was adding swift support to a project and it became apparent that things would be easier if I had a local swift to which I could connect.

I needed to familiarize myself with the API and its behavior and doing this locally rather than over a VPN on some production system with a QA tenant would make things a lot easier.

I could have used DevStack. DevStack is great if you are developing OpenStack itself; that is what it is made for, and it runs from the OpenStack source. For my purposes it seemed like overkill.

What I ended up with is a cloud-config file which I pass to cloud-init when the system starts. I use LXD to start a container. Less than two minutes later, on my 10-year-old home server, I have swift up and running and responding to my commands.

$ lxc launch -e ubuntu:16.04 $(petname) -c user.user-data="$(cat swift.yaml)"
Creating testy-Abril
Starting testy-Abril
$ lxc list
+-------------+---------+-------------------+------+-----------+-----------+
|    NAME     |  STATE  |       IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+-------------------+------+-----------+-----------+
| testy-Abril | RUNNING | 10.0.5.169 (eth0) |      | EPHEMERAL | 0         |
+-------------+---------+-------------------+------+-----------+-----------+
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 upload t README.md
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
t
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list t
README.md

No need to ssh into the container at all. Just start it, wait a bit for things to install, and a swift API is up and running.
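
Since the whole point was to learn the API and its behavior, it is also handy to poke the service with curl directly. This is only a rough sketch of the TempAuth flow; the account portion of the storage URL (AUTH_admin here) depends on how the auth middleware is configured:

$ curl -i -H 'X-Auth-User: admin:admin' -H 'X-Auth-Key: admin' http://10.0.5.169:8080/auth/v1.0
$ # the response includes X-Storage-Url and X-Auth-Token headers; reuse the token
$ curl -H 'X-Auth-Token: <token from above>' http://10.0.5.169:8080/v1/AUTH_admin/t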

The swift.yaml file is here on github; the only change you should make is to either remove the last line or change it to import your own key so you can ssh to the container.

Some LXD containers on a hidden net, others on your LAN

Back in November I wrote about Converting eth0 to br0 and getting all your LXC or LXD onto your LAN.

It works, but you might not want ALL of your LXD on your LAN.

You’ll still need your LAN interface to be a bridge (br0) rather than a plain, unbridged device. Follow the Bridge your interface section of that post to convert your eth0 to br0.

I’ve fully converted to using LXD. I don’t even remember if LXC supports profiles. I think it does, so I think the same idea could be applied to LXC, but I’m only showing this for LXD.

First, copy the default profile:

lxc profile copy default lanbridge

Second, edit the new profile to use br0 instead of lxdbr0:

lxc profile device set lanbridge eth0 parent br0
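
You can check that the change took with lxc profile show lanbridge. The eth0 device should come back looking roughly like this (other keys trimmed):

$ lxc profile show lanbridge
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic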

Third and finally, start instances with that profile:

lxc launch ubuntu-xenial -p lanbridge

In my case, this instance is on my local LAN AND on public IPv6 space (thanks, Comcast).

heritable-gale    | RUNNING | 192.168.15.172 (eth0) | 2601:400:8000:5ab3:216:3eff:fe73:d242 (eth0)


Cloud-config with LXD

A year ago I wrote http://jrwren.wrenfam.com/blog/2015/05/26/ubuntu-cloud-image-based-containers-with-lxc/

Since then, LXD has become the best way to use LXC.

By default, LXD already uses Ubuntu cloud images.

The lesser-known feature is using cloud-config with LXD. It turns out it is very easy to pass user-data to an LXD instance when you start it, just like you would on any cloud provider.

LXD even has the -e option to make your LXD instance ephemeral. It will be deleted automatically when you stop it.

Just like in that previous blog post, I create a file named one.yaml. The name can be anything. Then I start it:

lxc launch ubuntu:14.04 crisp-Hadley -c user.user-data="$(cat one.yaml)"

That is all there is to it.
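
If you want to confirm that the user-data actually landed on the instance, you can read it back out of the container's config (assuming the container name used above):

$ lxc config get crisp-Hadley user.user-data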

Here is an example of config similar to what I used recently to QA a build configuration:

#cloud-config
output:
  all: "|tee -a /tmp/cloud.out"
#hostname: {{ hostname }}
bootcmd:
  - rm -f /etc/dpkg/dpkg.cfg.d/multiarch
apt_sources:
  - source: ppa:yellow/ppa
ssh_import_id: [evarlast] # use -S option
packages:
  - make
final_message: "The system is finally up, after $UPTIME seconds"
runcmd:
  - cd /home/ubuntu
  - git clone https://www.github.com/jrwren/myproject
  - cd myproject
  - make deps run
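
Because the output section tees everything to /tmp/cloud.out, you can watch the run from the host without sshing in at all (assuming the crisp-Hadley name from earlier):

$ lxc exec crisp-Hadley -- tail -f /tmp/cloud.out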

Converting eth0 to br0 and getting all your LXC or LXD onto your LAN

Wayne has a great post on the new juju lxd work. I’ve been using it a bit and it is awesome. It is super fast and I can create and destroy environments faster than creating and destroying with juju-local.

One thing which I’ve done which has made all LXC and LXD instances more valuable to me, in my home development environment, is to use a bridge to put them directly on my home LAN.

Normally, LXC creates its own device, lxcbr0, which is managed by the lxc-net service. The service creates the device, brings it up, and manages the dnsmasq instance tied to it (which provides DHCP for the 10.0.3.0/24 range).

Bridge your interface

Instead of using lxcbr0, I create a br0. I add my eth0 (and, in my case, other devices) to that br0. Then I configure LXC and LXD to use br0 instead of lxcbr0. I go as far as stopping the lxc-net service, since I’m not using it.

There is one trick if you are going to do this on a remote system. For example, I have an old laptop I leave in the basement, and I’m too lazy to walk down there and use its console when I screw up its networking. The trick is to make sure eth0 comes up attached to br0 when it is added there.

Before you do anything, make sure bridge-utils is installed. It probably is if you are already using lxc, but on a fresh install you’ll want to apt-get install bridge-utils.

Edit your /etc/network/interfaces and disable eth0 by setting it to manual. Add it to br0 by adding a new br0 section and listing eth0 in bridge-ifaces and bridge-ports.

auto br0
iface br0 inet dhcp
    bridge-ifaces eth0
    bridge-ports eth0
    up ifconfig eth0 up

iface eth0 inet manual

Now run sudo ifup br0. At this point something magical happens: the DHCP lease is renewed, but this time the IP address is bound to br0. The magical part is that br0 uses eth0’s MAC to make the DHCP request, so you get the same IP address in response and even your SSH session stays open. YAY!
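
You can confirm that eth0 is now attached to the bridge with brctl show, from the bridge-utils package mentioned above (the bridge id below is made up):

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001122334455       no              eth0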


LXC can use any bridge

Now configure LXC to use this bridge.

apt-get install lxc
sed -i 's/lxc.network.link = lxcbr0/lxc.network.link = br0/' /etc/lxc/default.conf

TADA, now any LXC containers you start with lxc-start will use br0 and get an address from your household DHCP server. They will be accessible from any host in your home.
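
For example, after creating and starting a container the old way, lxc-ls shows the address it picked up from the LAN (a sketch; the container name and address here are made up):

$ sudo lxc-create -t download -n testlan -- -d ubuntu -r xenial -a amd64
$ sudo lxc-start -n testlan
$ sudo lxc-ls --fancy
NAME     STATE   AUTOSTART GROUPS IPV4            IPV6
testlan  RUNNING 0         -      192.168.15.201  -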

Now what about LXD?

LXD can use any bridge

It turns out that while LXD is a layer on top of LXC, it doesn’t use /etc/lxc/default.conf for its default config; instead it has its own settings. These are editable with lxc profile edit default. Change lxcbr0 to br0 in your editor, then save and exit. You can check that it is correct by using lxc profile show default.
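
If you’d rather not open an editor, the same one-line device change used for the lanbridge profile earlier works on the default profile too (this assumes the default profile’s nic device is named eth0, as it is on a stock install):

$ lxc profile device set default eth0 parent br0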

There you have it. LXD instances starting on your local LAN.

Now go read Wayne’s post again and use the Juju LXD provider.