Ceph+radosgw Storage for Dev/Test/QA in a Few Minutes

While adding Swift (object storage) support to a project, I found I also wanted Ceph support via radosgw, another object storage system.

It turns out that deploying Ceph is not trivial. However, Juju helps manage the complexity of Big Software.
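The walkthrough assumes a Juju controller is already bootstrapped on the local LXD cloud; if you do not have one yet, something like this sets one up (the controller name is just an example):

$ juju bootstrap localhost lxd-test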

There are a couple of tricks to running Ceph on LXD (both show up in the bundle below):

  1. Specify a full path that does not start with /dev for the osd-devices option, i.e. a directory rather than a block device.
  2. Set use-direct-io: false when the LXD host's storage is backed by ZFS.
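If you have already deployed the charms, you can flip these options on the live applications instead; a rough sketch, assuming a reasonably recent Juju client and the application names from the bundle below:

$ juju config ceph-osd osd-devices='/srv/ceph-osd' use-direct-io=false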

You can add these options to a bundle and deploy everything with a single command, using a bundle file like this:

$ cat > ceph-bundle.yaml
services:
  ceph-mon:
    charm: cs:~openstack-charmers-next/xenial/ceph-mon
    num_units: 3
    to:
    - '1'
    - '2'
    - '3'
  ceph-osd:
    charm: cs:~openstack-charmers-next/xenial/ceph-osd
    num_units: 3
    options:
      osd-devices: /srv/ceph-osd
      osd-reformat: 'yes'
      use-direct-io: false
    to:
    - '1'
    - '2'
    - '3'
  ceph-radosgw:
    charm: cs:~openstack-charmers-next/xenial/ceph-radosgw
    num_units: 1
    options:
      use-embedded-webserver: true
    to:
    - 1
relations:
- - ceph-osd:mon
  - ceph-mon:osd
- - ceph-radosgw:mon
  - ceph-mon:radosgw
machines:
  '1':
    constraints: arch=amd64
    series: xenial
  '2':
    constraints: arch=amd64
    series: xenial
  '3':
    constraints: arch=amd64
    series: xenial
^D
$ juju deploy ceph-bundle.yaml

Wait a while and watch juju status. Then see whether radosgw is up and try s3cmd.
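The 10.0.5.117 used below is simply the address of the ceph-radosgw unit on my LXD bridge; yours will differ. A quick way to find it (a sketch; the unit name follows from the bundle above):

$ juju status ceph-radosgw
$ juju run --unit ceph-radosgw/0 'unit-get public-address'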

$ curl 10.0.5.117
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user create --uid="ubuntu" --display-name="Ubuntu Ceph"'
{
    "user_id": "ubuntu",
    "display_name": "Ubuntu Ceph",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "ubuntu",
            "access_key": "O5W6PMIQZ83ODYCGVIGJ",
            "secret_key": "6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
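The access_key and secret_key in that JSON are the credentials s3cmd needs below. If you prefer not to copy them by hand, something along these lines pulls them out (assumes jq is installed on the client; radosgw-admin user info re-prints the same record):

$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user info --uid=ubuntu' | jq -r '.keys[0].access_key'
$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user info --uid=ubuntu' | jq -r '.keys[0].secret_key'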
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 mb s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 put ceph.yaml s3://testb/ceph.yaml
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls
2016-11-16 20:23  s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls s3://testb
2016-11-16 20:24 2581 s3://testb/ceph.yaml
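Typing the host and keys on every s3cmd invocation gets old quickly. They can live in ~/.s3cfg instead; a minimal sketch using the values from this run (swap in your own address and keys):

$ cat > ~/.s3cfg
[default]
access_key = O5W6PMIQZ83ODYCGVIGJ
secret_key = 6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA
host_base = 10.0.5.117
host_bucket = 10.0.5.117
use_https = False
signature_v2 = True
^D
$ s3cmd ls s3://testb

After that, the plain s3cmd commands work without the long option strings.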

Dev and test as you wish.

It should go without saying, but I will write it anyway: do not use Ceph on LXD like this in production. Ceph must be scaled out across separate physical machines.