It turns out that deploying Ceph is not trivial. Juju, however, helps manage the complexity of Big Software.
There are a couple of tricks to running Ceph on LXD, and they are mostly the ceph-osd options in the bundle below: pointing osd-devices at a plain directory instead of a block device, and setting use-direct-io to false.
You can put all of these options in a single bundle file and deploy with one command, like this:
$ cat > ceph-bundle.yaml
services:
  ceph-mon:
    charm: cs:~openstack-charmers-next/xenial/ceph-mon
    num_units: 3
    to:
    - '1'
    - '2'
    - '3'
  ceph-osd:
    charm: cs:~openstack-charmers-next/xenial/ceph-osd
    num_units: 3
    options:
      osd-devices: /srv/ceph-osd
      osd-reformat: 'yes'
      use-direct-io: false
    to:
    - '1'
    - '2'
    - '3'
  ceph-radosgw:
    charm: cs:~openstack-charmers-next/xenial/ceph-radosgw
    num_units: 1
    options:
      use-embedded-webserver: true
    to:
    - 1
relations:
- - ceph-osd:mon
  - ceph-mon:osd
- - ceph-radosgw:mon
  - ceph-mon:radosgw
machines:
  '1':
    constraints: arch=amd64
    series: xenial
  '2':
    constraints: arch=amd64
    series: xenial
  '3':
    constraints: arch=amd64
    series: xenial
^D
$ juju deploy ceph-bundle.yaml
Wait a while and watch juju status. Then see if radosgw is up and try s3cmd:
$ curl 10.0.5.117
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user create --uid="ubuntu" --display-name="Ubuntu Ceph"'
{
    "user_id": "ubuntu",
    "display_name": "Ubuntu Ceph",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "ubuntu",
            "access_key": "O5W6PMIQZ83ODYCGVIGJ",
            "secret_key": "6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 mb s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 put ceph.yaml s3://testb/ceph.yaml
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls
2016-11-16 20:23  s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls s3://testb
2016-11-16 20:24      2581   s3://testb/ceph.yaml
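Retyping the access and secret key for every s3cmd call gets tedious. Since radosgw-admin prints JSON, the keys are easy to pull out with a few lines of Python; a small sketch (the extract_s3_creds helper is my own name, and the sample below is abridged from the output above):

```python
import json

def extract_s3_creds(user_json):
    """Return (access_key, secret_key) from radosgw-admin's JSON output."""
    doc = json.loads(user_json)
    key = doc["keys"][0]  # first S3 keypair on the user
    return key["access_key"], key["secret_key"]

# Abridged copy of the `radosgw-admin user create` output above.
sample = '''
{
    "user_id": "ubuntu",
    "display_name": "Ubuntu Ceph",
    "keys": [
        {
            "user": "ubuntu",
            "access_key": "O5W6PMIQZ83ODYCGVIGJ",
            "secret_key": "6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA"
        }
    ]
}
'''
access, secret = extract_s3_creds(sample)
print(access, secret)
```

From a shell you could feed these straight into the s3cmd flags rather than copy-pasting from the terminal.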
Dev and Test as you wish.
It should go without saying, but I will write it anyway: do not use Ceph on LXD like this in production. Ceph must be scaled out to different machine nodes.
I needed to familiarize myself with the Swift API and its behavior, and doing this locally, rather than over a VPN against some production system with a QA tenant, would make things a lot easier.
I could have used DevStack. DevStack is great if you are developing OpenStack itself; that is what it is made for, and it runs everything from OpenStack source. For my purposes it seemed like overkill.
What I ended up with is a cloud-config file that I pass to cloud-init when the instance starts. I use LXD to launch a container, and less than 2 minutes later, on my 10-year-old home server, I have Swift up and running and responding to my commands.
$ lxc launch -e ubuntu:16.04 $(petname) -c user.user-data="$(cat swift.yaml)"
Creating testy-Abril
Starting testy-Abril
$ lxc list
+-------------+---------+-------------------+------+-----------+-----------+
| testy-Abril | RUNNING | 10.0.5.169 (eth0) |      | EPHEMERAL | 0         |
+-------------+---------+-------------------+------+-----------+-----------+
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 upload t README.md
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
t
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list t
README.md
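That `-A http://…/auth/v1.0` endpoint is Swift's v1.0 auth (TempAuth in a setup like this): the client GETs it with X-Auth-User and X-Auth-Key headers, and the response headers hand back a storage URL and a token for all subsequent requests. A minimal standard-library sketch of the handshake, assuming the container above is running at 10.0.5.169 (auth_v1 is my helper name, not part of any library):

```python
import urllib.request

def auth_v1(endpoint, user, key):
    """Swift v1.0 auth handshake: returns (storage_url, auth_token)."""
    req = urllib.request.Request(endpoint)
    req.add_header("X-Auth-User", user)  # e.g. "admin:admin" (account:user)
    req.add_header("X-Auth-Key", key)    # e.g. "admin"
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Storage-Url"], resp.headers["X-Auth-Token"]

# Against the container above (requires it to be up):
#   url, token = auth_v1("http://10.0.5.169:8080/auth/v1.0", "admin:admin", "admin")
# Later requests go to `url` with an X-Auth-Token header; for instance,
# listing the "t" container created above is a GET of url + "/t".
```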
No need to ssh into the container at all: just start it, wait a bit for things to install, and a Swift API is up and running.
The swift.yaml file is here on github, and the only change you should make is to either remove the last line or change it to import your own key so you can ssh into the container.
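For reference, that key import is cloud-init's ssh_import_id module. A hypothetical version of such a line (the id below is a placeholder, not from the actual file; substitute your own Launchpad or GitHub username):

```yaml
#cloud-config
# Hypothetical placeholder: import your own public key so you can ssh in.
ssh_import_id: [lp:your-launchpad-id]
```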