Converting eth0 to br0 and getting all your LXC or LXD onto your LAN

Wayne has a great post on the new juju lxd work. I’ve been using it a bit and it is awesome. It is super fast, and I can create and destroy environments faster than I ever could with juju-local.

One thing I’ve done that has made all LXC and LXD instances more valuable to me in my home development environment is to use a bridge to put them directly on my home LAN.

Normally, LXC creates its own bridge device, lxcbr0, which is managed by the lxc-net service. The service creates the device, brings it up, and manages the dnsmasq instance tied to it (which provides DHCP for the container range).
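You can see this default setup for yourself; these commands only inspect state (lxcbr0 is the default bridge name on Ubuntu):

# the bridge device lxc-net created
ip addr show lxcbr0
# the dnsmasq instance serving DHCP on it
ps aux | grep [d]nsmasq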

Bridge your interface

Instead of using lxcbr0, I create a br0. I add my eth0 (and, in my case, other devices) to that br0. Then I configure LXC and LXD to use br0 instead of lxcbr0. I go as far as stopping the lxc-net service, since I’m not using it.

There is one trick if you are going to do this on a remote home system. For example, I have an old laptop I leave in the basement, and I’m really lazy: I don’t want to walk down there and use its console when I screw up its networking. The trick is to make sure eth0 comes up on br0 when it is added there.

Before you do anything, make sure bridge-utils is installed. It probably is if you are already using LXC, but on a fresh install you’ll want to install it:
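sudo apt-get install bridge-utils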

Edit your /etc/network/interfaces and disable eth0 by setting it to manual. Add it to br0 by adding a new br0 section and listing eth0 in bridge-ifaces and bridge-ports.

auto br0
iface br0 inet dhcp
    bridge-ifaces eth0
    bridge-ports eth0
    up ifconfig eth0 up

iface eth0 inet manual

Now run sudo ifup br0. At this point something magical happens: the DHCP lease is renewed, but this time the IP address is bound to br0. The magical part is that br0 used the eth0 MAC to make the DHCP request, so you get the same IP address in response and even your SSH session stays open. YAY!
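If you want to convince yourself before moving on, brctl from bridge-utils will show eth0 enslaved to br0 (inspection only):

brctl show br0
ip addr show br0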


LXC can use any bridge

Now configure LXC to use this bridge.

apt-get install lxc
sed -i 's/ = lxcbr0/ = br0/' /etc/lxc/default.conf

TADA! Now any LXC containers you start with lxc-start will use br0 and get an address from your household DHCP server. They will be accessible from any host in your home.
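A quick way to check is lxc-ls; on trusty-era lxc the --fancy flag prints a table that includes each container’s IP:

sudo lxc-ls --fancy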

Now what about LXD?

LXD can use any bridge

It turns out that while LXD is a layer on top of LXC, it doesn’t use /etc/lxc/default.conf for its default config; it has its own settings, editable with lxc profile edit default. Change lxcbr0 to br0 in your editor, then save and exit. You can check that it is correct by using lxc profile show default.
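In practice, that is just the two commands mentioned, shown here for copy and paste:

# opens the default profile in $EDITOR; change lxcbr0 to br0, save, exit
lxc profile edit default
# verify the change took
lxc profile show default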

There you have it. LXD instances starting on your local LAN.

Now go read Wayne’s post again and use the Juju LXD provider.

Comparative Risk Analysis in IT Systems

You can quote me on this: “comparative risk analysis is among the most cost-effective security measures your org can make. Why lock back door when front is wide open?”

What do I mean by this?

I mean there is little point in applying security analysis inconsistently across your system. A chain fails at its weakest link. When two links in the chain are identified as equally weak, nothing is gained by spending resources (time and money) to improve the strength of only one link.

Let me get specific.

Let’s say you have some services in production which interact; call them Service A and Service B. You are introducing a new service, Service C. At the time of introduction, concerns are raised about some security aspects of C.

Let me be clear: these concerns are 100% valid. Let us say, for example, that Service C is consuming services of Service A using an overly privileged account rather than a least-privilege account. The correct solution is to introduce a lesser-privileged account capable of doing only the operations required by Service C.

From an “is it optimally secure” point of view for deploying Service C, that is all there is to say.

Rather than this point of view, let’s take an overall systems point of view. Service B is using the exact same overly privileged account to perform operations on Service A. Further, the sources of data which Service B consumes (a publicly exposed HTTPS server accepting GET, POST, PUT, etc.) are the same as, or broader than, Service C’s.

What is gained by going back and retooling Services A and C to use that lesser-privileged account? Well, security, of course. C is less vulnerable.

That is true. You’ve locked the back door while the front door is wide open.


How much did it cost? 80 person-hours, plus the (often difficult to tie to a dollar amount) delay of introducing that much-needed Service C.

Was the risk of privilege escalation reduced?

I honestly don’t know.


  • A DuckDuckGo search for comparative risk analysis yields some fun reads.
  • In health, it is like counting calories and eating very well while continuing to abuse illicit drugs.
    “I totally would not eat that McDonald’s. It is so gross. Where is my lighter, I need another ciggy.”

Turbo Charging Ubuntu Server and Cloud Images

I do a bit of devops. I get frustrated with waiting even a few seconds longer than I feel like I should. Here is my list of tweaks to Ubuntu Server and Cloud Images.

First, things that make apt-get update fly, because in testing out packages, I find myself running this way too much.

# disable “translations” of Packages index (½ the downloads)
echo Acquire::Languages \"none\"\; | tee /etc/apt/apt.conf.d/02nolang
# disable source repos (½ the downloads)
sed -i -e 's/^deb-src/#deb-src/' /etc/apt/sources.list
# disable universe and multiverse by default
# If it is trusty, you’ll need to
apt-get install software-properties-common
# so that you can...
add-apt-repository -r universe
add-apt-repository -r multiverse
# don’t need i386 packages on amd64
rm -f /etc/dpkg/dpkg.cfg.d/multiarch
sudo dpkg --remove-architecture i386

Next, things which shrink the overall system image. I know it doesn’t seem like much, but I don’t want to wait for updatedb to index unneeded files. I don’t want to wait for xapian on systems on which I never use it. I don’t want to wait for apt-get upgrade to upgrade any of these packages when I don’t use them.

# disable apt-xapian-index - not sure what uses it, I don’t care.
apt-get purge -y apt-xapian-index
# don’t need kernel headers, you won’t compile on your server
apt-get purge -y linux-headers-generic linux-headers-virtual
# if you aren’t using landscape-client you can save 7-10MB:
apt-get purge -y python-twisted-core
# you could go less than ubuntu-minimal by losing x11 stuffs
sudo apt-get purge -y libx11-data xkb-data
# we don’t use interactive commands on our cloud servers - don’t need command not found
apt-get purge -y command-not-found command-not-found-data
# default to --no-install-recommends
echo APT::Install-Recommends \"0\"\;  | sudo tee -a /etc/apt/apt.conf.d/03norecommends
# don’t need ntfs support
sudo apt-get purge -y ntfs-3g

There. Slimmer, tinier images with fewer updates to download and install.

Juju makes getting to http2 easier

While looking at http2 options for apache, it seemed to me that it might be easy to add support for http2 on any ubuntu system.

I started by forking the apache2 charm, because I just want to play around.

You can try it out:

juju bootstrap
juju deploy cs:~evarlast/trusty/apache2 --to 0
juju set apache2 ssl_cert=SELFSIGNED
juju set apache2 enable_modules=ssl
juju run --service apache2 'a2ensite default-ssl'
juju run --service apache2 'service apache2 reload'

Now run `juju status` to get the IP address of the node and try it out.

Open your browser’s dev tools and confirm that http2 was used.
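You can also check from a terminal, assuming your curl was built with http2 support (the --http2 flag; -k accepts the self-signed cert):

# expect the status line to read HTTP/2 if it was negotiated;
# <apache2-unit-ip> is the address from juju status
curl -skI --http2 https://<apache2-unit-ip>/ | head -n 1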

Ubuntu 15.10 brings faster add-apt-repository

Wily Werewolf was released yesterday, and with it come many new things out of the box.

My favorite feature is something that is silly, simple, a tiny patch, and speeds up something I do often.

I work in the cloud and that means I am deploying new machine images many times a day. Anything to speed this up is something that I want.

In Wily, add-apt-repository now has a -u switch.

-u, --update Update package cache after adding

You’ll notice that everywhere on the internet where add-apt-repository is used, the next line of instructions is `apt-get update`. This refreshes the package cache for ALL of the configured apt repositories. On a machine with slow IO or a slow network, this can take more than just a few seconds, possibly a minute or two. That is too long to wait.

The -u option solves this problem. Not only does it remove the need to run `apt-get update` by doing so automatically, it also fetches the package cache only for the newly added repository, saving much time.

So anywhere you see:

sudo add-apt-repository FOO
sudo apt-get update
sudo apt-get install BAR

add the -u and remove the update command:

sudo add-apt-repository -u FOO
sudo apt-get install BAR

Revel in the time you save.

Ubuntu Cloud Image Based Containers with LXC

At a previous employer, we standardized on Ubuntu cloud images on AWS EC2 and in our OpenStack. You can find the images on Ubuntu’s cloud images site. If you are using Ubuntu on EC2 or another Certified Public Cloud, then it’s most likely one of these cloud images.

We leveraged cloud-init and extended an already existing simple management system to allow passing user-data to EC2 instances and OpenStack Nova instances. The use of ephemeral instances proved very powerful and influenced our thinking greatly. We came up with great solutions using these very simple techniques.

Even before I left that job, I longed for an easy way to do the same thing for myself. I played a bit with the AWS CLI tool (the newer python boto based tool) and yes, aws ec2 run-instances --user-data works. I always longed to get the same thing on my home server and on my laptop.

Finally, I figured out how to do this with LXC. It’s simple, yes, but I finally learned how to do what I want.

tl;dr example:

lxc-create -n crisp-Hadley -t ubuntu-cloud -- -r trusty -S ~/.ssh/ -u one.yaml

cat > one.yaml
#cloud-config
output:
  all: "|tee -a /tmp/cloud.out"
bootcmd:
  - rm -f /etc/dpkg/dpkg.cfg.d/multiarch
  - for i in 1 2 3 4 5 ; do curl -s | apt-key add - && break ; sleep 2 ; done
apt_sources:
  - source: deb stable main
  - source: ppa:evarlast/experimental
packages:
  - mongodb
runcmd:
  - service myapp start
final_message: "The system is finally up, after $UPTIME seconds"


Most LXC tutorials that I’ve seen walk the user through using the download template. The download template is not bad for new users, but I want something more powerful. It turns out there are a number of templates available by default in /usr/share/lxc/templates and you can even create your own.
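You can list what shipped with your install (same path as above):

ls /usr/share/lxc/templates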

The template I am interested in is the ubuntu-cloud template, defined in /usr/share/lxc/templates/lxc-ubuntu-cloud. These lxc templates are not so much templates as they are scripts. Some of them use other scripts, called hooks, defined in /usr/share/lxc/hooks.


The help for templates is a little hidden, and lxc is a little stupid about letting you view it. You COULD run lxc-create, use the -- to pass options to the template, and use -h. That has the unfortunate side effect of creating the container anyway; you’d have to lxc-destroy it even though you only used -h. Instead, it is easier to invoke the template directly and get help.

$ /usr/share/lxc/templates/lxc-ubuntu-cloud -h
LXC Container configuration for Ubuntu Cloud images.

Generic Options
[ -r | --release <release> ]: Release name of container, defaults to host
[ --rootfs <path> ]: Path in which rootfs will be placed
[ -a | --arch ]: Architecture of container, defaults to host architecture
[ -T | --tarball ]: Location of tarball
[ -d | --debug ]: Run with 'set -x' to debug errors
[ -s | --stream]: Use specified stream rather than 'tryreleased'

Additionally, clone hooks can be passed through (ie, --userdata). For those,
 /usr/share/lxc/hooks/ubuntu-cloud-prep --help

Here we see that if we don’t specify the -r option, the release defaults to match the host. I’m running vivid on my host, but I’d really like to stick with trusty inside of containers. The -a is interesting, and I can only guess that it only works where compatible: -a i386 would let me use the i386 cloud image on an amd64 host. I can’t think of any other case where mixing architectures would work in a container.

But there is nothing here about cloud-init.

cloud-init via cloud-prep

The last line of the help says clone hooks can be passed through. This is useful and, IMO, the most important item. Run the help for ubuntu-cloud-prep exactly as suggested.

$ /usr/share/lxc/hooks/ubuntu-cloud-prep --help
Usage: ubuntu-cloud-prep [options] root-dir

  root-dir is the root directory to operate on

  [ -C | --cloud  ]:       do not configure a datasource.  incompatible with
                           options marked '[ds]'
  [ -i | --instance-id]:   instance-id for cloud-init, defaults to random [ds]
  [ -L | --nolocales ]:    Do not copy host's locales into container
  [ -S | --auth-key ]:     ssh public key file for datasource [ds]
  [ -u | --userdata ]:     user-data file for cloud-init [ds]

Options for --userdata and --auth-key. Are those what I think they are? It turns out, yes: they work exactly like choosing a public key and user-data when starting an EC2 or Nova instance.

Putting all this together, you can create cloud-config yaml files and specify an ssh key, and starting an LXC container is just like starting a public cloud instance.

For example, want a postgresql server running?

$ cat > psql.yaml
#cloud-config
output:
  all: "|tee -a /tmp/cloud.out"
packages:
  - postgresql
runcmd:
  - echo "listen_addresses = '*'" >>/etc/postgresql/9.3/main/postgresql.conf
  - sudo -u postgres createuser -D -R -S myuser
  - sudo -u postgres createdb -E utf8 -O myuser mydb
  - echo host mydb myuser trust >> /etc/postgresql/9.3/main/pg_hba.conf
  - service postgresql restart
$ lxc-create -n mypostgresql -t ubuntu-cloud -- -r trusty -S ~/.ssh/ -u psql.yaml
$ lxc-start -n mypostgresql
$ lxc-info -n mypostgresql
Name:           mypostgresql
State:          RUNNING
PID:            6899
CPU use:        3.67 seconds
BlkIO use:      168.00 KiB
Memory use:     23.21 MiB
KMem use:       0 bytes
Link:           veth452FOE
 TX bytes:      3.84 KiB
 RX bytes:      20.36 KiB
 Total bytes:   24.20 KiB
$ psql -h -U myuser -d mydb
psql (9.4.2, server 9.3.7)
SSL connection (protocol: TLSv1.2, cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=> \q

One of my favorite things about using the cloud image like this is that, unlike the download images, the openssh server is running and listening by default, and the ubuntu user has the public key which you provided in its authorized_keys file. Everything is ready to go.
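So logging in is just plain ssh as the ubuntu user (the placeholder address below is whatever lxc-info reported):

# <container-ip> is the IPv4 address from lxc-info
ssh ubuntu@<container-ip>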

11 lines of config, 373 bytes is not much at all for a running postgresql server.

When I don’t want to use juju, this is my go to option.

logstash on ubuntu the easy way

The nice folks at elasticsearch package up logstash for debian and ubuntu. It is very easy to use.

$ curl -s | sudo apt-key add -
$ echo "deb stable main" | sudo tee /etc/apt/sources.list.d/logstash.list
$ sudo apt-get update
$ sudo apt-get install logstash

Now you have logstash.

Write a config file and fire it up.
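If you just want a smoke test, here is a minimal sketch of a config: it reads events from stdin and pretty-prints them to stdout (the stdin and stdout plugins and the rubydebug codec ship with logstash):

cat > logstash.conf <<'EOF'
input { stdin { } }
output { stdout { codec => rubydebug } }
EOF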

/opt/logstash/bin/logstash -f logstash.conf

Zulu JRE from Azul Systems is a hidden gem

Azul Systems, the company that Cliff Click works for, builds their own openjdk version.

If you don’t recall Cliff Click, I was first introduced to him via an awesome video of one of his talks.

If I have to run on the JVM, then this is how I want to run on the JVM.

Zulu isn’t Zing, and yet it is a hidden gem. No more stupid prompts from Oracle. No more being associated with the company that forces you to install the Ask toolbar and other spyware.

The download page is easy to find on Azul’s site.

It is the best (only?) way to get openjdk onto your OSX mac.

Better still is the package install on Ubuntu.

sudo apt-key adv --keyserver hkp:// --recv-keys 0x219BD9C9
sudo apt-add-repository "deb stable main"
sudo apt-get update 
sudo apt-get install zulu-8

Zulu includes something called the CCK, whose description says:

The Commercial Compatibility Kit (CCK) for Zulu contains additional functionality that is not included in the OpenJDK source, but which will help ensure compatibility in applications that take advantage of specific additional features that Oracle bundles into HotSpot.

curl -O
sudo dpkg -i zcck8-

Do better Java on Ubuntu.

golang goals

When discussing the Go programming language, I find it useful to always reference the goals of the language. Discussion tends to devolve into a comparison of features of other programming languages which Go lacks. Without the context of these goals, the discussion ceases being useful.

Stolen from a Google tech talk that Rob Pike did back in 2009:


  • The efficiency of a statically-typed compiled language with the ease of programming of a dynamic language.
  • Safety: type-safe and memory-safe.
  • Good support for concurrency and communication.
  • Efficient, latency-free garbage collection.
  • High-speed compilation.

Watch the Go at Google video or read the article and you will get the impression that these goals are NOT listed in order of importance. I suggest that the last item, high-speed compilation, trumps all the others.

There is a short, 1:15 video demonstrating the speed of the go compiler.

The first comment, posted over 2 years ago at Lambda the Ultimate about that Go at Google video, sums it up even better. It is a snapshot of another slide. This time, instead of Go goals, the slide lists “What makes large-scale development hard with C++ or Java (at least)”:

  • slow builds
  • uncontrolled dependencies
  • each programmer using a different subset of the language
  • poor program understanding (documentation, etc.)
  • duplication of effort
  • cost of updates
  • version skew
  • difficulty of automation (auto rewriters etc.): tooling
  • cross-language builds

* Language features don’t usually address these.


It took me quite a while to learn to keep the above things in mind when thinking about Go. In fact, I still tend to compare Go to my favorite programming languages, probably because I often forget some of those drawbacks of C++/Java (read: C# for me).

I’ll try to remember. I beg others to try to remember too.

Testing Out Apache All By Yourself

By all by yourself, I mean without root.

This is on my Mac running OSX 10.10.

  1. Get yourself an httpd.conf – cp /private/etc/apache2/httpd.conf .
  2. Edit it to use a port >1024 and to run as your user – Listen 8081 & User jrwren & Group staff
  3. Log to a place you can write – ErrorLog /home/jrwren/errorlog & CustomLog /home/jrwren/access_log combined
  4. Use a different pidfile – PidFile /home/jrwren/ Do this after the Include /private/etc/apache2/extra/httpd-mpm.conf
  5. Set the accept mutex – Mutex file:/home/jrwren
  6. Edit whatever else you want – ProxyPass / http://localhost:8080/ & SetOutputFilter DEFLATE to see that Apache proxy does gzip for you
  7. Start httpd – httpd -d . -f httpd.conf -X (a consolidated sketch of these edits follows the list)
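Pulled together, the overrides look something like the block below (a sketch: the PidFile name is my guess since the original path is truncated, and it assumes the proxy and deflate LoadModule lines in the stock httpd.conf are uncommented):

Listen 8081
User jrwren
Group staff
ErrorLog /home/jrwren/errorlog
CustomLog /home/jrwren/access_log combined
# place after the Include of extra/httpd-mpm.conf; the filename is hypothetical
PidFile /home/jrwren/
Mutex file:/home/jrwren
# reverse proxy to a local app; DEFLATE shows Apache gzipping for you
ProxyPass / http://localhost:8080/
SetOutputFilter DEFLATE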
