jrwren – Jay R. Wren – lazy dawg evarlast
babblings of a computer loving fool
http://jrwren.wrenfam.com/blog

Using the haproxy charm
Wed, 15 Feb 2017
http://jrwren.wrenfam.com/blog/2017/02/14/using-the-haproxy-charm/

The haproxy charm in the charmstore (https://jujucharms.com/haproxy/) is deceptively powerful. I recently had a use case which I thought it would not handle. It turns out, it does.

The details are all in the services config value. In my case, I am replacing the apache charm https://jujucharms.com/apache2/ using balancer and reverseproxy relations.

The apache charm has vhost_https_template and vhost_http_template options which get pasted in as apache httpd config. The haproxy charm’s services config value takes service_options as yaml, which works much the same way.

In my case, port 80 redirects to 443, so I start with this in yaml:

- service_name: haproxy_service
  service_host: "0.0.0.0"
  service_port: 80
  server_options: maxconn 100 cookie S{i} check
  service_options:
      - 'redirect scheme https code 301 if !{ ssl_fc }'

You’ll notice this is almost identical to the default value for the services config:

- service_name: haproxy_service 
  service_host: "0.0.0.0" 
  service_port: 80 
  service_options: [balance leastconn, cookie SRVNAME insert] 
  server_options: maxconn 100 cookie S{i} check

I only changed the service_options entry, replacing the default with the option that performs the redirect in a frontend config.

This is where the magic of this haproxy charm happens. The charm knows which config values belong in a frontend haproxy section and which belong in a backend section, and it automatically puts each value in the right place.

The next thing which wasn’t obvious to me from reading the haproxy charm readme is that a Juju application related via the reverseproxy relation becomes a backend section, and its values are merged in from the services config.

e.g.

$ juju add-relation my-app haproxy:reverseproxy
$ juju add-relation kibana haproxy:reverseproxy

I can use defaults, or I can make some tweaks to the my-app and kibana applications.

For my use case, I was using this apache httpd config:

    RewriteRule ^/?KIBANA/(.*)$ balancer://kibana/$1 [P,L]

The equivalent in haproxy config looks like this:

    acl path_kibana path -m beg  /KIBANA/
    use_backend kibana if path_kibana

and in the kibana backend:

   reqirep  ^([^\ :]*)\ /KIBANA/(.*)     \1\ /\2

The haproxy charm allows all of this to be configured using the services config. The related application’s name is automatically used as a service name, and your config must match it via the service_name key.

I use juju2’s juju config command to set the config directly from a yaml file. If you are using juju1 you’ll need to use the juju set command instead.

$ juju config haproxy services=@haproxy-config-services.yaml

The haproxy-config-services.yaml file looks like this:

- service_name: my-app
  service_host: "0.0.0.0"
  service_port: 443
  crts: [DEFAULT]
  service_options:
      - balance leastconn
      - reqadd X-Forwarded-Proto:\ https
      - acl path_kibana path -m beg  /KIBANA/
      - use_backend kibana if path_kibana
  server_options: maxconn 100 cookie S{i} check
- service_name: kibana
  service_options:
      - balance leastconn
      - reqirep  ^([^\ :]*)\ /KIBANA/(.*)     \1\ /\2
      - rspirep ^Location:\ https?://[^/]+/(.*) Location:\ /KIBANA/\1
      - rspirep ^(Set-Cookie:.*)\ Path=(.*) \1\ Path=/KIBANA/\2
  server_options: maxconn 100 cookie S{i} check

The implication here is that there is another application, “my-app”, which is also related to haproxy. This config tells haproxy to use my-app as the default application, but if the url starts with /KIBANA/, to use the kibana backend instead of the “my-app” backend. For completeness, I am including the equivalent of apache’s ProxyPassReverse and ProxyPassReverseCookiePath. These are the rspirep… Location and rspirep… Set-Cookie lines in the config, respectively.
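
If you want to see what the charm actually rendered, you can inspect the generated config on the haproxy unit and have haproxy syntax check it. This assumes the stock config path; adjust the unit name to match your deployment:

$ juju ssh haproxy/0 'sudo cat /etc/haproxy/haproxy.cfg'
$ juju ssh haproxy/0 'sudo haproxy -c -f /etc/haproxy/haproxy.cfg'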

cross compiling go and go install v. go build and caching the results
Thu, 26 Jan 2017
http://jrwren.wrenfam.com/blog/2017/01/26/cross-compiling-go-and-go-install-v-go-build-and-caching-the-results/

I am following Dave Cheney’s advice from here: https://dave.cheney.net/2015/08/22/cross-compilation-with-go-1-5

I am on OSX using go installed from homebrew, so the writability of GOROOT in /usr/local/Cellar… is not an issue as stated in Dave’s post.

How can I reap the benefit of cached package builds when cross compiling?

`go install` uses the cache and places the resulting binary in $GOPATH/bin/$GOOS_$GOARCH/ instead of in $GOPATH/bin/
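
For example, cross compiling a package for linux/amd64 from a darwin host looks like this; the import path is just a placeholder for whatever you are building:

$ GOOS=linux GOARCH=amd64 go install github.com/jrwren/someproject
$ ls $GOPATH/bin/linux_amd64/   # the linux binary lands here, not in $GOPATH/bin/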

Of course, now that I’m writing this as a blog post, for myself, I see this is already mostly documented in the link from Dave’s post to medium: https://medium.com/@rakyll/go-1-5-cross-compilation-488092ba44ec#.6ue7ljf7v including a nice command to cross compile the std library in the system cache.

You can populate the stdlib cross compile pkg cache in GOROOT by running this command, changing env vars for each platform you wish to target:

GOOS=darwin GOARCH=amd64 sudo -E go install std

Now your cross compiles do not have to recompile the standard library packages.

Continuous Delivery via Unattended Upgrades
Tue, 17 Jan 2017
http://jrwren.wrenfam.com/blog/2017/01/17/continuous-delivery-via-unattended-upgrades/

I’ll be the first to admit that this is pretty slow for continuous delivery, as the default configuration for unattended upgrades is daily. Adjust the cron configuration at your discretion.

Givens:

  • A CI system which builds apt source packages and dputs them to a PPA.
  • Machine instances configured with that PPA and with unattended-upgrades installed.

The unattended-upgrades package, by default, only installs security updates. We can configure it to install updates to packages in our PPA by adding the correct package origin to the config. We get the package origin from apt-cache policy.

e.g.

$ apt-cache policy

500 http://ppa.launchpad.net/evarlast/experimental/ubuntu/ trusty/main amd64 Packages
release v=14.04,o=LP-PPA-evarlast-experimental,a=trusty,n=trusty,l=experimental,c=main
origin ppa.launchpad.net

Extract that LP-PPA-evarlast-experimental from that output and add it to a new section in /etc/apt/apt.conf.d/50unattended-upgrades. If you want, use `cat >> /etc/apt/apt.conf.d/50unattended-upgrades`

Unattended-Upgrade::Origins-Pattern {
        "origin=LP-PPA-evarlast-experimental";
};

Now when unattended-upgrades runs, packages from that PPA are considered important enough that they will be installed.
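
To verify that the origin pattern matches without waiting for the daily run, you can invoke it by hand in dry-run mode and read the debug output:

$ sudo unattended-upgrade --dry-run --debug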

The details for the configuration are in the README here: https://github.com/mvo5/unattended-upgrades

A Ceph+radosgw for Storage for Dev/Test/QA in a Few Minutes
Mon, 21 Nov 2016
http://jrwren.wrenfam.com/blog/2016/11/21/a-cephradosgw-for-storage-for-devtestqa-in-a-few-minutes/

While adding Swift (object storage) support to a project, I realized I really wanted to also have Ceph support via radosgw, another object storage system.

It turns out that deploying Ceph is not trivial or easy. However, Juju helps manage the complexity of Big Software.

There are a couple of tricks to running Ceph on LXD.

  1. Specify a full path which does not start with /dev for the osd-devices option.
  2. use-direct-io: false on ZFS.

You can add these options to a bundle and deploy with a single command using a single bundle file like this:

$ cat > ceph-bundle.yaml
services:
  ceph-mon:
    charm: cs:~openstack-charmers-next/xenial/ceph-mon
    num_units: 3
    to:
    - '1'
    - '2'
    - '3'
  ceph-osd:
    charm: cs:~openstack-charmers-next/xenial/ceph-osd
    num_units: 3
    options:
      osd-devices: /srv/ceph-osd
      osd-reformat: 'yes'
      use-direct-io: false
    to:
    - '1'
    - '2'
    - '3'
  ceph-radosgw:
    charm: cs:~openstack-charmers-next/xenial/ceph-radosgw
    num_units: 1
    options:
      use-embedded-webserver: true
    to:
    - 1
relations:
- - ceph-osd:mon
  - ceph-mon:osd
- - ceph-radosgw:mon
  - ceph-mon:radosgw
machines:
  '1':
    constraints: arch=amd64
    series: xenial
  '2':
    constraints: arch=amd64
    series: xenial
  '3':
    constraints: arch=amd64
    series: xenial
^D
$ juju deploy ceph-bundle.yaml

Wait a while. Watch juju status. Then, see if radosgw is up and try s3cmd
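
One way to watch it converge (juju status takes a --color flag, which plays nicely with watch):

$ watch -c juju status --color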

$ curl 10.0.5.117
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
$ juju ssh ceph-radosgw/0 'sudo radosgw-admin user create --uid="ubuntu" --display-name="Ubuntu Ceph"'
{
 "user_id": "ubuntu",
 "display_name": "Ubuntu Ceph",
 "email": "",
 "suspended": 0,
 "max_buckets": 1000,
 "auid": 0,
 "subusers": [],
 "keys": [
 {
 "user": "ubuntu",
 "access_key": "O5W6PMIQZ83ODYCGVIGJ",
 "secret_key": "6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA"
 }
 ],
 "swift_keys": [],
 "caps": [],
 "op_mask": "read, write, delete",
 "default_placement": "",
 "placement_tags": [],
 "bucket_quota": {
 "enabled": false,
 "max_size_kb": -1,
 "max_objects": -1
 },
 "user_quota": {
 "enabled": false,
 "max_size_kb": -1,
 "max_objects": -1
 },
 "temp_url_keys": []
}
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 mb s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 put ceph.yaml s3://testb/ceph.yaml
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls
2016-11-16 20:23  s3://testb
$ s3cmd --host=10.0.5.117 --host-bucket=10.0.5.117 --access_key=O5W6PMIQZ83ODYCGVIGJ --secret_key=6aqf5vyRONOvGFJkvH65xW7ttxZIKNZx0c2cPMTA --signature-v2 ls s3://testb
2016-11-16 20:24 2581 s3://testb/ceph.yaml

Dev and Test as you wish.
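
When you are done experimenting, tearing the whole thing down is a single command. The model name here is only an example; use whichever model you deployed the bundle into:

$ juju destroy-model ceph-test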

It should go without saying, but I will write it anyway: do not use Ceph on LXD like this in production. Ceph must be scaled out to different machine nodes.

nodejs 7 on ubuntu
Wed, 26 Oct 2016
http://jrwren.wrenfam.com/blog/2016/10/26/nodejs-7-on-ubuntu/

nodejs 7 was released and nodesource does an excellent job of creating packages of nodejs for use on many operating systems.

I refuse to curl $URL and pipe the results to bash. It scares me (maybe illogically) to trust a script on the internet with access to my local shell.

The commands without the curl pipe to shell are almost as short and run faster*. It is super easy to get nodejs 7.x installed on your ubuntu xenial or yakkety system.

curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
sudo add-apt-repository -u "deb https://deb.nodesource.com/node_7.x $(lsb_release -c -s) main"
sudo apt install nodejs
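
A quick sanity check that the 7.x packages are the ones that landed:

$ node -v    # expect a v7.x.y version string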

* The apt update command is intentionally skipped. The -u option to add-apt-repository optimizes it to only merge available packages with the newly added repository. This is a bit faster, or a lot faster on older machines or slow cloud instances.

A Swift for Storage for Dev/Test/QA in 2 Minutes
Fri, 07 Oct 2016
http://jrwren.wrenfam.com/blog/2016/10/07/a-swift-for-storage-for-devtestqa-in-2-minutes/

I was adding swift support to a project and it became apparent that things would be easier if I had a local swift to which I could connect.

I needed to familiarize myself with the API and its behavior and doing this locally rather than over a VPN on some production system with a QA tenant would make things a lot easier.

I could have used Devstack. Devstack is great if you are developing OpenStack; that is what it is made for. It uses the OpenStack source. That seemed like overkill to me.

What I ended up with is a cloud-config file which I pass to cloud-init on system start. I use LXD to start a container. Less than 2 minutes later, on my 10 year old home server, I have swift up and running and responding to my commands.

$ lxc launch -e ubuntu:16.04 $(petname) -c user.user-data="$(cat swift.yaml)"
Creating testy-Abril
Starting testy-Abril
$ lxc list
+-----------------+---------+-----------------------+-+-----------+---+
| testy-Abril     | RUNNING | 10.0.5.169 (eth0)     | | EPHEMERAL | 0 |
+-----------------+---------+-----------------------+-+-----------+---+
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 upload t README.md
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list
t
$ swift --user admin:admin --key admin -A http://10.0.5.169:8080/auth/v1.0 list t
README.md

No need to ssh into the container at all. Just start it, wait a bit for things to install, and a swift API is up and running.
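
If you are going to poke at the API much, the swift client also reads its connection settings from environment variables, which saves retyping the flags on every command:

$ export ST_AUTH=http://10.0.5.169:8080/auth/v1.0
$ export ST_USER=admin:admin
$ export ST_KEY=admin
$ swift list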

The swift.yaml file is here on github and the only change you should make is either remove the last line or change it to import your key so you can ssh to it.

Scaling Apache httpd as a ReverseProxy
Tue, 27 Sep 2016
http://jrwren.wrenfam.com/blog/2016/09/27/scaling-apache-httpd-as-a-reverseproxy/

We recently had the need to make sure our front end apache httpd reverse proxy and ssl termination server could handle the larger number of websocket connections we are going to use with it. Given websockets are longer lived connections, this is a different use of apache httpd and we want to get it right. The proxied service is capable of handling tens of thousands of concurrent connections, if not hundreds of thousands or more.

First, our testing tool is custom made: it makes all the websocket connections first and then proceeds to ping. This is important as it exercises the concurrent connection capabilities of httpd. When using it, the client system needs the ability to create enough sockets. The first limit I encountered was with my test client system. The shell environment defaults to a limit of 1024 open files. It is a soft limit, so use ulimit -S to adjust it. Even ab will show an error of "socket: Too many open files (24)" if you use the -n 1050 and -c 1050 options.

$ ulimit -n
1024
$ ulimit -Hn
65536
$ ulimit -Sn 65536
$ ulimit -n
65536

Now, your testing tool can create more than 1024 connections. The next limit I ran into was that of connections on the httpd server. Even mpm_event uses a thread per request (do not let the event name fool you). The default ubuntu apache2 mpm_event configuration allows for 150 concurrent connections:

 StartServers 2
 MinSpareThreads 25
 MaxSpareThreads 75
 ThreadLimit 64
 ThreadsPerChild 25
 MaxRequestWorkers 150
 MaxConnectionsPerChild 0

A tool like ab won’t halt at 150. A tool named slowhttptest is in xenial/universe. Run apt install slowhttptest to install it. It is a flexible tool and has a great man page and -h help output.

$ slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u http://system.under.test.example.com/ -x 24 -p 3

slowhttptest version 1.6
– https://code.google.com/p/slowhttptest/ –
test type: SLOW HEADERS
number of connections: 1000
URL: http://system.under.test.example.com/
verb: GET
Content-Length header value: 4096
follow up data max size: 52
interval between follow up data: 10 seconds
connections per seconds: 200
probe connection timeout: 3 seconds
test duration: 240 seconds
using proxy: no proxy

Tue Sep 27 14:33:03 2016:
slow HTTP test status on 5th second:

initializing: 0
pending: 284
connected: 667
error: 0
closed: 0
service available: YES

This screen will update as connections are created until service available changes from YES to NO.

In my tests the closed: value was exactly 150. I can view the my_header_stats.csv file to see when the max was reached.

Next, let’s adjust Apache httpd to allow for more concurrent connections. My target is 15,000 connections, so I’ll increase the numbers linearly: 2 processes (StartServers) with 75 threads each (ThreadsPerChild) gave 150 connections, so 20 processes with 750 threads each should give 15,000 connections.

Edit mpm_event.conf: ($ sudo vi /etc/apache2/mods-enabled/mpm_event.conf)

<IfModule mpm_event_module>
 StartServers 10
 MinSpareThreads 25
 MaxSpareThreads 750
 ThreadLimit 1000
 ThreadsPerChild 750
# MaxRequestWorkers aka MaxClients => ServerLimit *ThreadsPerChild
 MaxRequestWorkers 15000
 MaxConnectionsPerChild 0
 ServerLimit 20
 ThreadStackSize 524288
</IfModule>

Restart apache2 httpd (a full restart, not graceful – the ThreadsPerChild change requires this) and retry the slowhttptest. Notice service available is always YES.
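
On ubuntu that full restart is simply:

$ sudo service apache2 restart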

Now turn up the slowhttptest numbers. Change the -c parameter to 15000 and the -r to 1500. It should take 10sec to ramp up the connections. In my use case I could not create that many connections so quickly. slowhttptest was maxing out a CPU core.

All of the above apache httpd config was done using the mpm_event processing module. The next issue I ran into was a case of mpm_worker not behaving as I expected. I have a doubly proxied system, because this is super real world where we route http things all over the place, sometimes in ways we shouldn’t but because we are lazy, or it is easier or… anyway…

In ubuntu/trusty with apache httpd 2.4.7, mpm_worker has a limit of 64 ThreadsPerChild even if you configure it with a larger number. There is no warning. You’d never know unless you take a look at the number of threads in each worker process: $ ps -u www-data -o pid,ppid,nlwp. The fix is to switch from mpm_worker to mpm_event.

$ sudo a2dismod mpm_worker
$ sudo a2enmod mpm_event
$ sudo service apache2 restart
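
To confirm the larger thread counts actually took effect, a rough check is to sum the threads across all apache2 worker processes:

$ ps -u www-data -o nlwp= | awk '{sum+=$1} END {print sum}'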

I thought that I’d need to do more, but this got me to where I needed to be.

Ubuntu Kiosk
Fri, 05 Aug 2016
http://jrwren.wrenfam.com/blog/2016/08/04/ubuntu-kiosk/

This post is a work in progress. I’ll update it as I tweak the solution.

Last Wednesday I was helping a friend build a Kiosk. We tried to follow https://thepcspy.com/read/building-a-kiosk-computer-ubuntu-1404-chrome/ but it didn’t work. It turns out between using the wrong version of ubuntu (16.04 instead of 14.04) and doing it in a virtual machine, we were all messed up.

There has to be a better way.

There is a secret to debian/ubuntu packages. If you aren’t trying to get them included in debian/ubuntu, you can break most of the rules and get them to do whatever you want. I figured I should be able to use this and make creating a kiosk as easy as apt install kiosk

TL;DR: you can try this by running these two commands on a new ubuntu-server installation:

add-apt-repository ppa:evarlast/kiosk
apt install --no-install-recommends kioskme

The rest of the post describes how I did this.

First, I’m going to create a new PPA on launchpad just for this, so that a user can `add-apt-repository ppa:evarlast/kiosk`

I visit https://launchpad.net/~/+activate-ppa and fill in the fields with kiosk and click activate.

Next, I start a new deb. I may as well build it from source. There might be a better way, but I’ve gotten to know dh (debhelper) a bit, so I’m going to use it.

$ mkdir kioskme ; cd kioskme
$ cat > Makefile
build:
<tab>echo noop
install:
<tab>install -d 755 ${DESTDIR}/usr/bin
<tab>install -m 755 kioskme ${DESTDIR}/usr/bin/kioskme
^D
$ cat > kioskme
#!/bin/bash
xset -dpms
xset s off 
openbox-session & 
start-pulseaudio-x11 
while true; do 
  rm -rf ~/.{config,cache}/chromium/ 
  chromium-browser --kiosk --no-first-run 'http://duckduckgo.com' 
done
^D

Now debianize this script directory using dh_make:

dh_make -p kioskme_0.0.0 --createorig -s

Now customize the deb with a service, preinst for user creation and some dependencies:

$ cat > debian/service
[Unit]
Description=kioskme

[Service]
Type=simple
Restart=on-failure
User=kioskme
Group=kioskme
ExecStart=/usr/bin/startx /etc/X11/Xsession /usr/bin/kioskme
^D
$ cat > debian/preinst
#!/bin/sh

set -e

. /usr/share/debconf/confmodule

case "$1" in
    install|upgrade)
        if ! getent group kioskme >/dev/null; then
            addgroup --system kioskme >/dev/null
        fi
        if ! getent passwd kioskme >/dev/null; then
            adduser \
                --system \
                --disabled-login \
                --ingroup kioskme \
                --gecos kioskme \
                --shell /bin/false \
                kioskme >/dev/null
        fi
        mkdir -p /var/log/kioskme
        chown kioskme:kioskme /var/log/kioskme
        setfacl -m u:kioskme:rw /dev/tty0 /dev/tty7
        ;;

    abort-upgrade)
        ;;

    *)
        echo "preinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.

#DEBHELPER#

exit 0

Alright, maybe that preinst is a bit big. I copy it around and fill it out like a template for services I put into debs.

Now edit the debian/control file to add dependencies, change the section to utils, fill in whatever else you want, set Depends to look like this:

Depends: ${shlibs:Depends}, ${misc:Depends}, xorg, openbox, chromium-browser, pulseaudio

Now create the deb:

fakeroot debian/rules clean build binary

To test the deb, I copy it to a fresh ubuntu server install and dpkg -i to install it. I get a bunch of errors because dpkg -i doesn’t resolve dependencies, but I run apt install -f and the dependencies are installed.
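
Assuming debhelper picked up debian/service and wired it into systemd (that is what I am relying on here), you can check that the kiosk session actually started:

$ systemctl status kioskme.service
$ journalctl -u kioskme.service -b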

Once I tested and tweaked and got things working, I updated the tarball with `tar -Jcf ../kioskme_0.0.0-0.orig.tar.xz -C .. --exclude='debian' kioskme`, used dpkg-buildpackage -S to build a source package, and then used dput ppa:evarlast/kiosk ../kioskme_0.0.0-1_source.changes to upload to the PPA.

Now, this still does not work in a VM. Ubuntu desktop installer must do some magic to make X work in a virtual machine with a driver which works with VMWare, VirtualBox, or Parallels.

Some LXD containers on a hidden net, others on your lan
Mon, 01 Aug 2016
http://jrwren.wrenfam.com/blog/2016/08/01/some-lxd-containers-on-a-hidden-net-others-on-your-lan/

Back in November I wrote about Converting eth0 to br0 and getting all your LXC or LXD onto your LAN.

It works, but you might not want ALL of your LXD on your LAN.

You’ll still need your LAN interface to be a br0 rather than a plain, non-bridged device. Go follow the Bridge your interface section of that post to convert your eth0 to br0.

I’ve fully converted to using LXD. I don’t even remember if LXC supports profiles. I think it does, so I think the same idea could be applied to LXC, but I’m only showing this for LXD.

First, copy the default profile:

lxc profile copy default lanbridge

Second, edit the new profile to use br0 instead of lxdbr0:

lxc profile device set lanbridge eth0 parent br0
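
You can sanity-check that edit before launching anything; the eth0 device in the profile should now list br0 as its parent:

$ lxc profile show lanbridge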

Third and finally, start instances with that profile:

lxc launch ubuntu-xenial -p lanbridge

In my case, this instance is on my local lan AND on public ipv6 space (thanks Comcast).

heritable-gale    | RUNNING | 192.168.15.172 (eth0) | 2601:400:8000:5ab3:216:3eff:fe73:d242 (eth0)

Cloud-config with LXD
Fri, 29 Jul 2016
http://jrwren.wrenfam.com/blog/2016/07/29/1246/

A year ago I wrote http://jrwren.wrenfam.com/blog/2015/05/26/ubuntu-cloud-image-based-containers-with-lxc/

Since then, LXD became the best way to use LXC.

By default, LXD already uses ubuntu-cloudimg images.

The lesser known feature is using cloud-config with LXD. It turns out it is very easy to pass user-data to an LXD instance when you start it, just like you would on any cloud provider.

LXD even has the -e option to make your LXD instance ephemeral. It will be deleted automatically when you stop it.

Just like in that previous blog post, I create a file named one.yaml. The name can be anything. Then I start it:

lxc launch ubuntu:14.04 crisp-Hadley -c user.user-data="$(cat one.yaml)"

That is all there is to it.

Here is an example of config similar to what I used recently to QA a build configuration:

#cloud-config
output:
 all: "|tee -a /tmp/cloud.out"
#hostname: {{ hostname }}
bootcmd:
 - rm -f /etc/dpkg/dpkg.cfg.d/multiarch
apt_sources:
 - source: ppa:yellow/ppa
ssh_import_id: [evarlast] # use -S option
packages:
 - make
final_message: "The system is finally up, after $UPTIME seconds"
runcmd:
 - cd /home/ubuntu
 - git clone https://www.github.com/jrwren/myproject
 - cd myproject
 - make deps run
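
Because the cloud-config above tees all output to /tmp/cloud.out, you can watch the user-data run from outside the container:

$ lxc exec crisp-Hadley -- tail -f /tmp/cloud.out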