Version from debian/changelog

Almost two years ago I did some scripting to update debian/changelog and build a package, in order to enable a CI environment for some software. I wanted to parse the changelog correctly, so I copied and modified some Perl from the source of dpkg-buildpackage. This turned out to be the wrong solution.

There is a nice tool called dpkg-parsechangelog. You can get just the version for use in scripts with this simple awk:

dpkg-parsechangelog | awk '/Version/ { print $2 }'
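
Newer dpkg can even do it without the awk, using the --show-field option (added in dpkg 1.17, if I remember right):

dpkg-parsechangelog --show-field Version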

I didn’t even think to write about it until I ran across someone else who’d written some perl to do exactly the same thing. Dear world, we need to stop reinventing this wheel.

Ubuntu Xenial 16.04 Has All The Good Stuff

A couple of days ago, Ubuntu Xenial was released. There is a press release with some good stuff in it.

I’ve been looking forward to this release for the following reasons:

  • Postgresql 9.5
  • systemd
  • haproxy 1.6.3
  • uwsgi 2.0.12
  • nginx 1.9.15

I know, it doesn’t look that exciting until you recall that the last LTS release of Ubuntu, Trusty, 14.04, was missing fabulous features OOTB in each of these components.

Postgresql 9.3 did not have the awesome JSONB improvements of 9.4 and 9.5.

haproxy 1.4 didn’t have ssl support.

uwsgi… well, the latest uwsgi is just always great to have.

nginx 1.9.15 has http2 support, out of the box!

Finally, while I loved upstart, systemd is nice and has been rock solid.

This is the greatest Ubuntu ever. I’ve not even mentioned how awesome lxd is on it. That is covered elsewhere. This is just my personal little list. Thanks Ubuntu.

MacGyvering Windows 8.1 Remote Assistance

My Mother called me up rather frazzled this evening.

This isn’t too surprising. Since her stroke 16 years ago she can sometimes become confused or forget simple things, things she once knew.

Tonight, the cause of her frazzled state was her computer.

After listening to her rant and ramble about her computer, I quickly realized that she had hit a phishing pop-up in her web browser telling her she had a virus. Partly because of who she is, and partly because of the brain damage from her stroke, she called the phone number that the pop-up displayed. When they told her they could fix it for $199, and that Best Buy would charge her $350-$400 if she took it there, it only fueled her worry.

After some calming, I finally had her start the Windows Remote Assistance application, but unfortunately she has forgotten what saving files actually means, and she has no email configured. So she was unable to save the remote assist file, and she couldn’t use Windows Remote Assistance to automatically email the request to me. It was at this point that I suggested she mail the laptop to me. I also may have said, “never again!” about agreeing to support a laptop that someone else had gifted her.

But, I couldn’t let it go. This was a challenge and I love a challenge.

I searched around a bit and tried my hand at the msra.exe command line. After a bit of trial and error, I realized I could have her open PowerShell and type

msra /saveasfile helpme 12345678

Yes, I’m ok with the 12345678 password in this case. Trying some other password over the phone and having her type it was error prone.

“Did you say bee?”

“No, I said pee, like Paul.”

“Bee like ball?”

“No…”

I still needed a way to get a file to me. I’ve had an aversion to PowerShell ever since it launched, despite tech reviewing a very fine PowerShell book, but I knew it was probably my best bet here. After a bit of poking I found the Invoke-WebRequest helper, thingy. I don’t know PowerShell terminology; it looks like a function to me.

I have my home server on the internet. It’s running Ubuntu Linux, and for years I’ve had four-line PHP upload scripts with HTML forms that let people send me files. Could I use this?

The shoelace was there. The paperclip was there. Did I also have some bubble gum?

All I really needed was an index.php in a /mom/ directory that looked like this:

<?php
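// take the raw HTTP request body and write it to err.out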
file_put_contents('err.out', file_get_contents('php://input'));
?>

Wow that is some trivial stuff. Bland bubble gum, I guess.

Why an index.php and a /mom/? Well, because that will be easy for me to relay over the telephone.

I did some testing and found invoke-webrequest works nicely coupled with this http request body dumping php.

invoke-webrequest -uri jrwren.xmtp/mom/ -infile .\helpme.msrcincident -method post

I was able to call my mom back, tell her to press Windows key+R, reminding her that the Windows key is usually between the Ctrl and Alt keys, and have her type powershell and press enter.

“Powershell, P-O-W-E-R-S-H-E-L-L, no spaces?”

“Yup”

On the first try, I had her use the password 1234, but msra.exe complained that it was too short. Working through this mistake, I tried to have her use the up arrow to edit the previously executed command line in powershell.

“What is the up arrow?”

This honestly dumbfounded me and I had absolutely no idea what to do for a minute or so.

“The up arrow on my keyboard is on the right. There is an inverted tee of arrows, left right up down to the left of my left control key.”

Whew, I got lucky and she found it.

Once msra.exe had created the helpme file, I had her type out the invoke-webrequest command, prompting her to press tab after typing helpme to autocomplete the file extension.

The multiline color output of running the command shocked and surprised her. It maybe even scared her a little bit, but as she was reading it aloud, I heard her say, “200 OK.”

“200 OK is great”, I said.

I checked my server and there was an err.out file alongside the index.php; they were the only two files in the /mom/ directory.

My home server always has Samba set up. I used Windows Explorer to navigate to H:\public_html\mom and renamed err.out to helpme.msrcincident. I double-clicked it.

Mom said, “Oh what is this? jrwren wants to share your computer.”

I rejoiced inside.

The hard part being done, I was able to connect to and control her computer. Microsoft has done a very nice job with Windows Remote Assistance ever since Windows 7. I’m impressed that my Windows 7 machine connected flawlessly to her Windows 8.1. I’m thankful that PowerShell is out of the box in all versions of Windows. I do not think I’d have been able to walk her through this over the phone in so few keystrokes without PowerShell.

To the evil con artists who extort money from poor little old disabled ladies who work two jobs: please stop.

Optimizing uwsgi for Many Many Threads and Processes

tl;dr: Consider optimizing uwsgi by setting `threads-stacksize = 64`, or some similarly small value, in your uwsgi config. Python apps which do not use many C modules do not use the C stack very much. A smaller stack size means each thread uses less memory, so you can safely have more of them servicing requests.

Long story:

Years ago I was deploying a new flask web service using uwsgi. I needed it to scale to thousands of connections. I read a blog post (I searched and cannot find it now) which suggested 10 processes with 10 threads each to be able to serve 100 concurrent connections. After testing and tuning this particular app, we settled on 10 processes and 100 threads per process. It ran well.

Recently, a production app, which I helped deploy, fell on its face. It was performing very poorly, seemingly out of nowhere. This app was originally deployed with the same 10 processes, 100 threads per process configuration which I had used so successfully in the past. The ops team had already reduced the process count to 4 due to excessive memory use of the application. This means the application was only able to service 400 concurrent connections.

I still cannot entirely explain why the app ran for many months and then suddenly had problems. I’m guessing it is because of recent announcements driving more traffic to the site. The 400 threads were actually being used instead of sitting idle waiting for connections.

In the process of trying to restore service, our ops team wisely used a tool which I was not likely to have used (huge thanks to them). The tool is pmap and it shows mapped memory for a given process. I noticed something interesting in the output of pmap:

00007fcf75061000   8192K rw---   [ anon ]
00007fcf75861000      4K -----   [ anon ]
00007fcf75862000   8192K rw---   [ anon ]
00007fcf76062000      4K -----   [ anon ]

This block was repeated at the same memory increment 50 times, two stacks per block, for a total of 100. It occurred to me that the default stack size of a thread on Linux is 8MB and that these memory maps were the per-thread stacks (the 4K no-permission mappings are guard pages). I was able to confirm this suspicion by running the app myself and adjusting the size by configuring uwsgi with --threads-stacksize.

I started by moving to 1MB, which I know is the default Windows thread stack size, guessing it would still be plenty. Then I started to play limbo and see how low I could go. I started to get pretty happy when I broke the 256KB mark and our app was still functioning. Our app has the luxury of not having any deep calls. I might have been able to go lower, but once I got to 64KB, I didn’t see the point. Every order of magnitude decrease was a smaller and smaller improvement.

Moving from 8MB to 1MB took memory usage from 3.2GB to 400MB. Every halving of stack size halved overall memory usage of the thread stacks by this app. First 512KB/thread for 200MB, then 256KB/thread for 100MB, then 128KB/thread for 50MB, then 64KB/thread for 25MB. At this point, everything about the app was running exactly the same, the only difference being that I wasn’t wasting 3.2GB of memory in unused thread stacks.
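
For reference, here is roughly what we ended up with, expressed as uwsgi ini settings. This is only a sketch: the module name is made up, and a threads-stacksize this small (uwsgi takes the value in KB) is only safe because our app makes no deep calls.

[uwsgi]
# hypothetical module:callable for your WSGI app
module = myapp:app
processes = 4
threads = 100
# per-thread stack in KB; the Linux default is the 8MB rlimit
threads-stacksize = 64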


I Welcome Parse Developers to Juju

Hello Parse developers,

I was curious how easy it would be to get the published parse-server-example to run with Juju. The end result is that there is a new juju charm named parse-server available.*

Deploying parse-server is as easy as running these commands in a bootstrapped juju environment. This means that it can run ANYWHERE.

juju deploy cs:~evarlast/trusty/parse-server-0
juju deploy mongodb
juju add-relation parse-server mongodb
juju expose parse-server

You’ll then be able to use the http api at port 1337.

For example:

curl -X POST -H "X-Parse-Application-Id: myAppId" -H "Content-Type: application/json" -d '{"whatever":"data"}' 10.0.3.247:1337/parse/functions/hello

If you wish to take a look at this charm, it’s in the charmed branch of my fork of the parse-server-example. I do not recommend using this charm as an example of writing a good production charm. This is an example of a quick and dirty hack of a charm which happens to work.

Part of what makes Juju awesome is the magic of application modeling. While my hack of a parse-server charm isn’t production ready, it is building on a very production-ready mongodb charm, which can be scaled out and made HA very easily. Charms are reusable open source ops. The mongodb ops have been captured in the mongodb charm. Any required parse-server ops need to be captured in a parse-server charm. The only ones captured so far are configuring the mongodb relation. While it’s a hack of a demo charm, it is a start.


----

* The real reason is that I have cloud envy and I saw the azure release at https://azure.microsoft.com/en-us/blog/azure-welcomes-parse-developers/ and I thought to myself, gee that is a lot of clicks, seems like there is a better way.

Converting eth0 to br0 and getting all your LXC or LXD onto your LAN

Wayne has a great post on the new juju lxd work. I’ve been using it a bit and it is awesome. It is super fast and I can create and destroy environments faster than creating and destroying with juju-local.

One thing which I’ve done which has made all LXC and LXD instances more valuable to me, in my home development environment, is to use a bridge to put them directly on my home LAN.

Normally, LXC creates its own device, lxcbr0, which is managed by the lxc-net service. The service creates the device, brings it up, and manages the dnsmasq instance tied to it (which provides DHCP for the 10.0.3.0/24 range).

Bridge your interface

Instead of using lxcbr0, I create a br0. I add my eth0 (and, in my case, other devices) to that br0. Then I configure LXC and LXD to use br0 instead of lxcbr0. I go as far as stopping the lxc-net service, since I’m not using it.

There is one trick if you are going to do this on a remote home system. For example, I have an old laptop I leave in the basement, and I’m really lazy and don’t want to walk down there and use its console when I screw up its networking. The trick is to make sure eth0 comes up on br0 when it’s added there.

Before you do anything, make sure bridge-utils is installed. It probably is if you are already using lxc, but if this is a fresh install, you’ll want to apt-get install bridge-utils

Edit your /etc/network/interfaces and disable eth0 by setting it to manual. Add it to br0 by adding a new br0 section and listing eth0 in bridge-ifaces and bridge-ports.

auto br0
iface br0 inet dhcp
    bridge-ifaces eth0
    bridge-ports eth0
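    # eth0 is configured manual below, so force it up when the bridge comes up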
    up ifconfig eth0 up

iface eth0 inet manual

Now run sudo ifup br0. At this point something magical happens: the DHCP lease is renewed, but this time the IP address is bound to br0. The magical part is that br0 used the eth0 MAC to make the DHCP request, so you get the same IP address in response and even your SSH session stays open. YAY!
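
You can verify that eth0 is attached to the bridge using brctl from the bridge-utils package:

brctl show br0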


LXC can use any bridge

Now configure LXC to use this bridge.

apt-get install lxc
sed -i 's/lxc.network.link = lxcbr0/lxc.network.link = br0/' /etc/lxc/default.conf

TADA, now any LXC containers you start with lxc-start will use br0 and get an address from your household DHCP server. They will be accessible from any host in your home.

Now what about LXD?

LXD can use any bridge

It turns out that while LXD is a layer on top of LXC, it doesn’t use /etc/lxc/default.conf for its default config, but instead uses its own settings. These are editable with lxc profile edit default. Change lxcbr0 to br0 in your editor, then save and exit. You can check that it is correct by using lxc profile show default.
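
For reference, the nic device in the default profile should look something like this after the edit (the exact fields can vary between LXD versions):

devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic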

There you have it. LXD instances starting on your local LAN.

Now go read Wayne’s post again and use the Juju LXD provider.

Comparative Risk Analysis in IT Systems

You can quote me on this: “comparative risk analysis is among the most cost effective security measure your org can make. Why lock back door when front is wide open?” https://twitter.com/JayRWren/status/662015840636784640

What do I mean by this?

I mean, there is little point in applying inconsistent security analysis across your system. The weakest link in a chain is the one that fails. When two links in the chain are identified as equally weak, nothing is gained by spending resources (time and money) to improve the strength of only one of them.

Let me get specific.

Let’s say you have some services in production which interact. Let’s call them Service A and Service B. You are introducing a new service, Service C. At the time of introduction, concerns are raised about some security aspects of C.

Let me be clear: these concerns are 100% valid. Let us say, for example, that Service C is consuming services of Service A using an overly privileged account rather than a least-privilege account. The correct solution is to introduce a lesser-privilege account capable of doing only the operations required by Service C.

From an “is it optimally secure” point of view for deploying Service C, that is all there is to it.

Rather than that point of view, let’s take an overall systems point of view. Service B is using the exact same overly privileged account to perform operations on Service A. Further, the sources of data which Service B consumes (a publicly exposed HTTPS server accepting GET, POST, PUT, etc.) are the same as or greater than Service C’s.

What is gained by going back and retooling Services A and C to use that lesser-privilege account? Well, security, of course. C is less vulnerable.

That is true. You’ve locked the back door while the front door is wide open.


How much did it cost? 80 person-hours, plus the (often difficult to tie to a dollar amount) cost of delaying the introduction of that much-needed Service C.

Was the risk of privilege escalation reduced?

I honestly don’t know.

—————–

  • DuckDuckGo search for comparative risk analysis yields some fun reads.
  • In health, it is like counting calories and eating very well while continuing to abuse illicit drugs. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC390121/
    “I totally would not eat that mcdonalds. It is so gross. Where is my lighter, I need another ciggy.”

Turbo Charging Ubuntu Server and Cloud Images

I do a bit of devops. I get frustrated with waiting even a few seconds longer than I feel like I should. Here is my list of tweaks to Ubuntu Server and Cloud Images.

First, things that make apt-get update fly, because in testing out packages, I find myself running this way too much.

# disable "translations" of Packages index (½ the downloads)
echo Acquire::Languages \"none\"\; | tee /etc/apt/apt.conf.d/02nolang
# disable source repos (½ the downloads)
sed -i -e 's/^deb-src/#deb-src/' /etc/apt/sources.list
# disable universe and multiverse by default
# If it is trusty, you'll need to
apt-get install software-properties-common
# so that you can...
add-apt-repository -r universe
add-apt-repository -r multiverse
# don't need i386 packages on amd64
rm -f /etc/dpkg/dpkg.cfg.d/multiarch
sudo dpkg --remove-architecture i386
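
You can measure the difference for yourself by timing the index refresh before and after these changes:

time apt-get update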

Next, things which shrink the overall system image. I know it doesn’t seem like much, but I don’t want to wait for updatedb to index unneeded files. I don’t want to wait for xapian on systems on which I never use it. I don’t want to wait for apt-get upgrade to upgrade any of these packages when I don’t use them.

# disable apt-xapian-index - not sure what uses it, I don't care.
apt-get purge -y apt-xapian-index
# don't need kernel headers, you won't compile on your server
apt-get purge -y linux-headers-generic linux-headers-virtual
# if you aren't using landscape-client you can save 7-10MB:
apt-get purge -y python-twisted-core
# you could go less than ubuntu-minimal by losing x11 stuffs
sudo apt-get purge -y libx11-data xkb-data
# we don't use interactive commands on our cloud servers - don't need command-not-found
apt-get purge -y command-not-found command-not-found-data
# default to --no-install-recommends
echo APT::Install-Recommends \"0\"\;  | sudo tee -a /etc/apt/apt.conf.d/03norecommends
# don't need ntfs support
sudo apt-get purge -y ntfs-3g
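
After all of that purging, it is worth letting apt sweep out any orphaned dependencies too:

sudo apt-get autoremove --purge -y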

There. Slimmer, tinier images with fewer updates to download and install.

Juju makes getting to http2 easier

I stumbled upon https://launchpad.net/~ondrej/+archive/ubuntu/apache2 when looking at http2 options for apache, and it seemed to me that it might be easy to add http2 support to any ubuntu system.

I started by forking the apache2 charm, because I just want to play around.

You can try it out:

juju bootstrap
juju deploy cs:~evarlast/trusty/apache2 --to 0
juju set apache2 ssl_cert=SELFSIGNED
juju set apache2 enable_modules=ssl
juju run --service apache2 'a2ensite default-ssl'
juju run --service apache2 'service apache2 reload'

Now run `juju status` to get the IP address of the node and try it out.

https://10.0.3.1/

Open your browser’s dev tools and confirm that http2 was used.
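
Alternatively, if your local curl is built with HTTP/2 support, you can check from the command line (-k skips verification of the self-signed cert):

curl -skI --http2 https://10.0.3.1/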

Ubuntu 15.10 brings faster add-apt-repository

Wily Werewolf was released yesterday, and with it come many new things out of the box.

My favorite feature is something that is silly, simple, a tiny patch, and speeds up something I do often.

I work in the cloud and that means I am deploying new machine images many times a day. Anything to speed this up is something that I want.

In Wily, add-apt-repository now has a -u switch.

-u, --update    Update package cache after adding

You’ll notice everywhere on the internet where add-apt-repository is used, the next line of instructions is `apt-get update`. This refreshes the package cache for ALL of the configured apt repositories. On a slow machine with slow IO or slow network, this can take more than just a few seconds, possibly a minute or two. This is too long to wait.

The -u option solves this problem. Not only does it remove the need to run `apt-get update` by doing it automatically, it also fetches only the package cache for the newly added repository, saving much time.

So anywhere on ask.ubuntu.com or wiki.ubuntu.com where you see:

sudo add-apt-repository FOO
sudo apt-get update
sudo apt-get install BAR

add the -u and remove the update command:

sudo add-apt-repository -u FOO
sudo apt-get install BAR

Revel in the time you save.