Hack My Derby

In 2011, Dave Kennedy and his crew started a little conference called DerbyCon.  Named for its location in Louisville, KY, it's an awesome infosec con with a great atmosphere.  Because of the name, derby hats have been a staple: much of the staff wear them, and so do many attendees.  Many of those hats are modified with lights, displays, whatever.  That eventually led to a contest: in 2014, attendees were able to enter their modified derbies in the "Hack Your Derby" contest.

I've been attending DerbyCon from the start.  I've only missed one year, and it's probably the one conference I intend to do my best to never miss again.  When we arrived at DerbyCon 4.0 in 2014 and found out about the Hack Your Derby contest, we immediately wished we'd known sooner.  See, we like to build things, and this was right up our alley.  After seeing the derbies that others brought to the con, the gears in my head immediately started turning, and I hatched an idea: build a self-contained mobile CTF, put it inside a derby, bring it to DerbyCon 5.0, call it "Hack My Derby", and enter it in the Hack Your Derby contest.

Well, it turns out that the Hack Your Derby contest didn't happen at 5.0, but we built the derbies anyway.  This post is about how it was done, and how it went, walking around with a CTF on our heads for an entire security conference.

The idea

As these things do, the concept of how this might get pulled off went from a passing thought, through several iterations, and into a more realistic plan over the course of a few months after DerbyCon 4.0.  The original concept was simple: I wanted a compute platform inside a derby, with enough runtime to get through a day at the con.  I wanted that platform to be open to attack, with an external leader board/scoreboard on the front of the hat.  Originally I thought of just a single flag, some app running on an Arduino or the like, and a top-down (most recent -> least recent) list on the display.  That quickly morphed into what we ended up building and bringing to the con.

The reality

We ended up settling on a general-purpose system, in this case a Raspberry Pi, and instead of a leader board on the hat itself, a web-based external scoreboard.  The display on the hat was scaled back, used as part of the CTF and as a way to advertise what the heck the thing was.  This gave us a platform running an OS that I, and participants, were familiar with.  The Raspberry Pi has a general-purpose IO (GPIO) interface, which was able to drive our display.  It's powered over micro-USB, which just about any cell-phone charging battery can supply, and it has USB peripheral support, so expandability shouldn't be a problem.  We just needed to be sure the devices were compatible with Raspbian.

The Build

Now, on to the good part: the build.  I had an original-model Raspberry Pi B sitting around, so I got to work tinkering with that until I had some familiarity with Raspbian and what it could do.  I quickly ran into a wall and needed to start buying parts.  I went to my boss, explained the project and my motivations, and asked if the College might be willing to fund it.  We talked some numbers, and he agreed to fund two derbies, which is exactly what I was pitching for.  So we put our first parts order together.

A new-model Raspberry Pi A+, a micro-USB hub, and an Ourlink USB WiFi adapter, ordered from Adafruit.  I must say, I am very pleased with the speed and service of the folks at Adafruit; parts would arrive within a day or two, which was nice.

Here's the Pi we ended up with.  I had an earlier Model B, and the A+ we bought was about 2/3 the size of the older B (and, from what I understand, the newer B+ as well).  The size reduction turned out to be essential; if we'd gone with the B+, I'm not sure how we would have fit things inside the derby.

I connected the USB hub (from Newegg) and a USB Ethernet adapter for management, since the A+ does not include an onboard Ethernet adapter.  After connecting everything and imaging Raspbian onto a microSD card, I booted things up.

Access

Now that I had a platform, I was able to start building things.  We needed a few pieces of infrastructure.  We needed wireless connectivity, which brings a few requirements: transport via a wireless adapter, access via an SSID, and addressing via DHCP.  I started out familiar with all of these technologies, but master of none.  I'm not a network engineer, I'm a sysadmin, so there were some challenges there.  Luckily, there are a lot of projects that turn a Raspberry Pi into a wireless router, so there were plenty of how-tos available on the subject.  I followed this write-up, making only a few minor changes.  Mainly, I switched the mode from WPA to open.  This meant there would be no encryption on the wireless network, but I decided that was OK for the use case: I wanted others to be able to join the wireless network without too much trouble.  Getting onto the network was not part of the CTF; rather, it was essential to getting participants access.
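For reference, the write-ups in question generally use hostapd for the access-point side, and switching from WPA to open amounts to leaving the wpa_* lines out of hostapd.conf.  A minimal sketch (the interface name, SSID, and channel here are illustrative, not from our actual setup):

```
# /etc/hostapd/hostapd.conf -- open (unencrypted) access point
interface=wlan0
driver=nl80211
ssid=HackMyDerby
hw_mode=g
channel=6
# No wpa= / wpa_passphrase lines: leaving them out keeps the SSID open.
```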

I gave DHCPD a boatload of IP address space.  Essentially I gave it an entire /16, which works out to 65,536 addresses.  I stole the lowest /24 to use for services and left the rest for clients.  I did this because I had no idea how heavily the CTF was going to get hit, and I wanted to be safe.  It was all non-routable address space, 10.42.x.x, so I figured, what's the harm?
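Python's ipaddress module makes the arithmetic concrete:

```python
import ipaddress

# The whole non-routable block handed to DHCP: 10.42.0.0/16.
network = ipaddress.ip_network("10.42.0.0/16")
print(network.num_addresses)   # 65536 addresses in the /16

# Carve the /16 into /24s: the lowest one is reserved for services,
# everything else is left to DHCP clients.
subnets = list(network.subnets(new_prefix=24))
services = subnets[0]
clients = subnets[1:]
print(services)                # 10.42.0.0/24
print(len(clients) * 254)      # 64770 usable client addresses
```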

Services Containment

With transport out of the way, I had to start thinking about services.  I had always known this part was going to be tricky.  Here's the dilemma: in a full-on CTF, you have the ability to physically segregate your services and servers, either by having several physical systems or by using virtual machines.  Your network gear is separate from your server gear; your web servers are separate from your DHCP servers, from your shell servers, whatever.  The point is, you have a level of segregation that I didn't have at my disposal.  I had a single host that had to do it all.  The last thing I wanted to see happen was someone compromising, say, a web application, getting root on my Raspberry Pi, and running amok.  Ideally, I'd have split my attackable systems off into small VMs, but that's just not easy to do when your host is a Raspberry Pi with 256MB of memory.  I'm not even certain the ARM CPU in the Pi can think about virtualization, so I decided that wasn't worth pursuing.  I could have run my services directly on Raspbian, right on the host, but I just didn't feel good about that because of the possibility of someone breaking out into Raspbian.

Docker

Enter... Docker.  Maybe you've heard of it.  Docker lets you contain services within Linux containers (namespaces and cgroups).  Your containers run directly on your host, but they're walled off in such a way that if a service is breached, it takes more work to get beyond the container and into your host OS.  This seemed like the perfect solution for my little project: it removes the overhead of running an entire OS in memory (like a VM does) for each service, and it gave me the segregation I needed.  The problem was, I had no idea if you could even run Docker on a Pi.  So I did some reading and came across this howto, which directs you to install Docker from some modified source.  I later ran across another method, which seems to have Docker ready to go right after the Raspbian install, but I didn't try it.  I decided that if at some point I need to rebuild, I'll use that and see how it works.

Once I had Docker in place, I started tinkering.  I'd only briefly played with Docker before, so I had a bit of learning to do.  The first thing I attempted was building an OpenSSH service.  After a little toil, I came up with a basic Dockerfile that looked something like this:

#FROM debian
FROM resin/rpi-raspbian

RUN apt-get -y update --fix-missing
RUN apt-get -y install openssh-server python
RUN useradd hackme -m -d /home/hackme
RUN usermod -p '$6$Uzc1kGVl$GVAm3QQd5/HZGzIUCiJ6Ukyd37MBawy7FZHybUrAWOYruwlLL0hE1buBDThmR5WA0pz2olbz092COkQDZyr7L0' hackme
RUN mkdir /var/run/sshd

COPY ./runme.sh /home/hackme/runme.sh

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]

The first line there, FROM, tells Docker what image to base the container on.  I was able to do some development on my workstation or laptop simply by switching between debian and resin/rpi-raspbian.  On the Pi, I had to use the Raspbian image, because it's built for the Pi's CPU architecture.  The bits in between are specific to my container; they set up the environment for my flag.  It started out with just OpenSSH and the useradd.

This isn't a Docker how-to, but I'll say that once I got past this first container, I was feeling pretty good, and I thought I might challenge myself.  Basically, every service you run on Linux is contained in a binary, all masked by /etc/init.d scripts.  Those service scripts are intended to make it easy to start and stop a service and fork it to the background.  Docker changes that: you're supposed to run a service in the foreground.  So a Docker container is basically an image of an OS, containing the bare-minimum package set to run a given service, and a command line that runs the service.  Any service that you can run via a single command can, in theory, run in Docker.  I decided to put that to the test.

I got my start in this industry at a young age.  I'll spare you the childhood memories, but as a teenager I was very into the BBS scene, and at some level I still am.  A big part of the scene was MUDs, and I'm sure I'm not alone; this industry is full of guys with a background similar to my own.  I remembered some MUD software that was open source and ran on Linux: CircleMUD.  I decided to build a container that would run a MUD, just to see if I could.  And I did.  It was a little tricky.  I ended up building the MUD on the host Raspbian, then mapping the local files as a volume into Docker, and Docker just ran the script that starts the MUD.  I mapped port 4000 on the host to 4001 on the container, then told CircleMUD to run on port 4001.

 

#FROM debian
FROM resin/rpi-raspbian:wheezy

EXPOSE 4001 

CMD ["/bin/bash", "/circle/circle-start"]

That's about as simple as they come.  The volume mapping comes in your docker run command, along with the port mapping (something like docker run -v /path/to/circle:/circle -p 4000:4001 ..., with the exact host path depending on where you built the MUD).

In the end, I left this container in place, and added it to the CTF.

I also built an Apache and a Samba container, but again, this isn't a Docker how-to.  I'm including their Dockerfiles just for fun.

Samba:

#FROM debian
FROM resin/rpi-raspbian

RUN apt-get -y update
RUN apt-get -y install samba 

EXPOSE 139

CMD ["/usr/sbin/smbd", "-F", "-S", "-s", "/smb/conf/smb.conf"]

Httpd:

#FROM debian
FROM resin/rpi-raspbian:wheezy

RUN apt-get -y update --fix-missing 
RUN apt-get -y install apache2 python

EXPOSE 80
EXPOSE 443 

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

All of my Dockerfiles can be found on GitHub.

Display

There is a very long and tedious story here, but I'll spare you.  Let's just say that the OLED display was not exactly easy to get working with the Raspberry Pi.  We decided to go with a 16x2 OLED character display from Adafruit.  Various how-tos and articles led us to believe this would not be a difficult fit; it turns out we were mistaken.  Most articles first set up the display for SPI mode, which is not the default configuration of the OLED we purchased, and changing that meant some surface-mount soldering, which was a tad tricky.  Then we found that, for whatever reason, we couldn't get the display working in SPI mode, so we fell back to parallel mode, which had less support from others but had the benefit of at least working.  There were still challenges, but within a day of setting the display back to parallel mode, we were able to get the Pi writing to the display, albeit not optimally.  Herein lies the tale of how we ended up driving the display.

First was wiring.  We bought some jumpers for prototyping, and once we had the pin layout figured out, we soldered up a ribbon cable.  We found that the ribbon cable was a mess, as you can see in the above picture, so we re-worked the pin layout to use sequential GPIO pins, which cleaned that right up.  We were then also able to fold the ribbon cable over to save space and add rigidity.  Once we had things wired up, we needed to get the display to actually display things.  This was where things got a tad hairy.

OLED Pin    RasPi Pin    Purpose
1           6            Ground
2           4            +5V
4           7            GPIO4
5           8            GPIO14
6           10           GPIO15
11          11           GPIO17
12          12           GPIO18
13          13           GPIO27
14          15           GPIO22

While researching how to wire the display, we were also working with a Python library we'd found through a combination of googling and forum posts.  We ended up with a partially functional library.  It was great for our testing, but it had a number of limitations: we couldn't get it to display on the second line of the OLED, we couldn't make use of things like scrolling and character positioning, and a number of other functions just flat-out didn't work or were buggy.  So we handed it off to our in-house Python guy and asked if he could clean it up and help us figure out why it wasn't working.  He asked us for the datasheet on the display, and we provided it.  In a day, he came back with an entirely re-written library, which we will be giving back to the community.  His library fixed all of the issues we were having and got us up and running.  Now we were free to do lots of neat things with the display, and we had a better understanding of how to use it.  In the end, the new library solved most of our problems and let me get down to what I really wanted: writing a Python script to output things to the display for the derby.

The intention was simple.  I wanted the display to always be active, grabbing people's attention and directing them to where they could get more information on how to join in the fun.  I ended up with four display modes: Advertise, Recent Hosts, PWNED, and Party.

Advertise was the default mode.  If nothing else was going on, the script would output a few catchy phrases to the display, mostly funny inside jokes from previous DerbyCons.  It also displayed the URL of the scoreboard, the SSID of the derby, and the Twitter handle of the derby bearer.  If a host joined the derby's SSID, the hostname would display for 10 seconds and then fall back to the advertise loop.  If someone took control of the display, their message would trump all and display for 30 minutes.  Party mode was a timed thing: in the evening hours, the advertise loop was replaced with some custom animations intended to get others in the DerbyCon party spirit. ;) The display library and display loop are both available on GitHub.
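The logic above boils down to a priority order.  As a rough sketch (the ordering follows the description here, but the evening window for Party mode and the function shape are my own invention, not from our actual display loop):

```python
import time

PWNED_HOLD = 30 * 60   # a display takeover holds for 30 minutes
HOST_HOLD = 10         # a newly joined hostname shows for 10 seconds

def pick_mode(now, pwned_at=None, host_seen_at=None, party_hours=(20, 24)):
    """Return the display mode that wins right now, highest priority first."""
    if pwned_at is not None and now - pwned_at < PWNED_HOLD:
        return "pwned"
    if host_seen_at is not None and now - host_seen_at < HOST_HOLD:
        return "recent-host"
    # Party mode only kicks in during the evening hours.
    if party_hours[0] <= time.localtime(now).tm_hour < party_hours[1]:
        return "party"
    return "advertise"
```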

Codes and Scores

I wanted to make the scores tied to flags as secure as I could, within reason.  My first idea was to have the derby periodically connect to the internet and send score information back to the scoreboard, but I quickly decided this wasn't reasonable.  Most conferences are so flooded with radios that it's practically impossible to get online, and even if you do, it's slow; and in this case, my scoreboard communication could have been intercepted, which is not something I wanted to deal with.  Besides, it would have required a second radio, or some manner of switching between networks.  It all seemed like too much headache, and I wanted these things to be as foolproof as possible.  So I decided that codes were the answer: a flag would grant a code, and the participant could enter that code on the scoreboard via a form.  This presented a few problems in itself.

Codes needed to be unique; otherwise, once someone earned a code, they could share it with others.  I didn't know if anyone would actually do that, but I wanted to try to prevent it.  Also, if someone shoulder-surfed a code while someone else was entering it, I wanted to make it impossible for them to reuse that code.  There's no way to make this foolproof, but I wanted to handle as many cases as possible.  I also wanted to make sure the same person couldn't get points for the same flag twice.  So I devised the following code system.

Each code has two parts: a flag ID, and a random code.  Each derby has a list of several thousand random codes, and all of those random codes are also on the scoreboard.  Each time a flag generates a code and gives it to the user, the random half of the code is deleted from that derby's database.  Once the code is entered on the scoreboard, the random half is deleted from the scoreboard's database too.  This means there would be some 'wasted' codes on the derbies, but it should mean that no code could be re-used on the scoreboard.  The flag-ID half of the code is static per flag.  I used md5 to come up with codes, usually by feeding some text into md5sum and cutting eight characters out of the resulting hash.  For the random codes, I used a for loop in bash, echoing $RANDOM into md5sum and cutting 8 characters from each, then ran the output through uniq to get a list of unique random codes.
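That bash pipeline is easy to mirror in Python; here's a rough equivalent (the label 'gate1' and the count of 5,000 are placeholders, not values from our actual setup):

```python
import hashlib
import random

def flag_id(label: str) -> str:
    # The static half: md5 some text, keep eight hex characters.
    return hashlib.md5(label.encode()).hexdigest()[:8]

def random_half() -> str:
    # Stand-in for `echo $RANDOM | md5sum | cut -c1-8` in the bash loop.
    return hashlib.md5(str(random.random()).encode()).hexdigest()[:8]

# Several thousand unique random halves, like the bash loop piped to uniq.
pool = {random_half() for _ in range(5000)}

# A finished code pairs the two halves, e.g. 99e00de0-d51bf30c.
code = f"{flag_id('gate1')}-{pool.pop()}"
print(code)
```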

Codes ended up looking something like this: 99e00de0-d51bf30c

I then set up some static codes as well.  One went on the public side of my NFC ring, and a few went other places where we needed a code that couldn't be generated on the fly.  Those were not deleted from the DB when they were used on the scoreboard, but the random half of those codes did not exist in the DBs on the derbies.  I left some clues that participants might want to ask me about my NFC ring, so they had a chance of finding that code.  One went into an image on one of the derbies, and another was a freebie we threw into our slide deck.

Code Generation

I had to come up with a way to actually grant these codes to participants.  First was a Python script that pulled a code from the DB and output it to STDOUT.  This was the first time I'd worked with Python's database interface, so it was a bit of a learning experience.  Once I had that written, I needed a way to actually get the codes to the participants.  I wanted to isolate the containers from the code-generation script, and I didn't want to give the containers direct access to the code DB either.  I needed some way to abstract the inner workings from the containers.
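The script itself isn't shown here, so this is a minimal sketch of the "pull a code and print it" idea, assuming SQLite and a simple codes(gate, code) table (both assumptions on my part; the real schema may differ):

```python
import sqlite3
import sys

def pull_code(db: sqlite3.Connection, gate: int) -> str:
    """Return one unused random code for a gate, deleting it from the
    database so the same code is never handed out twice."""
    row = db.execute(
        "SELECT code FROM codes WHERE gate = ? LIMIT 1", (gate,)
    ).fetchone()
    if row is None:
        sys.exit(f"no codes left for gate {gate}")
    db.execute("DELETE FROM codes WHERE code = ?", (row[0],))
    db.commit()
    return row[0]
```

The key property is that the SELECT and DELETE travel together: a code is gone from the derby's database the moment it's handed out.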

I ended up using Docker volumes again, tying local paths to the containers.  This time, I made a directory structure on the host that looked like /derby/gate1/, /derby/gate2/, and so forth, then tied those to paths on the containers.  Inside those directories I put, in most cases, a simple text file containing a code.  In one case I got fancy and made it a QR code.  Then I made my containers read the code files to get their codes.  Now, you may be wondering: how does this give a unique code per access?  I'm glad you asked.

Inotify to the rescue!  I made a loop on the host that used inotifywait to watch the code files.  Whenever anything read a file, the loop would fire, and a new code would be put in the file within seconds.  I then wrapped the loop in a shell script and started it up via init.d on the host.  In the end, it looked like this:

#!/bin/bash

GATE="1"
CODEFILE="/derby/gate${GATE}/code"
LOGFILE="/var/log/derby/gate${GATE}code.log"

while true
  do
    # Drop a fresh code into the file, then block until something reads it.
    /derby/codes-db.py -g "$GATE" > "$CODEFILE"
    inotifywait -e access "$CODEFILE" >> "$LOGFILE"
done &

I made a different bash script for each gate, then called all of them from a parent script, which is what init.d ran.  I was quite proud of this solution.  There are probably a hundred ways to accomplish it, but I think I came up with a decent method.  I think. ;)

Scoreboard

The scoreboard, we wanted to get right.  It needed to look good, be functional, secure, and easy to modify.  With all of that in mind, we decided to let the professionals handle it.  One of the benefits of having this little project backed by the College I work for is that we have staff who handle things like web development.  So I asked if they might be able to make us a scoreboard, and sure enough, they did.

There were some features we'd have liked to add before the conference, but we ran out of time; next year we want it to tweet as well.  The scoreboard is still live here.  It's based on a web framework instead of being written from scratch, which means it benefits from a community of developers.

Compute, in a hat

So, with all of that back-end work completed, we needed to actually fit everything inside a derby.  That proved to be slightly tricky.  We originally planned on building a mounting plate for the Pi, attaching that to the battery somehow, and then mounting all of that inside the hat.  It turns out that, even though a derby DOES have a lot of extra room in the top, it does not have as much as you might think.

Running with this assumption, we ordered some thermal plastic from McMaster-Carr and started to design a box that the battery would slide into, with the Pi mounted on top.  We mocked this up in cardboard and found very quickly that it just wasn't going to work; there wasn't enough room.  So we ditched the box idea and started stacking things on top of the battery: the Pi, then the USB hub, and then we suspended the stack inside the hat.  This worked, but it made the hat sit really high on your head, and it was very top-heavy and even uncomfortable.

Amidst all of this, we started getting the displays mounted.  We decided to put the display on the side of the hat, because the front of the hat had too much contour.

Placement on the side of the hat was agonizing.  I'm a bit of a perfectionist when it comes to external appearance, and I only had one shot to get this right, so I must have measured a dozen times before actually cutting.  Then I cut through with an X-Acto knife.  The cut came out awesome.  We had originally planned on using that same plastic we'd ordered to make a bezel for the display and hold it in place, but the cut came out so nice, and the OLED fit so cleanly, that we decided to just hot-glue the display in place.  It looks nice and clean.

Once the display was in place, we found that the ribbon cable running from the display to the Pi actually added some support for the Pi.  I started playing with placement again and found that if I put the Pi down one side of the hat, rather than on top of the battery, we got a lot more head room, and the only part of the Pi that touched your head was the HDMI port.  I ended up hot-gluing one of the plastic stand-offs we'd bought for the abandoned box mount to the side of the hat, and then using it to hold the Pi steady.  Now we had the Pi mounted, and the USB hub fit perfectly in the space above the battery in the dome of the hat.

To suspend the battery, we took some thick wire and bent it to match the inner curve of the top of the hat, then lashed some garment hooks to the wire and hot-glued the ring in place.  This, by pure luck, made a perfect friction fit for the battery.  Then we laced fishing line, with a hook on the end, through the garment hooks and hooked everything in place.  This turned out to be a very clean mount, and it made the battery removable.

And the finished product:

DerbyCon Presentation

(Longer) Bsides Delaware Presentation