The 2014 MIT Mystery Hunt – Alice Shrugged


Winning the hunt in 2013

Calling the winning team in 2014

So we ran the MIT Mystery Hunt this year (the dubious reward for winning it last year). The experience is pretty well bookended by the two pictures above: one of Laura answering our phone in 2013 to hear that we had answered the final meta successfully and won, and another of Laura calling the winning team in 2014 (One fish, two fish, random fish, blue fish) to congratulate them on answering the final meta successfully. I have no idea how or where to begin describing what it was like to do this. My team ran the hunt in 2004, but I was out of town in Champaign, IL at the time and played no part in that little misadventure. This time around, I was on the leadership committee, in charge of hunt systems, IT, and infrastructure.

Thank You

First I’d like to thank the other members of the systems team. James Clark just about single-handedly wrote a new django app and framework (based loosely on techniques and code from the 2012 codex hunt), which we have dubbed “spoilr”. We will be putting it up on github soonish, and we hope that other teams can make use of it in the future. It worked remarkably well and has several innovative features that I think will serve hunt well for years to come. Joel Miller and Alejandro Sedeño were my on-site server admin helpers; they kept things running and made further (though only slight) code adjustments during the hunt. Josh Randall was our veteran team member on call in England (helpful because he covered shifted hours for us). And Matt Goldstein set up our HQ call center and auto-dial app with VoIP phones provided by IS&T.

With the exception of only a few issues (which I’ll try to address below), the hunt ran extremely well from the systems side of things. We were the first hunt in a while to actually start on time, and the first in a while to have the solutions and hunt solve statistics posted by wrap-up on Monday. This hunt had a record number of participants and a record number of teams (both higher than we planned for when designing and testing the system), which made our job all the more difficult. And of course, I’d like to join everyone else in thanking the entire team of Alice Shrugged that made this hunt possible. It was great working with you all year and pulling off what many feel was a fantastic hunt.

Hunt Archive and Favorites

To look at the actual hunt, including all puzzles and solutions, and some team and hunt statistics, go to the 2014 Hunt Archive. My favorite puzzles (since everyone seems to be asking) were: Callooh Callay World and Crow Facts. Okay, I guess Stalk Us Maybe was pretty neat too.

Apologies

First of all, I’d like to apologize on behalf of the systems team for the issues with Walk Across Some Dungeons. The puzzle performed deceptively well during our load test, but load-test clients are far better behaved than real people on a real network. There were several socket-locking and connection-starvation issues with the puzzle even after we spent all day (and night) Friday parallelizing it onto as many as 7 virtual servers. Eventually we patched the code to handle dropped connections better and to be more multi-threaded within each app instance, and by late Saturday night it was working much better. The author has since ported it to javascript, and it should work fine well into the future. Lesson to future teams: don’t try to write your own socket-handling code (or, for that matter, any server-side code that has to interact directly with the hunters). The issues surrounding the puzzle (lag, and the frequent reset back to stage 1 requiring many clicks to get back to your position) affected every team equally for the first day and were not fixed until at least the top 5 teams had already solved the puzzle in its “difficult” state. So luckily this was not a fairness issue; it just made a puzzle a LOT harder than it should have been.
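
For future reference, here is a minimal sketch (in Python, and emphatically not the actual puzzle code) of the shape the fix took: a threading TCP server that gives each client its own handler thread and treats timeouts and dropped connections as routine events rather than letting them starve the listener.

    import socket
    import socketserver

    class PuzzleHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Don't let a single dead client tie up a handler thread forever.
            self.request.settimeout(30)
            try:
                while True:
                    data = self.request.recv(1024)
                    if not data:                 # client hung up cleanly
                        return
                    # ... update this client's game state here ...
                    self.request.sendall(data)   # placeholder response
            except (socket.timeout, ConnectionResetError, BrokenPipeError):
                return                           # dropped connection: clean up quietly

    class ThreadedPuzzleServer(socketserver.ThreadingTCPServer):
        allow_reuse_address = True
        daemon_threads = True                    # stuck clients can't block shutdown

    if __name__ == "__main__":
        with ThreadedPuzzleServer(("0.0.0.0", 9000), PuzzleHandler) as server:
            server.serve_forever()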

Second, there was an interesting issue with our hunt republishing code. At times during the hunt there were errata (remarkably few, actually) and some point-value and unlocking-system changes (more on this below) that required a full republish of the hunt for all teams. This is not unusual. However, given the number of teams and hunters, and the pace at which our call handlers (particularly Zoz, the queue-handling machine) were progressing teams through the hunt on Friday, this created a race condition. If any puzzle unlocks happened during one of these republishes, they would be reverted to the state they were in when the republish started. Since a republish takes a looooooong time with this many teams and puzzles, a number of teams noticed “disappearing” puzzles and unlocks on Friday while we were publishing our first errata, and then again later Friday night when we changed the value of Wonderland train tickets to slow the hunt down a bit. We alleviated this somewhat in the spoilr code by making the republish iterate team by team rather than snapshot the state of the whole hunt at the beginning and then apply it to everyone. By later on Friday, though, teams had enough puzzles unlocked that even a single team’s republish risked coinciding with a puzzle unlock, so we simply froze the handling of the call-in queue while we were doing these. As a note for future teams, this could probably be fixed properly by making the republish transactional in the code.
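
To make that last suggestion concrete, here is a hedged sketch of what a per-team, transactional republish might look like in Django. The Team and PuzzleAccess models and the publish_team() helper are hypothetical stand-ins, not spoilr’s real API.

    from django.db import transaction

    # Team, PuzzleAccess, and publish_team() are hypothetical placeholders
    # for spoilr's real models and publishing helper.
    def republish_all_teams():
        # Iterate team by team instead of snapshotting the whole hunt up front.
        for team in Team.objects.all():
            with transaction.atomic():
                # Lock this team's unlock rows: an unlock granted mid-republish
                # either commits before we read (and is included) or waits
                # until this team's publish finishes (and is left untouched).
                unlocks = list(
                    PuzzleAccess.objects.select_for_update().filter(team=team)
                )
                publish_team(team, unlocks)  # regenerate this team's static pages and symlinks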

Release Rate, Fairness, and Fun

On this subject I cannot pretend to speak for the whole team (nor, probably, can anyone), but I will share what I experienced and what I think about it. Many medium and small-sized teams have written to congratulate us on running a hunt that was fun for them and that encouraged teams to keep hunting, in some cases over 24 hours after the coin was found. On the flip side, some medium and large-sized teams were a bit disappointed in the later stages of the hunt, when puzzles unlocked at a slower rate (particularly once all rounds were unlocked), leaving them with fewer puzzles to work on and creating bottlenecks to finishing the hunt. One of our overriding principles in writing this hunt was to make it fun for small teams and fair for large teams. The puzzle release mechanism in the MIT round(s) was fast, furious, and fun. Something like 30 teams actually solved the MIT Meta, went on the MIT runaround, and got the “mid-hunt” reward. From the beginning of our design, the puzzle release mechanism for the wonderland rounds (particularly the outer ones) was constrained to release puzzles in an already-opened round based only on correct answers in that round. The number of answers in a round it took to open up the next set of puzzles in that round, and the order in which puzzles were released within a round, were designed to require focused effort on a smaller number of open puzzles in order to progress to the point where those metas were solvable. This release rate was, incidentally, tuned downward somewhat on Friday night (but only for the two rounds no team had opened yet) in a concerted effort to make sure the coin wasn’t found as early as 6-8pm on Saturday. Coming from a large team myself, I have seen the effect of the explosion of team size on the dynamics of Mystery Hunt. This is an issue that teams will face for years to come, and everyone may choose to solve it a different way. But once again, our overriding goal was to make the hunt fun for small teams and fair for large teams, and I think we did just that.
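
As a toy illustration of that per-round gating (the puzzle names and numbers here are invented, not the hunt’s actual tuning): puzzles in a round open in a fixed order, and each slot is gated only on how many correct answers that same round already has.

    # Invented example data; the real hunt's release order and thresholds differed.
    RELEASE_ORDER  = ["puzzle_a", "puzzle_b", "puzzle_c", "puzzle_d", "puzzle_e"]
    ANSWERS_NEEDED = [0, 0, 1, 2, 4]   # correct answers in this round needed to open each slot

    def open_puzzles(correct_answers_in_round):
        """Return the puzzles visible in a round, given that round's solve count."""
        return [slug for slug, needed in zip(RELEASE_ORDER, ANSWERS_NEEDED)
                if correct_answers_in_round >= needed]

    # With two correct answers in the round, the first four puzzles are open:
    print(open_puzzles(2))   # ['puzzle_a', 'puzzle_b', 'puzzle_c', 'puzzle_d']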

Architecture Overview

For the curious, and for those running the hunt next year, our server setup was fairly simple. We had one backend server which ran the database and all of the queue-handling and hunt HQ parts of the spoilr software (in django via mod_wsgi and apache). There were two frontend servers which shared a common filesystem mount with the backend server, so all teams saw a consistent view of unlocks. Each team gets its own login and home directory, which controls that team’s view of the hunt as the spoilr software updates symlinks or changes the HTML files there. The spoilr software on the frontends handled answer submissions and contact-HQ requests among other things, but they were mostly just static web servers. We didn’t need two for load reasons; we just had both running for redundancy in case one pooped out over the weekend. However, splitting the dynamic queue operations and Hunt HQ dashboards off from the web servers that 1500+ hunters were hitting for the hunt was a necessity. Each of the frontends also acted as a full streaming replica of the database on the backend server, and we had a failover script ready so the hunt could continue even if the backend server and database failed somehow. There was also a streaming database replica and hunt server in another colocation facility in Chicago, in case somehow both datacenters that the hunt servers were in failed or lost internet connectivity. I’d like to thank Axcelx Technologies for providing us with hosting and support, and would recommend them to anyone looking for a reasonably priced virtual server or colocation provider.
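
As a rough sketch of that symlink mechanism (the paths and helper here are illustrative, not spoilr’s actual layout), unlocking a puzzle for a team amounts to dropping an atomically renamed symlink into that team’s directory so the static frontends start serving the page immediately, never a half-written one:

    import os

    HUNT_ROOT = "/home/hunt"   # hypothetical path, not necessarily the real layout

    def unlock_puzzle(team, puzzle_slug):
        """Point a team's puzzle path at the pre-rendered puzzle content."""
        target = os.path.join(HUNT_ROOT, "published", "puzzles", puzzle_slug)
        link   = os.path.join(HUNT_ROOT, "teams", team, "puzzle", puzzle_slug)
        tmp    = link + ".tmp"
        os.symlink(target, tmp)   # build the new link next to where it will live
        os.replace(tmp, link)     # atomic rename: the unlock appears all at once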

As far as writing the hunt goes, we used the now-standard “puzzletron” software and made a lot of improvements to it, which we hope to push back up to gitweb for the next team to start writing with. We had dev and test instances of puzzletron running all year so we could deploy our new features quickly and safely as our team came up with neat new things to track with it. Beyond that, we set up a mediawiki wiki and a phpbb bulletin board, as well as several mailman mailing lists and a jabber chat server (which nobody really used). As a large team, we have always depended heavily on collaboration tools when trying to win the hunt, and they were even more important in writing it. In retrospect, we probably should have taken more time to develop an actual electronic ticketing system (or find one to use) for the run-time operations of the hunt. Instead we ended up using paper tickets which were passed back and forth between characters, queue handlers, and the run-time people. Since this hunt had so many interactions and so many teams that needed to get through them, this got clumsy, and some tickets were dropped or not checked off early in the hunt (I’m very sorry if this happened to your team and delayed unlocks of puzzles/rounds early on).
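
For what it’s worth, a lightweight Django model along these lines would probably have been enough to replace the paper slips. This is a hypothetical sketch of what we wished we had, not anything we actually ran.

    from django.db import models

    class InteractionTicket(models.Model):
        """One team's pending interaction, tracked from the queue to completion."""
        STATES = [
            ("queued",   "Queued"),
            ("assigned", "Assigned to a character"),
            ("done",     "Completed and checked off"),
        ]
        team        = models.CharField(max_length=200)
        interaction = models.CharField(max_length=200)
        state       = models.CharField(max_length=20, choices=STATES, default="queued")
        assigned_to = models.CharField(max_length=200, blank=True)
        created     = models.DateTimeField(auto_now_add=True)
        updated     = models.DateTimeField(auto_now=True)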

In Closing

In closing, I had a great time working on the hunt. I can’t say how great it would have been to go on it, since sadly I did not get to. But, hearing the generally positive comments from everyone thus far, I’m glad we didn’t screw it up :) The mailing list aliceshrugged@aliceshrugged.com will continue to work into the future, and I look forward to getting some of our code and documentation posted up for random to perhaps use and further improve upon next year, and for other teams to carry on the tradition for many years to come.

  1. #1 by - on January 26, 2014 - 1:05 pm

    Thanks for the insights into the tech that goes into a hunt like this – fascinating reading, and kudos for putting on such a smooth hunt despite so many variables and moving parts.
    I’m curious, when you say “fun for small and fair for large” – what kind of size ranges are you actually talking about? For example, a 40 person team seems large to me, but I realize you have a very different perspective as a hunt runner, so I wonder where you draw the lines.

    • #2 by benoc on January 26, 2014 - 1:08 pm

      I think everyone on the team drew that line somewhere differently. I personally drew it at somewhere around 80-100 people being a “large” team, but our team is regularly 150+. Incidentally, now that the hunt is over and we know the last three wonderland center rounds ended up so bottlenecked, I think it would have been good for medium-sized teams to have another “achievement” after the wonderland hole rounds were solved.

  2. #3 by aerionblue on January 27, 2014 - 1:17 am

    We had a phpbb board?? That might be the one thing that was used less than the Jabber chat.

    • #4 by benoc on January 27, 2014 - 12:16 pm

      Oh, we used it a lot in the early discussions on theme and structure. It’s a much better platform than email threads for actually having a debate / exchange of ideas. But once the potential themes were all fleshed out and voted on, nobody used it for anything else.

  3. #5 by Andrew Lin on January 30, 2014 - 2:35 am

    based loosely on techniques and code from the 2012 codex hunt

    Praise van Rossum and release the global interpreter lock! Glimmers of code reuse! ;-)

    But seriously, Codexians have been wondering if and when there would start to be a reusable code base for Hunt serving. I know in 2011, 2012, and 2013 writing brand-new hunt serving systems consumed huge amounts of each tech team’s time. (I was, blessedly, spared this task, if only because I was scrambling to get the last dozen puzzles through the pipeline…)

    I look forward to seeing what pieces of code you want to put up. If you want access to the Github account we made for the hunt (https://github.com/mysteryhunt), I can put you in touch with the right people.

    • #6 by benoc on February 1, 2014 - 1:41 pm

      All of the code is now up there as of today: the spoilr django app, and the entire /home/hunt directory from this year’s hunt (with team data scrubbed but sample team data files present). It should be enough for anyone to set up a running version of the 2014 hunt. Feel free to let me know if you have any questions or problems setting it up.
