I get questions.
My wife and I are in the middle of remodeling a 19th-century Victorian in Cambridge, MA (see http://rebuildingwheneverland.wordpress.com for that story). Folks ask things like “where did you two meet?”, “where did you learn to use all of these tools and do all of this building stuff?”, and “what was life like at MIT?”
I am a putzen. MIT class of 2000. I lived at East Campus, specifically the hall known as “putz”, “PTZ”, “πTZ”, or Second West, and this is an attempt to explain what that means.
Here’s a panoramic picture of H204, the room I lived in during most of college. This picture was taken the year after I graduated, after I had passed it on to the esteemed Mr. Ryan Williams (a.k.a. “breath”):
On August 21st, 1996, I arrived from the snowbound hellscape that was my childhood home of Rochester, New York, and walked into the student center at MIT in Cambridge, MA. Back then, when you got to MIT, you didn’t know where you were going to live, and classes didn’t start for another two weeks. This was a wonderful time called “R/O”, or Campus Rush / Orientation. All of the undergraduate dorms and frats (yes, you could move right into a frat as a freshman at MIT then — this was before some unfortunate idiot drank himself to death in 1997 and ruined everything for everyone else) competed for the freshmen, and we got to tour around, get free food, and pick where we’d live.
I had a “temp” room in a nice, clean, ordinary, boring dorm about a 20-minute walk from main campus. I immediately fell in love with a rough-and-tumble, less maintained, cheaper-to-live-in dorm closer to classes called “MIT East Campus Alumni Memorial Housing”, more generally known as East Campus, or just EC. However, it was not my first choice in the housing lottery (or even my second). I didn’t think I was weird or quirky enough to live there, and honestly it was a little bit intimidating for a 17-year-old kid from the suburbs, not to mention a little bit dirty and grubby.
They build roller coasters in the courtyard. The walls are covered with hand-painted murals done by residents every year. Two of the ten “halls” allowed smoking (and still do). Several even let students bring cats! Students regularly build elaborate “lofts” in their room out of scrap wood, allowing them to sleep on an elevated platform while having more studying and living space underneath. They blow stuff up in the dorm courtyard on a semi-regular basis.
Here’s the EC “dorm rush” video from last year. Please note that pretty much none of the stuff in this video was done just for the video. It’s footage of regular dorm activities like campus rush, the spring picnic, Fred Fest, and everyday merriment:
Here’s an older one featuring an actual roller coaster and showing that really not much has changed in the past 10 years:
Fate smiled upon me, and I was assigned to East Campus in the housing lottery. A few days later, as all of us new residents were gathering around on the benches and tables in the courtyard, a group of 5 or 6 of us coalesced and started talking. We would all end up becoming friends for the next four years (some of us for life), and one of them would eventually become my wife.
But hang on, I haven’t even gotten to putz yet. For rush at MIT, not only did we get to pick what dorm we lived in, but once we got to EC we also got to (and this is still the case) pick what hall we wanted to live in. This means that each of the 10 halls ends up with its own unique personality. As I mentioned earlier, some allow smoking, some allow cats. Some are quiet, some are more mechanically or technically oriented, some are more into sports (intramural or otherwise), etc. East Campus is a dorm made up of two separate parallel buildings (east and west) with 5 floors each. Thus the floors are designated with a number 1-5 and East or West (based on which building the floor is in).
I chose and ended up on Second West. Every year for rush, they line ⅓ of the hall with a tarp, put a bunch of mattresses at the end, get out a hose and some soap, and do an indoor slip-n-slide. They did this 20 years ago when I was a wee frosh, and they still do it today. Some traditions run deep. Second West is also known affectionately by its nickname “putz” or πtz (the story of this nickname follows later).
Every year for the past 23 years (at least), on the Saturday or Sunday of the weekend before Thanksgiving, something special happens on 2nd West. It’s called “putzgiving”, and I’ve been there every year for the last 11 of them (while I’ve been back living in Boston). For the past 5 years I’ve made a turkey for it and helped to feed the roughly 60-80 current residents and alums that come back every year. It’s a fabulous, delicious feast of food, camaraderie, and reminiscing. For the putzgiving celebrating the somewhat arbitrary “20th year of putz” back in 2012, Mark Feldmeier, one of the older alums (even older than me), gave a brief speech marking the occasion and touching a little bit upon what makes putz “different”:
Putz isn’t a frat, but it bears some communal characteristics similar to one. In fact, in the early days, the letters πTZ were put on signs and t-shirts to make fun of fraternities. A regular pastime of the hall during old-time frat rush was to go out and steal the signs of real fraternities and paint over them with πTZ in large letters. But as the Big Chicken (Mark) says in that video, things have changed a lot over the past 20 years, yet the community of putz remains. There’s a longevity to it, and it’s remarkably cool that people come back for events, get together out of town, form businesses together, write songs, go to each other’s weddings, or even get married and have children themselves.
What else is there to say?
Always mechanically and/or computationally oriented, putz has built robots, constructed many in-room lofts of varying construction quality, and in one case even chopped a huge hole in a room wall to put in a fish tank (which I think might still be there). In 1996, the oldest continually operating webcam sprang to life (https://intotheweeds.org/2012/11/22/a-look-back-in-time-golden-age-of-the-internet/). In 2000, a large music storage and playback Linux webserver and sound system was built to allow for remote queueing, organization, and streaming of music into the hall’s lounges and bathrooms. In 2005, the Time Travelers Convention was held, and it even got mocked on SNL by Tina Fey. In both 2004 and 2014, Putz/East Campus teams won the annual MIT Mystery Hunt. Putz alums have been featured on Jeopardy (unfortunately losing) and BattleBots (also, unfortunately, losing) over the years as well. We also have putzen who have proudly served in the military, in the priesthood, and in just about every field in between.
I’ll close this post with a bit of trivia. A message from Abe Farag explaining a bit about the origin and history of πTZ:
“…The 1994 class of 2nd west attracted some calm academic kids- Physics majors: Gunther, Rob, Joe, David C – who moved floors I think? (Oh my this was a long time ago)
But somehow in the 1995 class 2nd west attracted a cast of characters with some Spunk. I don’t know how that happened. Like the cosmos forming new stars out of the ether. Maybe a masterpiece is easier to create on a blank palette than one already crowded with some message.
The 1996 rush was even fun & then with the 1996 class a flood gate of fun rolled in. I have no idea how “PTZ” started. I did play a lot of hack hockey & did poke fun a lot at our D-level game. I’m honored to think I might have started PTZ – but I don’t think it was me.
If I called us Putzes it was just to be silly. Not to start a club.
In early 1992 Gunther made a hall t-shirt that said “We’d make bad elves”, which had a drawing of an elf that had built a 3-armed doll & Santa hitting him. I think of that as a precursor of the feeling of us being Putzes, if not the exact name. We were so lame that we made t-shirts for our IM teams that made fun of how lame we were. But I’m sure that Mark yelling was a big part of the ethos of Putz. YO Mark.
However the name bubbled up I think PTZ stuck because of the fun link to pseudo Frat & rush & because the feeling of not having an identity at the time – of being an outcast floor- which we were at the time.
I also think it was 1993 Rush that some Brave & BOLD Putzen went to the great Killian dome RUSH kickoff event with a PTZ sign & PTZ t-shirts & like pied pipers led some incoming kids to EC. I think 1 tenet of PTZ is that Making Physical stuff leads to fun. That was a Pivotal moment for PTZ.
To all Yee Putz… enjoy the great legacy you are part of.
GO forth & Putz.
UPDATE: Here is a composite picture that tells part of the story. Click for the full-size image. On the left is from last weekend (February 7th, 4:11pm). On the right is this morning (February 15th, 11:15am). The walls and mountains of snow just keep growing. This includes another 14″ or so that has been added in the past 24 hours (after writing the blog post below).
Please excuse the pun in the title (or don’t). If you haven’t heard, Boston has gotten an unprecedented amount of snow over the past three weeks and will probably end up with about 70″ within a 30-day period.
I come from Rochester, New York. Depending on your statistics, it is considered one of the snowiest cities in the U.S. (and sometimes #1 on that list). So my thought has been: have I just been living outside of the snow belt for so long that what used to not be a big deal in my normal winter experience is now completely bizarre to me?
Let’s start by looking at the current leaders of this season’s “snow bowl”. Boston is right there in between Buffalo, Syracuse and Rochester, and is behind the total amount of snow that snow-belt cities Buffalo and Syracuse had at this point last year, and about equal with Rochester. So yes, this is an unbelievably large amount of snow for Boston, but not unprecedented for some urban areas.
The statements so far from the mayor and officials from the local mass transit authority (MBTA) indicate that the real issue here has been an inability to clear out snow from streets, train tracks, and to maintain equipment which the snow and cold has damaged. Mass transit is completely shut down tomorrow again, and a snow emergency continues. This is a good move, in my opinion, because we need an extra day of cars and people not being out there so that the city can do something about removing the mountains of snow.
Re-opening the city so soon after the first storm two weeks ago was, in retrospect, what got us into the current mess. You can only shove so much snow aside before there’s just too much of it and the streets narrow to the point of being impassable. This is precisely the situation we were in here for most of last week. Traffic was at a standstill as two-lane streets became one lane (or even half a lane), with parked cars and mountains of snow competing for road space. Let’s hope the crews make some good progress cleaning up the streets and sidewalks tomorrow while we’re all home from work. I also hope that the embattled MBTA can get its act together. However, the lack of budgetary attention paid to that agency by the state over the past few years, and the downright animosity from residents of the western suburban and rural parts of the state when faced with their tax dollars paying for “city” infrastructure, have left the agency in a spot where they just don’t have enough resources to remove all of the snow and repair/maintain equipment. By some estimates, about half of the trains on certain subway lines were out of service last week due to weather-related malfunctions.
So, good luck to them. And they should get a move-on because the current NOAA forecast discussion indicates a good chance of a potentially significant storm this coming Thursday, and another one Saturday into Sunday.
Now, back to my original question about 70″ in a month. I looked around for records of this happening in a city before (particularly a major city, like Boston). I found a few similar incidents in smaller cities, though — all in the “snow belt” region of Buffalo-Rochester-Syracuse, of course. Back in 1985, Buffalo had 68 inches of snow in December. That’s less than we’re facing here now, and in a city that’s smaller and probably has a much easier time with snow removal and street clearing (not to mention no large subway/trolley system to also keep clear). In December 2001, the city of Buffalo had a record 83 inches of snowfall, with a maximum of 44 inches on the ground at any one point. That seems to indicate there was some sort of thawing in between — a luxury we have not had here in Boston over the past three weeks. Syracuse, meanwhile, has had a 64-inch December, and supposedly a 97-inch January back in 1966 that I really want to find and read more about. But other than those few anomalies? I couldn’t find a month of more than 52 inches of snow for Buffalo, Rochester, or Syracuse.
So yeah, 70″ in less than a month is very, very rare for *any* place — even the snowiest cities in this country that deal with blizzards regularly. It’s certainly unprecedented for a major city with a multi-mode mass-transit system and a population over 640,000 (4.5 million in the “Greater Boston” MSA). In other words, I haven’t gone soft. This really is a whole lot of snow.
From best to worst:
Agree? Disagree? Discuss!
So, for those who haven’t been privy to the news, Kristy and I bought a house. Rather than clog up this blog with that stuff though, I’ve started a new one:
First post is up with the basic story and some “before” pictures. We close officially tomorrow afternoon, and moving day is September 23rd. As you can see if you look at that blog and gallery, we’ve got a lot of work ahead of us!
“So what exactly would you say you do here?”
I’ve flown out to remote locations and been on-site for the build-out and spin-up of three new production data centers within the last 10 months. I’ve been present for load tests and at public launches of new video games’ online services, and at product and feature launches, to predict and solve system load issues from rushes of new customers hitting new code, networks, and servers. And yes, I’ve spent my share of all-nighters in war rooms and server rooms troubleshooting incidents and complicated failure events that have taken parts of web sites, or entire online properties, offline. I wasn’t personally involved in fixing healthcare.gov late last year, but that team was made up of people I would consider my peers, including some who have been co-workers of mine in the past.
Do you use the internet? Ever buy anything online? Use Facebook? Have a Netflix account? Ever do a search on DuckDuckGo or use Google? Do you have a bank account? Do you have a criminal record, or not? Ever been pulled over? Have you made a phone call in the past 10 years? Is your metadata being collected by the NSA? Have you ever been to the hospital, doctor’s office, or pharmacy? Do you play video games? If you answered yes to any of the above questions, then a portion of your life (and livelihood) depends on a particular group of professional engineers who do what I do. No, we are not a secret society of illuminati or lizard people. We do, however, work mostly in the background, away from the spotlight, and ensure the correct operation of many parts of our modern, digital world.
So what do we call ourselves? That’s often the first challenge I face when someone asks me what I do for a living. My job titles, and the titles of my peers, have changed over the years. Some of us were called “operators” back in the early days of room-sized computers and massive tape drives. When I graduated college and got my first job I was referred to as a “systems administrator” or “sysadmin” for short. These days, the skill sets required to keep increasingly varied and complex digital infrastructure functioning properly have become specialized enough that this is almost universally considered a distinct field of engineering rather than just “administration” or “operations”. We often refer to ourselves now as “systems engineers,” “systems architects,” “production engineers,” or to use a term coined at Google but now used more widely, “site reliability engineers.”
What does my job entail specifically? There are scripting languages, automated configuration and server deployment packages, common technology standards, and large amounts of monitoring and metrics feedback from the complex systems that we create and work on. These are the tools we need to scale to handle growing populations of customers and increased traffic every day. This is a distinctive skill set and engineering field. Many of us have computer science degrees (I happen to), but many of us don’t. Most of the skills and techniques I use to do my job were learned not in school but through years of experience and an informal system of mentorship and apprenticeship in this odd guild. I wouldn’t consider myself a software engineer, but I know how to program in several languages. I didn’t write any of the code or design any of our website, but my team and teams like it are responsible for deploying that code and those services, monitoring their function, making sure the underlying servers, network, and operating systems work properly, and maintaining operations through growth and evolution of the product, spikes in traffic, and anything else unusual.
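To make the monitoring-and-metrics part a little more concrete, here is a tiny sketch in the spirit of a Nagios-style service check. The endpoint, thresholds, and latency numbers are all invented for illustration; this is not from any real production config:

```python
# Minimal sketch of an ops-style check: take latency measurements for a
# service and classify them the way a Nagios plugin would (OK/WARN/CRIT).
# Thresholds here are hypothetical.

def classify_latency(latency_ms, warn_ms=250, crit_ms=1000):
    """Return a Nagios-style status string for one latency sample."""
    if latency_ms >= crit_ms:
        return "CRITICAL"
    if latency_ms >= warn_ms:
        return "WARNING"
    return "OK"

def check_service(samples_ms):
    """Summarize a batch of latency samples by their worst case."""
    worst = max(samples_ms)
    return classify_latency(worst), worst

status, worst = check_service([120, 180, 310])
print(f"{status}: worst latency {worst}ms")
```

In real life the samples would come from polling a live endpoint, and the status would feed an alerting and trending system rather than a print statement.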
Back in 2001, I was working for the University of Illinois at Urbana-Champaign in the campus information services department (then known as CCSO) as a primary engineer of the campus email and file storage systems. Both were rather large by 2001 standards, with over 60,000 accounts and about a terabyte (omg, a whole terabyte!) of storage. This was still the early part of the exponential growth of the internet and digital services. I remember a presentation by Sun Microsystems in which they stated that, given the growth rates and server/admin ratios at the time, by 2015 about ⅓ of the U.S. population would need to be sysadmins of some sort. They were probably right, but the good news is that since then our job has shifted mostly to finding efficiencies and making the management of systems and services of ever-growing scale and complexity possible without actual manual administration or operation — so the number of servers each admin can manage has gone up dramatically. Back then it was around 1 admin for every 25 servers in an academic environment like UIUC. Today, the common ratios in industry range from a few hundred to a few thousand servers per engineer. I don’t think I’m allowed to say publicly what the specific numbers are here at TripAdvisor, but they are within that range. Still, we need new engineers every day to meet demand as the internet scales, and to find even more efficiencies to continue to crank that ratio up.
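As a back-of-the-envelope illustration of why that ratio matters (the fleet size here is hypothetical; the ratios are just the rough figures from the paragraph above):

```python
# Staffing math for a hypothetical 100,000-server fleet, using the rough
# ratios mentioned in the text: ~25 servers per admin circa 2001, versus
# on the order of thousands per engineer today.

fleet_size = 100_000

admins_at_2001_ratio = fleet_size // 25      # one admin per 25 servers
engineers_at_modern_ratio = fleet_size // 2000  # thousands per engineer

print(admins_at_2001_ratio, engineers_at_modern_ratio)  # 4000 50
```

Same fleet, two orders of magnitude fewer people — which is exactly why the job became automation and efficiency-finding rather than hands-on administration.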
Where do production operations engineers come from? Many of us are ex-military, went to trade schools, or came to the career through a desire to tinker unrelated to college training. As I stated earlier, while a degree in computer science helps a lot in understanding the foundations of what I do, many of the best engineers I’ve had the pleasure of working with were art, philosophy, or rhetoric majors. In hiring, we look for people with strong problem-solving desires and abilities, people who handle pressure well, who sometimes like to break things or take them apart to see how they work, and people who are flexible and open to changing requirements and environments. I believe that, because for a while computers just “worked” for people, a whole generation of young people in college, or just graduating from college, never had the need or interest to look under the hood at how systems and networks work. In contrast, while I was in college, we had to compile our own Linux kernels to get video support working, and do endless troubleshooting on our own computers just to make them usable for coding and, in some cases, daily operation on the campus network.
So, generally speaking, recent college graduates trained in computer science have tended to gravitate towards the more “glamorous” software engineering and design positions, and they continue to. How do we attract more interest in our open positions, and in the career as an option as early as college? I don’t have a good answer for that. I’ve asked my peers, and many of them don’t know either. I was thrilled to go to the 2014 SREcon in Santa Clara earlier this month (https://www.usenix.org/conference/srecon14), and for the most part the discussion panels and the engineers and managers there from all the big Silicon Valley outfits (Facebook, Google, Twitter, Dropbox, etc.) face the same problem. It’s admittedly even worse for us at TripAdvisor, an east coast company fighting against the inexorable pull of Silicon Valley on the talent pool here.
One thing I’ve come to strongly believe, and which I think is becoming the norm in industry operations groups, is that we need to broaden our hiring windows. We need to attract young talent and bring in young engineers, who may not even be sure that they want an operations or devops career, and show them how awesome and cool it really is (ok, at least I think it is). To this end, I gave a talk at MIT a little over a year ago on this subject — check out the slides and notes here. I didn’t know for sure that this is what I wanted to do until about a week before I graduated from MIT in 2000. I had two post-graduation job offers on the table, and I chose a position as an entry-level UNIX systems administrator at Massachusetts General Hospital (radiation oncology department, to be more specific) over a higher-paying Java software engineering job at some outfit named Lava Stream (which, as far as I can tell, does not exist anymore). Turns out I made the right decision. The rest of my career history is in my LinkedIn profile (https://www.linkedin.com/profile/view?id=8091411) if anyone is curious. No, I’m not looking for a new job.
“Now (and forever) Hiring”
So, if anyone reading this is entering college, just leaving college, or thinking of a career change, give operations some consideration. Maybe teach yourself some Linux skills. Take some online classes if you have time or think you need to. Brush up your Python and shell scripting skills. At least become a hobbyist at home and figure out some of those skills you see in our open job positions (Nagios, Apache, Puppet, Hadoop, Redis, whatever). Who knows, you might like it, and find yourself in a career where recruiters call you every other day and you can pretty much name your own salary and the company you want to work for.
And specifically for my group at TripAdvisor? We manage the production infrastructure of the world’s largest travel site. It’s a fast-moving, speed-wins type of place (see my previous blog post), and we are hiring. Does any of this sound interesting to you? Even if you don’t think you fit any of the descriptions below but might be up for some mentoring/training and maybe an internship or more entry-level position, tweet at me or drop me an email and we’ll see what we can do. See you out there on the internets.
Job Opening: Technical Operations Engineer

TripAdvisor is seeking a senior-level production operations engineer to join our technical operations team. The primary focus of the technical operations team is the build-out and ongoing management of TripAdvisor’s production systems and infrastructure. You will be designing, implementing, maintaining, and troubleshooting systems that run the world’s largest travel site across several datacenters and continents. TripAdvisor is a very fast-growing and innovative site, and our technical operations engineers require the flexibility and knowledge to adapt and respond to challenging and novel situations every day. A successful candidate for this role must have strong system and network troubleshooting skills, a desire for automation, and a willingness to tackle problems quickly and at scale, all the way from the hardware and kernel level up the stack to our database, backend, web services, and code.

Some Responsibilities:
- Monitoring/trending of production systems and network
- General Linux systems administration
- Troubleshooting performance issues
- DNS and authentication administration
- Datacenter and network build-outs to support continued growth
- Network management and administration
- Part of a 24x7 emergency response team

Some Desired Qualifications:
- Deep knowledge of Linux
- Experience with scripting and programming languages
- Experience with high-traffic, internet-facing services
- Experience with alerting and trending packages like Nagios and Cacti
- Experience with environment automation tools (Puppet, Kickstart, etc.)
- Experience with virtualization technology (KVM preferred)
- Experience with network switches, routers, and firewalls
Job Opening: Information Security Engineer

TripAdvisor is seeking an Information Security Engineer to join our operations team. You will be responsible for overall information security for all the systems powering our sites, the information workflow for the sites and operational procedures, and the access of information from offices and remote work locations. Do you have the talent to not only design, but actually implement and potentially automate firewall and IDS/IPS configuration changes and manage day-to-day operations? Can you implement and manage vulnerability scans and penetration tests, and audit security infrastructure? You will collaborate with product owners, product engineers, and operations engineers to understand business priorities and goals, company culture, and development and operational processes in order to identify risks, and then work with teams on designing and implementing solutions or mitigations. You will be the information security expert in the company who tracks and monitors new and emerging vulnerabilities, exploitation techniques, and attack vectors, and evaluates their impact on services in production and under development. You will provide support for audit and remediation activities. You will work hands-on with our production systems and network equipment to enact policy and maintain a secure and scalable environment.
Desired Skills and Experience:
* BSc or higher degree in Computer Science or equivalent desired
* Relevant work experience (10+ yrs) in securing systems and infrastructure
* Prior experience in penetration testing, vulnerability management, and forensics
* Prior experience with IDS/IPS and firewall configuration/management required
* Experience with high-traffic, Internet-facing services
* Ability to understand and integrate business drivers and priorities into design
* Strong problem-solving and analytical skills
* Strong communication skills with both product management and engineering
* Familiarity with the OWASP Top 10
* Relevant certifications (CISSP, GIAC Gold/Platinum, CISM) a plus
So we ran the MIT Mystery Hunt this year (our dubious award for winning it last year). The experience is pretty well bookended by the above two pictures: one of Laura answering our phone in 2013 to hear that we had answered the final meta successfully and won, and another of Laura calling the winning team in 2014 (One fish, two fish, random fish, blue fish) to congratulate them on answering the final meta successfully. I have no idea how or where to begin describing what it was like to run it this year. My team ran the hunt in 2004, but I was out of town in Champaign, IL at the time and played no part in that little misadventure. This time around, I was on the leadership committee, in charge of hunt systems, IT, and infrastructure.
First, I’d like to thank the other members of the systems team. James Clark just about single-handedly wrote a new Django app and framework (based loosely on techniques and code from the 2012 Codex hunt), which we will be putting up on GitHub soonish in the hope that other teams can make use of it in the future — we have dubbed it “spoilr”. It worked remarkably well and has several innovative features that I think will serve hunt well for years to come. Joel Miller and Alejandro Sedeño were my on-site server admin helpers; they kept things running and further adjusted code (although only slightly) during the hunt. Josh Randall was our veteran team member on call in England (which helped because he was available during shifted hours for us). And Matt Goldstein set up our HQ call center and auto-dial app with VoIP phones provided by IS&T.
With the exception of only a few issues (which I’ll try to address below), from the systems side of things the hunt ran extremely well. We were the first hunt in a while to actually start on time, and also the first hunt in a while to have the solutions and hunt solve statistics up and posted by wrap-up on Monday. This hunt had a record number of participants and a record number of teams (both higher than we planned for when designing and testing the system), making our job all the more difficult. And of course, I’d like to join everyone else in thanking the entire team of Alice Shrugged that made this hunt possible. It was great working with you all year and pulling off what many feel was a fantastic hunt.
Hunt Archive and Favorites
To look at the actual hunt, including all puzzles and solutions, and some team and hunt statistics, go to the 2014 Hunt Archive. My favorite puzzles (since everyone seems to be asking) were: Callooh Callay World and Crow Facts. Okay, I guess Stalk Us Maybe was pretty neat too.
Second, there was an interesting issue with our hunt republishing code. At times during the hunt there were errata (remarkably few, actually) and some point-value and unlocking-system changes (more on this below) that required a full republish of the hunt for all teams. This is not unusual. However, with the number of teams and hunters, and the pace at which our call handlers (particularly Zoz, the queue-handling machine) were progressing teams through the hunt on Friday in particular, this created a race condition. If any puzzle unlocks happened during one of these republishes, they would be put back to the state they were in when the publish started. Since a republish takes a looooooong time for all of these teams and puzzles, a number of teams noticed “disappearing” puzzles and unlocks on Friday while we were updating our first errata in puzzles, and then later Friday night when we changed the value of Wonderland train tickets to slow the hunt down a bit. We alleviated this slightly in the spoilr code by making the republish iterate team by team rather than taking its snapshot of the whole hunt at the beginning and then applying it to everyone. By later on Friday, though, teams had enough puzzles unlocked that even a single-team republish risked coinciding with a puzzle unlock, so we simply froze the handling of the call-in queue while we were doing these. As a note for future teams, this could probably be fixed by making the republish more transactional in the code.
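For future teams, here is a minimal sketch of that race and the transactional fix. This is NOT the actual spoilr code — just the pattern: if you snapshot and render each team's state atomically (under the same lock the unlock path takes), a concurrent unlock can never be clobbered by a stale whole-hunt snapshot.

```python
# Sketch of a per-team, lock-protected republish. Team and puzzle names
# are hypothetical placeholders.
import threading

class HuntState:
    def __init__(self):
        self.unlocks = {"team_a": {"puzzle1"}, "team_b": {"puzzle1"}}
        self.lock = threading.Lock()

    def unlock(self, team, puzzle):
        # An unlock from the call-in queue takes the same lock, so it
        # cannot interleave with a half-finished republish of that team.
        with self.lock:
            self.unlocks[team].add(puzzle)

    def republish_team(self, team, render):
        # Snapshot and render one team atomically, instead of snapshotting
        # the whole hunt up front and slowly applying a stale copy.
        with self.lock:
            render(team, set(self.unlocks[team]))

state = HuntState()
pages = {}  # stand-in for the per-team static HTML trees
for team in state.unlocks:
    state.republish_team(team, lambda t, u: pages.__setitem__(t, u))
print(pages)
```

In a Django deployment the equivalent would be wrapping each team's republish in a database transaction rather than a process-local lock, but the shape of the fix is the same.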
Release Rate, Fairness, and Fun
On this subject I cannot pretend to speak for the whole team (nor can anyone, probably), but I will share what I experienced and what I think about it. Many medium and small-sized teams have written to congratulate us on running a hunt that was fun for them and that encouraged teams to keep hunting, in some cases over 24 hours after the coin was found. On the flip side, some medium and large-sized teams were a bit disappointed in the later stages of the hunt, when puzzles unlocked at a slower rate (particularly once all rounds were unlocked), leaving them with fewer puzzles to work on and creating bottlenecks to finishing the hunt. One of the overriding principles in writing this hunt was to make it fun for small teams and fair for large teams. The puzzle release mechanism in the MIT round(s) was fast, furious, and fun. Something like 30 teams actually solved the MIT Meta, got to go on the MIT runaround, and got the “mid-hunt” reward. From the beginning of our design, the puzzle release mechanism for the Wonderland rounds (particularly the outer ones) was constrained to release puzzles in an already-opened round based only on correct answers in that round. The number of answers in a round it took to open up the next set of puzzles, and the order in which puzzles were released in a given round, were designed to require focused effort on a smaller number of opened puzzles in order to progress to a point where those metas were solvable. This rate was, incidentally, tweaked to be somewhat lower on Friday night (but only for the two rounds no team had opened yet) in a concerted effort to make sure the coin wasn’t found as early as 6-8pm on Saturday. Coming from a large team myself, I have seen the effect of the explosion of team size on the dynamics of Mystery Hunt. This is an issue that teams will face for years to come, and everyone may choose to solve it a different way.
But once again, our overriding goal was to make the hunt fun for small teams, and fair for large teams, and I think we did just that.
For the curious, and for those running the hunt next year, our server setup was fairly simple. We had one backend server which ran a database and all of the queue-handling and hunt HQ parts of the spoilr software (in django via mod_wsgi and apache). There were two frontend servers which shared a common filesystem mount with the backend server, so all teams saw a consistent view of unlocks. Each team got its own login and home directory, which controlled their view of the hunt when the spoilr software updated symlinks or changed the HTML files there. The spoilr software on the frontends handled answer submissions and contact-HQ requests among some other things, but they were mostly just static web servers. We didn’t need two for load reasons; we just had both running for redundancy in case one pooped out over the weekend. However, splitting the dynamic queue operations and Hunt HQ dashboards off from the web servers that 1500+ hunters were hitting for the hunt was a necessity. Each of the frontends also acted as a full streaming replica of the database on the backend server, and we had a failover script ready so the hunt could continue even if the backend server and database failed somehow. There was also a streaming database replica and hunt server in another colocation facility in Chicago in case somehow both datacenters that the hunt servers were in failed or lost internet connectivity. I’d like to thank Axcelx Technologies for providing us with hosting and support, and would recommend them to anyone looking for a reasonably priced virtual server or colocation provider.
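As a rough picture of that split, the frontend/backend division might look something like the config fragment below. This is a hypothetical sketch only; the hostnames, paths, and WSGI entry points are invented, not our actual configuration.

```apache
# Backend: django hunt-HQ and queue-handling app under mod_wsgi
# (hostnames and paths are illustrative)
<VirtualHost *:80>
    ServerName hq.example.org
    WSGIScriptAlias / /srv/spoilr/hq_wsgi.py
    WSGIDaemonProcess spoilr-hq processes=4 threads=8
    WSGIProcessGroup spoilr-hq
</VirtualHost>

# Frontends: mostly static per-team pages served from the shared
# mount, with only answer submission and contact-HQ kept dynamic.
<VirtualHost *:80>
    ServerName hunt.example.org
    DocumentRoot /mnt/shared/teams
    WSGIScriptAlias /submit /srv/spoilr/frontend_wsgi.py
</VirtualHost>
```

The design point is that the heavy traffic (1500+ hunters reloading puzzle pages) hits only static files, while the comparatively rare dynamic requests go to a separate process that can't be starved by page loads.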
As far as writing the hunt goes, we used the now-standard “puzzletron” software and made a lot of improvements to it, which we hope to push back up to gitweb for the next team to start writing with. We had dev and test instances of puzzletron running all year so we could deploy new features quickly and safely as our team came up with neat new things to track with it. Beyond that, we set up a MediaWiki wiki and a phpBB bulletin board, as well as several Mailman mailing lists and a Jabber chat server (which nobody really used). As a large team, collaboration tools have always been very important for us in trying to win the hunt, and they were even more important in writing it. In retrospect, we probably should have taken more time to develop an actual electronic ticketing system (or find one to use) for the run-time operations of the hunt. Instead we ended up using paper tickets that were passed back and forth between characters, queue handlers, and the run-time people. Since this hunt had so many interactions and so many teams which needed to get through them, this got clumsy, and some tickets were dropped or not checked off early in the hunt (I’m very sorry if this happened to any teams and delayed unlocks of puzzles/rounds early on).
In closing, I had a great time working on the hunt. I can’t say how great it would have been to go on it, since sadly I did not get to. But, hearing the generally positive comments from everyone thus far, I’m glad we didn’t screw it up :) The mailing list firstname.lastname@example.org will continue to work into the future, and I look forward to getting some of our code and documentation posted up for random to perhaps use and further improve upon next year, and for other teams to carry on the tradition for many years to come.
I think someone must have slipped the product development team at Nabisco some meth.
Oreos are an amazing food product. They are, in fact, probably my favorite cookie (I’m a fan of the golden variety). But what on earth would possess the makers of the greatest sandwich cookie in the universe to go on this recent insane quest to make as many different new varieties as possible?
Okay, I kind of get the motivation for candy corn Oreos. It was Halloween, after all, and that was a novelty. But looking through Amazon, one is assaulted with all sorts of Oreo insanity. Aren’t they worried about brand dilution? Not to mention that some of these flavors sound even more potentially vile than candy corn: