
πTZ


I get questions.

My wife and I are in the middle of remodeling a 19th century victorian in Cambridge, MA (see http://rebuildingwheneverland.wordpress.com for that story).  Folks ask things like “where did you two meet?”, “where did you learn to use all of these tools and do all of this building stuff?”, “what was life like at MIT?”

I am a putzen.  MIT class of 2000.  I lived at East Campus, specifically the hall known as “putz”, “PTZ”, “πTZ”, or Second West, and this is an attempt to explain what that means.

Here’s a panoramic picture of H204, the room I lived in during most of college.  This picture was taken the year after I graduated, after I had passed it on to the esteemed Mr. Ryan Williams (a.k.a. “breath”):


On August 21st, 1996, I arrived from the snowbound hellscape that was my childhood home of Rochester, New York, and walked into the student center at MIT in Cambridge, MA.  Back then, when you got to MIT you didn’t know where you were going to live, and classes didn’t start for another two weeks.  This was a wonderful time called “R/O”, or Campus Rush / Orientation.  All of the undergraduate dorms and frats (yes, you could move right into a frat as a freshman at MIT then — this was before some unfortunate idiot drank himself to death in 1997 and ruined everything for everyone else) competed for the freshmen, and we got to tour around, get free food, and pick where we’d live.

I had a “temp” room in a nice, clean, ordinary, boring dorm about a 20-minute walk from main campus.  I immediately fell in love with a rough-and-tumble, less maintained, cheaper-to-live-in dorm that was closer to classes, officially called “MIT East Campus Alumni Memorial Housing” but more generally known as East Campus, or just EC.  However, it was not my first choice in the housing lottery (or even my second).  I didn’t think I was weird or quirky enough to live there, and honestly it was a little bit intimidating for a 17-year-old kid from the suburbs, not to mention a little bit dirty and grubby.

They build roller coasters in the courtyard.  The walls are covered with hand-painted murals done by residents every year.  Two of the ten “halls” allowed smoking (and still do).  Several even let students bring cats!  Students regularly build elaborate “lofts” in their rooms out of scrap wood, allowing them to sleep on an elevated platform while having more studying and living space underneath.  They blow stuff up in the dorm courtyard on a semi-regular basis.

Here’s the EC “dorm rush” video from last year.  Please note that pretty much none of the stuff in this video was done just for the video.  It was filmed during regular dorm activities like campus rush, the spring picnic, Fred Fest, and regular everyday merriment:

Here’s an older one featuring an actual roller coaster and showing that really not much has changed in the past 10 years:

Fate smiled upon me, and I ended up getting assigned to East Campus in the housing lottery.  A few days later, as all of us new residents were gathering around on the benches and tables in the courtyard, a group of 5 or 6 of us coalesced and started talking.  We would all end up being friends for the next four years (some of us for life), and one of them would eventually become my wife.

But hang on, I haven’t even gotten to putz yet.  For rush at MIT, not only did we get to pick what dorm we lived in; once we got to EC, we also got to pick (and this is still the case) what hall we wanted to live in.  This means that each of the 10 halls ends up with its own unique personality.  As I mentioned earlier, some allow smoking, some allow cats.  Some are quiet, some are more mechanically or technically oriented, some are more into sports (intramural or otherwise), etc.  East Campus is a dorm made up of two separate parallel buildings (east and west) with 5 floors each.  Thus the halls are designated with the numbers 1-5 (by floor) and East or West (based on which building the floor is in).

I chose, and ended up on, Second West.  For rush every year they line the hall with a tarp, put a bunch of mattresses at the end, get out a hose and some soap, and do an indoor slip-n-slide.  They did this 20 years ago when I was a wee frosh, and they still do it today.  Some traditions run deep.  Second West is also known affectionately by its nickname “putz” or πtz (the story of this nickname follows later).

Every year for the past 23 years (at least), on the Saturday or Sunday of the weekend before Thanksgiving, something special happens on 2nd West.  It’s called “putzgiving”, and I’ve been there every year for the last 11 of them (while I’ve been back living in Boston).  For the past 5 years I’ve made a turkey for it and helped to feed the roughly 60-80 current residents and alums that come back every year.  It’s a fabulous, delicious feast of food, camaraderie, and reminiscing.  For the putzgiving celebrating the somewhat arbitrary “20th year of putz” back in 2012, Mark Feldmeier, one of the older alums (even older than me), gave a brief speech marking the occasion and touching a little bit on what makes putz “different”:

Putz isn’t a frat, but it bears some communal characteristics similar to one.  In fact, in the early days, the letters πTZ were put on signs and t-shirts to make fun of fraternities.  A regular pastime of the hall during old-time frat rush was to go out, steal the signs of real fraternities, and paint over them with πTZ in large letters.  But as the Big Chicken (Mark) says in that video, over the past 20 years things have changed a lot, yet the community of putz remains.  There’s a longevity to it, and it’s remarkably cool that people come back for events, get together out of town, form businesses together, write songs, go to each other’s weddings, or even get married and have children themselves.

What else is there to say?

Always mechanically and/or computationally oriented, putz has built robots, constructed many in-room lofts of varying construction quality, and in one case even chopped a huge hole in a room wall to put in a fish tank (which I think might still be there).  In 1996 the oldest continually operating webcam sprang to life (https://intotheweeds.org/2012/11/22/a-look-back-in-time-golden-age-of-the-internet/).  In 2000 a large Linux webserver and sound system was built for music storage and playback, allowing remote queueing, organization, and streaming of music into the hall’s lounges and bathrooms.  In 2005, the Time Traveler Convention was held and even got mocked on SNL by Tina Fey.  In both 2004 and 2014, Putz/East Campus teams won the annual MIT Mystery Hunt.  Putz alums have been featured on Jeopardy! (unfortunately losing) and BattleBots (also, unfortunately, losing) over the years as well.  We also have putzen who have proudly served in the military, in the priesthood, and in just about every field in between.

I’ll close this post with a bit of trivia.  A message from Abe Farag explaining a bit about the origin and history of πTZ:

“…The 1994 class of 2nd west attracted some calm academic kids- Physics majors: Gunther, Rob, Joe, David C – who moved floors I think? (Oh my this was a long time ago)
But somehow in the 1995 class 2nd west attracted a cast of characters with some Spunk. I don’t know how that happened. Like the cosmos forming new stars out of the either. Maybe a masterpiece is easier to create on a blank palette then one already crowded with some message.

The 1996 rush was even fun & then with the 1996 class a flood gate of fun rolled in. I have no idea how “PTZ” started. I did play a lot of hack hockey & did poke fun a lot of our D-level game. I’m honored to think I might have started PTZ- but I don’t think it was me.

If I called us Putzes it was just to be silly. Not to start a club. 

In early 1992 Gunther made a hall t-shirt that said “We’d make bad elves” which was a drawing of an elv that had built a 3 armed doll & santa hitting him. I think of that as a precursor of the feeling of us being Putzes if not the exact name. We were so lame that we made t-shirts of our IM teams that made fun of how lame we were. But I’m sure that Mark yelling was a big part of the ethos of Putz. YO Mark.

However the name bubbled up I think PTZ stuck because of the fun link to pseudo Frat & rush & because the feeling of not having an identity at the time – of being an outcast floor- which we were at the time.

I also think it was 1993 Rush that some Brave & BOLD Putzen went to the great Killian dome RUSH kickoff event with a PTZ sign & PTZ t-shirts & like pied pipers lead some incoming kids to ec. I think 1 tenet of PTZ is that Making Physical stuff leads to fun. That was a Pivotal moment for PTZ.

To all Yee Putz… enjoy the great legacy you are part of.
GO forth & Putz.
+Abe 
PTZ- 1994.”


It’s “Snow” Comparison


UPDATE: Here is a composite picture that tells part of the story. Click for the full-size image. On the left is from last weekend (February 7th, 4:11pm). On the right is this morning (February 15th, 11:15am). The walls and mountains of snow just keep growing. This includes another 14″ or so that has been added in the past 24 hours (after writing the blog post below).


Please excuse the pun in the title (or don’t). If you haven’t heard, Boston has gotten an unprecedented amount of snow over the past three weeks and will probably end up with about 70″ within a 30-day period.

I come from Rochester, New York. Depending on your statistics, it is considered one of the snowiest cities in the U.S. (and sometimes is #1 on that list). So my thought has been: have I just been living outside of the snow belt for so long that what used to not be a big deal in my normal winter experience is now completely bizarre to me?

Let’s start by looking at the current leaders of this season’s “snow bowl”. Boston is right there in between Buffalo, Syracuse and Rochester, and is behind the total amount of snow that snow-belt cities Buffalo and Syracuse had at this point last year, and about equal with Rochester. So yes, this is an unbelievably large amount of snow for Boston, but not unprecedented for some urban areas.

The statements so far from the mayor and officials from the local mass transit authority (MBTA) indicate that the real issue here has been an inability to clear snow from streets and train tracks, and to maintain equipment that the snow and cold have damaged. Mass transit is completely shut down tomorrow again, and a snow emergency continues. This is a good move, in my opinion, because we need an extra day of cars and people not being out there so that the city can do something about removing the mountains of snow.

Re-opening the city so soon after the first storm two weeks ago was, in retrospect, what got us into the current mess. You can only shove so much snow aside before there’s just too much of it and the streets narrow to the point of being impassable. This is precisely the situation we were in here for most of last week. Traffic was at a standstill as two-lane streets became one lane (or even half a lane), with parked cars and mountains of snow competing for road space. Let’s hope the crews make some good progress cleaning up the streets and sidewalks tomorrow while we’re all home from work. I also hope that the embattled MBTA can get its act together. However, the lack of budgetary attention paid to that agency by the state over the past few years, and the downright animosity from residents of the western suburban and rural parts of the state when faced with their tax dollars paying for “city” infrastructure, have left the agency without enough resources to remove all of the snow and repair and maintain equipment. By some estimates, last week about half of the trains on certain subway lines were out of service due to weather-related malfunctions.

So, good luck to them. And they should get a move-on because the current NOAA forecast discussion indicates a good chance of a potentially significant storm this coming Thursday, and another one Saturday into Sunday.

Now, back to my original question about 70″ in a month. I looked around for records of this happening in a city before (particularly a major city, like Boston). I found a few similar incidents in smaller cities, though — all in the “snow belt” region of Buffalo-Rochester-Syracuse, of course. Back in 1985, Buffalo had 68 inches of snow in December. That’s less than we’re facing here now, and in a city that’s smaller and probably has a much easier time with snow removal and street clearing (not to mention no large subway/trolley system to also keep clear). In December 2001, the city of Buffalo had a record 83 inches of snowfall, with a maximum of 44 inches on the ground at one point. That seems to indicate that there was some sort of thawing in between — a luxury we have not had here in Boston over the past three weeks. Syracuse, for its part, had a 64-inch December, and supposedly a 97-inch January back in 1966 that I really want to read more about. But other than those few anomalies? I couldn’t find a month with more than 52 inches of snow for Buffalo, Rochester, or Syracuse.

So yeah, 70″ in less than a month is very, very rare for *any* place — even the snowiest cities in this country, which deal with blizzards regularly. It’s certainly unprecedented for a major city with a multi-mode mass-transit system and a population over 640,000 (4.5 million in the “Greater Boston” MSA). In other words, I haven’t gone soft. This really is a whole lot of snow.


Ranking The Months From Best To Worst


From best to worst:

1. September

2. April

3. October

4. August

5. July

6. June

7. May

8. November

9. January

10. December

11. March

12. February

Agree?  Disagree?  Discuss!


Bought a House — See New Blog!


So, for those who haven’t been privy to the news, Kristy and I bought a house.  Rather than clog up this blog with that stuff though, I’ve started a new one:

http://rebuildingwheneverland.wordpress.com

First post is up with the basic story and some “before” pictures.  We close officially tomorrow afternoon, and moving day is September 23rd.  As you can see if you look at that blog and gallery, we’ve got a lot of work ahead of us!

House From The Street


The Work That Makes Civilized Life Possible (and finding the people to do it)


“So what exactly would you say you do here?”

I’ve flown out to remote locations and been on-site for the build-out and spin-up of three new production data centers within the last 10 months. I’ve been present for load tests and public launches of new video games’ online services, and for product and feature launches, to predict and solve system load issues from rushes of new customers hitting new code, networks, and servers. And yes, I’ve spent my share of all-nighters in war rooms and server rooms troubleshooting incidents and complicated failure events that have taken parts of web sites, or entire online properties, offline. I wasn’t personally involved in fixing healthcare.gov late last year, but that team was made up of people I would consider my peers, some of whom have been co-workers of mine in the past.

Do you use the internet? Ever buy anything online? Use Facebook? Have a Netflix account? Ever do a search on Duck Duck Go or use Google? Do you have a bank account? Do you have a criminal record, or not? Ever been pulled over? Have you made a phone call in the past 10 years? Is your metadata being collected by the NSA? Have you ever been to the hospital, doctor’s office, or pharmacy? Do you play video games? If you’ve answered yes to any of the above questions, then a portion of your life (and livelihood) depends on a particular group of professional engineers who do what I do. No, we are not a secret society of illuminati or lizard people. We do, however, work mostly in the background, away from the spotlight, and ensure the correct operation of many parts of our modern, digital world.

So what do we call ourselves? That’s often the first challenge I face when someone asks me what I do for a living. My job titles, and the titles of my peers, have changed over the years. Some of us were called “operators” back in the early days of room-sized computers and massive tape drives. When I graduated college and got my first job I was referred to as a “systems administrator” or “sysadmin” for short. These days, the skill sets required to keep increasingly varied and complex digital infrastructure functioning properly have become specialized enough that this is almost universally considered a distinct field of engineering rather than just “administration” or “operations”. We often refer to ourselves now as “systems engineers,” “systems architects,” “production engineers,” or to use a term coined at Google but now used more widely, “site reliability engineers.”

What does my job entail, specifically? There are scripting languages, automated configuration and server deployment packages, common technology standards, and large amounts of monitoring and metrics feedback from the complex systems that we create and work on. These are the tools we need to scale to handle growing populations of customers and increased traffic every day. This is a fairly distinctive skill set and engineering field. Many of us have computer science degrees (I happen to), but many of us don’t. Most of the skills and techniques I use to do my job were not learned in school, but through years of experience and an informal system of mentorship and apprenticeship in this odd guild. I wouldn’t consider myself a software engineer, but I know how to program in several languages. I didn’t write any of the code or design any of our website, but my team and teams like it are responsible for deploying that code and those services, monitoring their function, making sure the underlying servers, network, and operating systems function properly, and maintaining operations through growth and evolution of the product, spikes in traffic, and anything else unusual.
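To make one slice of that monitoring-and-feedback loop concrete: a recurring question in alerting is whether a stream of health-check results is bad enough to page a human. This is a simplified, hypothetical sketch of mine (not TripAdvisor’s actual tooling):

```python
# Hypothetical sketch: decide whether recent health-check results
# warrant paging an engineer. "results" is a list of booleans, oldest
# first, where True means the check passed.

def should_page(results, max_consecutive_failures=3):
    """Return True if the newest results end in a failure streak long
    enough to look like a real outage rather than a transient blip."""
    streak = 0
    for ok in reversed(results):   # walk from the newest sample backwards
        if ok:
            break
        streak += 1
    return streak >= max_consecutive_failures

# A single blip does not page; a sustained failure does.
print(should_page([True, True, False, True]))    # prints False
print(should_page([True, False, False, False]))  # prints True
```

Alerting packages like Nagios apply a similar idea (a check must fail several consecutive attempts before entering a hard alert state) precisely so transient blips don’t wake anyone up.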

“Skill shortage”

Back in 2001, I was working for the University of Illinois at Urbana-Champaign in the campus information services department (then known as CCSO) as a primary engineer of the campus email and file storage systems. Both were rather large by 2001 standards, with over 60,000 accounts and about a terabyte (omg, a whole terabyte!) of storage. This was still the early part of the exponential growth of the internet and digital services. I remember a presentation by Sun Microsystems in which they stated that given the then-current growth rates and server/admin ratios, by 2015 about ⅓ of the U.S. population would need to be sysadmins of some sort. They were probably right, but the good news is that since then our job has shifted mostly to finding efficiencies and making the management of systems and services of ever-growing scale and complexity possible without actual manual administration or operation — so the number of servers each admin can manage has gone up dramatically. Back then it was around 1 admin for every 25 servers in an academic environment like UIUC. Today, common ratios in industry range from a few hundred to a few thousand servers per engineer. I don’t think I’m allowed to say publicly what the specific numbers are here at TripAdvisor, but they are within that range. But we still need new engineers every day to meet needs as the internet scales, and as we need to find even more efficiencies to continue to crank that ratio up.

Where do the production operations engineers come from? Many of us are ex-military, went to trade schools, or came to the career through a desire to tinker unrelated to college training. As I stated earlier, while a degree in computer science helps a lot in understanding the foundations of what I do, many of the best engineers I’ve had the pleasure of working with were art, philosophy, or rhetoric majors. In hiring, we look for people who have strong problem-solving desires and abilities, people who handle pressure well, who sometimes like to break things or take them apart to see how they work, and people who are flexible and open to changing requirements and environments. I believe that, because for a while computers just “worked” for people, a whole generation of young people in college, or just graduating, never had the need or interest to look under the hood at how systems and networks work. In contrast, while I was in college, we had to compile our own Linux kernels to get video support working, and do endless troubleshooting on our own computers just to make them usable for coding and, in some cases, daily operation on the campus network.

So generally speaking, recent college graduates trained in computer science have tended to gravitate towards the more “glamorous” software engineering and design positions, and continue to. How do we attract more interest in our open positions, and in the career as an option as early as college? I don’t have a good answer for that. I’ve asked my peers, and many of them don’t know either. I was thrilled to go to SREcon 2014 in Santa Clara earlier this month (https://www.usenix.org/conference/srecon14), and for the most part the panelists, engineers, and managers there from all the big Silicon Valley outfits (Facebook, Google, Twitter, Dropbox, etc.) face the same problem. It’s admittedly even worse for us at TripAdvisor as an east coast company fighting against the inexorable pull of Silicon Valley on the talent pool here.

One thing I’ve come to strongly believe, and which I think is becoming the norm in industry operations groups, is that we need to broaden our hiring windows more. We need to attract young talent and bring in the young engineers, who may not even be strictly sure that they want an operations or devops career, and show them how awesome and cool it really is (ok, at least I think it is). To this end, I gave a talk at MIT a little over a year ago on this subject — check out the slides and notes here. I didn’t know that this is what I wanted to do for sure until about a week before I graduated from MIT in 2000. I had two post-graduation job offers on the table, and I chose a position as an entry-level UNIX systems administrator at Massachusetts General Hospital (radiation oncology department, to be more specific) over the higher paying Java software engineering job at some outfit named Lava Stream (which as far as I can tell does not exist anymore). Turns out I made the right decision. The rest of my career history is in my LinkedIn profile (https://www.linkedin.com/profile/view?id=8091411) if anyone is curious. No, I’m not looking for a new job.

“Now (and forever) Hiring”

So, if anyone reading this is entering college, or just leaving college, or thinking of a career change, give operations some consideration. Maybe teach yourself some Linux skills. Take some online classes if you have time or think you need to. Brush up your python and shell scripting skills. At least become a hobbyist at home and figure out some of those skills you see in our open job positions (nagios, Apache, puppet, Hadoop, redis, whatever). Who knows, you might like it, and find yourself in a career where recruiters call you every other day and you can pretty much name your own salary and company you want to work for.
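If you want a concrete starter exercise in that hobbyist vein (my suggestion, unrelated to any of the job listings below): write the classic quick-triage script that summarizes HTTP status codes from an Apache-style access log. A minimal sketch, with made-up log lines:

```python
# Starter exercise: count HTTP status codes in Apache common-log-format
# lines, where the status code is the second-to-last whitespace field.
# The sample lines below are fabricated for illustration.

from collections import Counter

def status_counts(log_lines):
    """Return a Counter mapping HTTP status code -> number of requests."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[-2].isdigit():
            counts[parts[-2]] += 1
    return counts

sample = [
    '1.2.3.4 - - [10/Oct/2014:13:55:36] "GET / HTTP/1.1" 200 2326',
    '1.2.3.4 - - [10/Oct/2014:13:55:37] "GET /x HTTP/1.1" 404 209',
    '5.6.7.8 - - [10/Oct/2014:13:55:38] "GET / HTTP/1.1" 200 2326',
]
print(status_counts(sample))   # Counter({'200': 2, '404': 1})
```

Ten lines of Python like this, pointed at a real log, is exactly the kind of tool ops folks reach for during an incident before anything fancier.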

And specifically for my group at TripAdvisor? We manage the world’s largest travel site’s production infrastructure. It’s a fast-moving speed-wins type of place (see my previous blog post) and we are hiring. Any of this sound interesting to you? Even if you don’t think you fit any of the descriptions below but might be up for some mentoring/training and maybe an internship or more entry-level position, tweet at me or drop me an email and we’ll see what we can do. See you out there on the internets.

Job Opening: Technical Operations Engineer

TripAdvisor is seeking a senior-level production operations engineer to
join our technical operations team. The primary focus of the technical
operations team is the build-out and ongoing management of TripAdvisor’s
production systems and infrastructure.

You will be designing, implementing, maintaining, and troubleshooting
systems that run the world's largest travel site across several
datacenters and continents. TripAdvisor is a very fast-growing and
innovative site, and our technical operations engineers need the
flexibility and knowledge to adapt and respond to challenging and
novel situations every day.

A successful candidate for this role must have strong system and network
troubleshooting skills, a desire for automation, and a willingness to
tackle problems quickly and at scale all the way from the hardware and
kernel level, up the stack to our database, backend, web services and
code.

Some Responsibilities:
- Monitoring/trending of production systems and network
- General linux systems administration
- Troubleshooting performance issues
- DNS and Authentication administration
- Datacenter, network build-outs to support continued growth
- Network management and administration
- Part of a 24x7 emergency response team

Some Desired Qualifications:
- Deep knowledge of Linux
- Experienced in use of scripting and programming languages 
- Experience with high traffic, internet-facing services
- Experience with alerting and trending packages like Nagios, Cacti
- Experience with environment automation tools (puppet, kickstart, etc.)
- Experience with virtualization technology (KVM preferred)
- Experience with network switches, routers and firewalls

Job Opening:  Information Security Engineer

TripAdvisor is seeking an Information Security Engineer to join our 
operations team. You will be charged with responsibility for the 
overall information security of all the systems powering our sites, the 
information workflow for the sites and operational procedures, as well 
as access to information from offices and remote work locations.

Do you have the talent to not only design, but actually implement and 
potentially automate firewall, IDS/IPS configuration changes and manage 
day-to-day operations? Can you implement and manage vulnerability scans, 
penetration tests and audit security infrastructure?

You will be collaborating with product owners, product engineers, and 
operations engineers to understand business priorities and goals, company 
culture, development processes, and operational processes in order to 
identify risks, and then work with teams on designing and implementing 
solutions or mitigations. You will be the information security expert in 
the company who tracks and monitors new and emerging vulnerabilities, 
exploitation techniques, and attack vectors, and who evaluates their 
impact on services in production and under development. You will provide 
support for audit and remediation activities. You will be working hands-on 
on our production systems and network equipment to enact policy and 
maintain a secure and scalable environment.

Desired Skills and Experience

* BSc or higher degree in Computer Science or equivalent desired
* Relevant work experience (10+ yrs) securing systems and infrastructure
* Prior experience in penetration testing, vulnerability management, forensics
* Prior experience with IDS/IPS and firewall configuration/management required
* Experience with high traffic, Internet-facing services
* Ability to understand and integrate business drivers and priorities into design
* Strong problem solving and analytical skills
* Strong communication skills with both product management and engineering
* Familiar with OWASP Top-10
* Relevant certifications (CISSP, GIAC Gold/Platinum, and CISM) a plus


The 2014 MIT Mystery Hunt – Alice Shrugged


Winning the hunt in 2013

Calling the winning team in 2014

So we ran the MIT Mystery Hunt this year (our dubious award for winning it last year). The experience is pretty well bookended by the two pictures above: one of Laura answering our phone in 2013 to hear that we had answered the final meta successfully and won, and one of Laura calling the winning team in 2014 (One fish, two fish, random fish, blue fish) to congratulate them on answering the final meta successfully. I have no idea how or where to begin describing what it was like to run the hunt this year. My team ran it in 2004, but I was out of town in Champaign, IL at the time and played no part in that little misadventure. This time around, I was on the leadership committee, in charge of hunt systems, IT, and infrastructure.

Thank You

First I’d like to thank the other members of the systems team. James Clark just about single-handedly wrote a new Django app and framework (based loosely on techniques and code from the 2012 Codex hunt), which we will be putting up on GitHub soonish; we hope that other teams can make use of it in the future — we have dubbed it “spoilr”. It worked remarkably well and has several innovative features that I think will serve hunt well for years to come. Joel Miller and Alejandro Sedeño were my on-site server admin helpers; they kept things running and further adjusted code (though only slightly) during the hunt. Josh Randall was our veteran team member on call in England (which helped, because he was available during shifted hours for us). And Matt Goldstein set up our HQ call center and auto-dial app with VoIP phones provided by IS&T.

With the exception of only a few issues (which I’ll try to address below), from the systems side of things the hunt ran extremely well. We were the first hunt in a while to actually start on time, and the first in a while to have the solutions and hunt solve statistics posted by wrap-up on Monday. This hunt had a record number of participants and a record number of teams (both higher than we planned for when designing and testing the system), making our job all the more difficult. And of course, I’d like to join everyone else in thanking the entire team of Alice Shrugged that made this hunt possible. It was great working with you all year and pulling off what many feel was a fantastic hunt.

Hunt Archive and Favorites

To look at the actual hunt, including all puzzles and solutions, and some team and hunt statistics, go to the 2014 Hunt Archive. My favorite puzzles (since everyone seems to be asking) were: Callooh Callay World and Crow Facts. Okay, I guess Stalk Us Maybe was pretty neat too.

Apologies

First of all, I’d like to apologize on behalf of the systems team for the issues with Walk Across Some Dungeons. This puzzle performed artificially well during our load test, but load-test clients are far better behaved than actual people on a real network. There were several socket-locking and connection-starvation issues with the puzzle, even after we spent all day (and night) Friday parallelizing it onto as many as 7 virtual servers. Eventually we patched the code to handle dropped connections better and to be more multi-threaded within each app instance, and by late Saturday night it was working much better. The author has since ported it to JavaScript, and it should work fine well into the future. Lesson to future teams: don’t try to write your own socket-handling code, or any server-side code for that matter, that has to interact with the hunters. The issues surrounding the puzzle (lag, and the frequent reset back to stage 1, requiring many clicks to get back to your position) affected every team equally for the first day and were not fixed until at least the top 5 teams had already solved the puzzle in its “difficult” state. So luckily this was not a fairness issue; it just made a puzzle a LOT harder than it should have been.
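For the curious, the shape of that fix can be sketched in a few lines. This is an illustrative example of mine, not the actual puzzle code: one thread per connection plus a per-socket timeout, so a single stalled client can’t starve everyone else the way a naive single-threaded accept loop can:

```python
# Illustrative pattern (not the hunt's code): a threaded TCP echo server.
# Each connection runs in its own thread, and a socket timeout drops
# stalled clients instead of letting them tie up the server.

import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.settimeout(5.0)   # a stalled client gets dropped
        try:
            data = self.request.recv(1024)
            if data:
                self.request.sendall(data)
        except socket.timeout:
            pass                       # slow client no longer blocks others

# ThreadingTCPServer spawns a thread per accepted connection, so the
# accept loop itself is never blocked by a slow handler.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as conn:
    conn.sendall(b"ping")
    reply = conn.recv(1024)

server.shutdown()
server.server_close()
print(reply)   # b'ping'
```

Python’s standard library handles the accept loop, threading, and cleanup here; that was the real lesson, since hand-rolling that plumbing is where the starvation bugs crept in.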

Second, there was an interesting issue with our hunt republishing code. At times during the hunt there were errata (remarkably few, actually) and some point-value and unlocking-system changes (more on this below) that required a full republish of the hunt for all teams. This is not unusual. However, with the number of teams and hunters, and the pace at which our call handlers (particularly Zoz, the queue-handling machine) were progressing teams through the hunt on Friday in particular, this created a race condition. If any puzzle unlocks happened during one of these republishes, they would be reverted to the state they were in when the publish started. Since a republish takes a looooooong time for this many teams and puzzles, a number of teams noticed “disappearing” puzzles and unlocks on Friday while we were publishing our first errata, and then later Friday night when we changed the value of Wonderland train tickets to slow the hunt down a bit. We alleviated this slightly in the spoilr code by making the republish iterate team by team, rather than taking its snapshot of the whole hunt at the beginning and then applying it to everyone. By later on Friday, though, teams had enough puzzles unlocked that even a single team’s republish risked coinciding with a puzzle unlock, so we simply froze the handling of the call-in queue while we were doing these. As a note for future teams, this could probably be fixed by making the republish more transactional in the code.
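To make the race concrete, here’s a toy model of mine (far simpler than the real spoilr code, which used Django): unlock state shared between call handlers and a republish, where each team’s snapshot is taken under a lock, per team, so an unlock landing mid-republish isn’t clobbered by a stale whole-hunt snapshot:

```python
# Toy model of the per-team snapshot fix. Names and structure are
# invented for illustration; the real system stored this in a database.

import threading

class HuntState:
    def __init__(self):
        self.lock = threading.Lock()
        self.unlocked = {"team_a": {"puzzle1"}}   # team -> unlocked puzzles
        self.published = {}                       # team -> last rendered state

    def unlock(self, team, puzzle):
        # Called by call handlers, possibly while a republish is running.
        with self.lock:
            self.unlocked[team].add(puzzle)

    def republish(self):
        # The fix: take a fresh snapshot of EACH team's state under the
        # lock, rather than one whole-hunt snapshot before a long publish
        # that can overwrite unlocks granted while it runs.
        for team in list(self.unlocked):
            with self.lock:
                snapshot = set(self.unlocked[team])
            self.published[team] = snapshot

state = HuntState()
state.unlock("team_a", "puzzle2")   # an unlock arriving around republish time
state.republish()
print(sorted(state.published["team_a"]))   # ['puzzle1', 'puzzle2'] -- nothing lost
```

In a Django app the same idea falls out of wrapping each team’s render in a database transaction instead of a thread lock, which is essentially the “more transactional” note above.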

Release Rate, Fairness, and Fun

On this subject I cannot pretend to speak for the whole team (nor can anyone, probably), but I will share what I experienced and what I think about it. Many medium- and small-sized teams have written to congratulate us on running a hunt that was fun for them and that encouraged teams to keep hunting, in some cases over 24 hours after the coin was found. On the flip side, some medium- and large-sized teams were a bit disappointed in the later stages of the hunt, when puzzles unlocked at a slower rate (particularly once all rounds were unlocked), leaving them with fewer puzzles to work on and creating bottlenecks to finishing the hunt. One of our overriding principles in writing this hunt was to make it fun for small teams and fair for large teams. The puzzle release mechanism in the MIT round(s) was fast, furious, and fun. Something like 30 teams actually solved the MIT Meta, got to go on the MIT runaround, and received the “mid-hunt” reward. From the beginning of our design, the puzzle release mechanism for the wonderland rounds (particularly the outer ones) was constrained to release puzzles in an already-opened round based only on correct answers in that round. The number of answers in a round it took to open up the next set of puzzles in that round, and the order in which puzzles were released, were designed to require focused effort on a smaller number of opened puzzles in order to progress to the point where those metas were solvable. This rate was, incidentally, tweaked to be somewhat lower on Friday night (but only for the two rounds no team had opened yet) in a concerted effort to make sure the coin wasn’t found as early as 6-8pm on Saturday. Coming from a large team myself, I have seen the effect of the explosion of team size on the dynamics of Mystery Hunt. This is an issue that teams will face for years to come, and everyone may choose to solve it a different way.
But once again, our overriding goal was to make the hunt fun for small teams, and fair for large teams, and I think we did just that.
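Mechanically, the release rule described above boils down to something like the following. The function name and the numbers are invented for illustration; the actual 2014 tuning was per-round and isn’t reproduced here.

```python
def unlocked_puzzles(round_order, solved_in_round, initial=3, per_solve=2):
    """Return the slice of this round's puzzles a team can currently see.

    round_order:     the round's puzzles in their fixed release order
    solved_in_round: how many of this round's puzzles the team has answered
    initial:         puzzles visible when the round first opens (made up)
    per_solve:       extra puzzles released per correct answer (made up)
    """
    visible = initial + per_solve * solved_in_round
    return round_order[:min(visible, len(round_order))]
```

Because unlocks depend only on correct answers in the same round, a team can’t brute-force a round open by grinding elsewhere; they have to focus on the puzzles in front of them, which is exactly the pressure the design intended.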

Architecture Overview

For the curious, and for those running the hunt next year, our server setup was fairly simple. We had one backend server which ran a database and all of the queue-handling and hunt HQ parts of the spoilr software (in django via mod_wsgi and apache). There were two frontend servers which shared a common filesystem mount with the backend server, so all teams saw a consistent view of unlocks. Each team gets its own login and home directory, which controls their view of the hunt when the spoilr software updates symlinks or changes the HTML files there. The spoilr software on the frontends handled answer submissions and contact-HQ requests, among some other things, but they were mostly just static web servers. We didn’t need two for load reasons; we just had both running for redundancy in case one pooped out over the weekend. However, splitting the dynamic queue operations and Hunt HQ dashboards off from the web servers that 1500+ hunters were hitting was a necessity. Each of the frontends also acted as a full streaming replica of the database on the backend server, and we had a failover script ready so the hunt could continue even if the backend server and database failed somehow. There was also a streaming database replica and hunt server in another colocation facility in Chicago, in case somehow both datacenters that the hunt servers were in failed or lost internet connectivity. I’d like to thank Axcelx Technologies for providing us with hosting and support, and would recommend them to anyone looking for a reasonably priced virtual server or colocation provider.
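As an illustration of the failover script’s job (assuming a PostgreSQL-style streaming replica; the post doesn’t name the database or the exact commands, so treat the tool names below as placeholders):

```python
import subprocess

def backend_alive(host, port=5432, timeout=3):
    # Cheap liveness probe; assumes PostgreSQL and the pg_isready tool,
    # which are illustrative assumptions, not necessarily what we ran.
    result = subprocess.run(
        ["pg_isready", "-h", host, "-p", str(port), "-t", str(timeout)],
        capture_output=True,
    )
    return result.returncode == 0

def maybe_failover(check=backend_alive, promote=None, host="backend", retries=3):
    """Promote the local streaming replica only after repeated failed probes,
    so a single dropped packet doesn't trigger a split-brain promotion."""
    if any(check(host) for _ in range(retries)):
        return False  # backend still answering; do nothing
    if promote is None:
        promote = lambda: subprocess.run(["pg_ctl", "promote"], check=True)
    promote()
    return True
```

The important design point is requiring several consecutive failed probes before promoting: a replica that promotes itself too eagerly is worse than a few minutes of downtime.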

As far as writing the hunt goes, we used the now-standard “puzzletron” software, made a lot of improvements to it, and hope to get those pushed back up to gitweb for the next team to start writing with. We had dev and test instances of puzzletron running all year so we could deploy our new features quickly and safely as our team came up with neat new things to track with it. Beyond that, we set up a MediaWiki wiki and a phpBB bulletin board, as well as several mailman mailing lists and a jabber chat server (which nobody really used). As a large team, collaboration tools have always been very important to us in trying to win the hunt, and they were even more important in writing it. In retrospect, we probably should have taken more time to develop an actual electronic ticketing system (or find one to use) for the run-time operations of the hunt. Instead we ended up using paper tickets which were passed back and forth between characters, queue handlers, and the run-time people. Since this hunt had so many interactions, and so many teams which needed to get through them, this got clumsy, and some tickets were dropped or not checked off early in the hunt (I’m very sorry if this happened to any teams and delayed unlocks of puzzles/rounds early on).

In Closing

In closing, I had a great time working on the hunt. I can’t say how great it would have been to go on it, since sadly I did not get to. But, hearing the generally positive comments from everyone thus far, I’m glad we didn’t screw it up :) The mailing list aliceshrugged@aliceshrugged.com will continue to work into the future, and I look forward to getting some of our code and documentation posted up for random to perhaps use and further improve upon next year, and for other teams to carry on the tradition for many years to come.

6 Comments

Oreo Insanity


I think someone must have slipped the product development team at Nabisco some meth.

Oreos are an amazing food product. They are, in fact, probably my favorite cookie (I’m a fan of the golden variety). But what on earth would possess the makers of the greatest sandwich cookie in the universe to go on this recent insane quest to make as many different new varieties as possible?

Okay, I kind of get the motivation for candy corn Oreos. It was Halloween, after all, and that was a novelty. But looking through Amazon, one is assaulted with all sorts of Oreo insanity. Aren’t they worried about brand dilution? Not to mention that some of these flavors sound even more potentially vile than candy corn:

Candy Cane Oreos

Winter Oreos

Gingerbread Oreos

Candy Corn Oreos

Mega Stuf Oreos

Watermelon Oreos

Cool Mint Oreos

Banana Split Oreos

Berry Burst Ice Cream Oreos

Halloween Oreos

Peanut Butter Oreos

Triple Double Oreos

Triple Double Neapolitan Oreos

1 Comment

MIT Mystery Hunt 2013 (a.k.a. The Misery Hunt)


Happy June everyone. Back in January, I had the privilege of being on the winning team of the 2013 MIT IAP Mystery Hunt (pretty sure I already mentioned that a couple of posts ago). For those unaware, we were a huge team (~100+ people), and the name of our team was the full text of Ayn Rand’s Atlas Shrugged. Whenever we’d communicate with hunt HQ, we’d continue reading the text until they made us stop (or let us stop). I, among others, thought this would be a neat, clever idea — maybe even “cute.”  In reality, however, it just added to the pain and misery of what turned out to be an already painful and misery-drenched (but also somewhat fun) hunt. Tired-sounding readings of the rambling Randian prose quickly became the leitmotif of the weekend.

Other people have already shared their opinions and experiences about the hunt (google “2013: the year the mystery hunt broke” for an example).  The organizers (Manic Sages) have, in my opinion, already gotten more than enough criticism dumped on them for putting together the longest weekend in hunt history, one that almost ended with a winner declared by decree or draw — which would have been disastrous for the 2013 hunt, as well as for the concept and tradition of the Mystery Hunt going forward.

The word “grueling” was the one word I used most when people asked me what the hunt was like this year.  I’ve participated in the hunts with this team consistently for the past 5 years and on and off going back to 2004 (the last year our team won) and earlier.  I spent four years at MIT, with all of the all-nighters, failing grades, and frustration that that entails.  But I’ll be damned if the 2013 mystery hunt wasn’t one of the most intellectually demoralizing experiences of my life. Is that such a bad thing? In retrospect,  I’m not so sure. Challenging experiences and “rolling up one’s sleeves and getting to work” (sometimes by doing insane statistical analysis on endless streams of random numbers) are ways in which we attain personal growth — right?  Maybe if it was just as difficult, but shorter.  Maybe if it had some more fun and games mixed in.  Maybe if we didn’t decide to do that stupid Atlas Shrugged thing.  Maybe then the hunt would have been FUN as well as just grueling — and wouldn’t have left me with hunt PTSD.  Seriously, I’m not alone on my team in having had nightmares up to a week after hunt about still doing the hunt, or still needing to solve a meta.

I’m not going to bore everyone with detailed stories of extremely difficult puzzles with perhaps one-too-many “a-ha!” moments necessary to solve, or the detailed methods our team uses to keep fresh shifts of solvers moving in and out of the room, taking naps, and ultimately winning the battle of attrition that the last 24 hours of the weekend became.  But I will recount my tale of how the hunt ended (from my perspective).

The beginning of the end was 8pm Sunday night (already more than 16 hours past the point at which the 2012 mystery hunt had ended). We already knew by that point that this was going to be a hunt for the ages.  My team had that glazed-over, deer-in-the-headlights look that comes from being up for 20-30+ hours in some cases doing extreme mental gymnastics. An email came in from HQ reading: “Our honest estimate of hunt’s end is Monday at 9AM given what we’ve seen of solving rates on our puzzles so far.”  At this time, I was on my way out to a room to sleep for a quick 4 hours (or until hunt ended).  It turns out that not only was hunt nowhere near ending, but with an end-time of 9AM, they were predicting it would surpass the 2004 hunt for the all-time duration record.  And who had written the 2004 hunt?  Our team — then known as “French Armada” (because the team wanted to wear funny hats, from what I hear).  When I woke up to my alarm 4 hours later, my disillusionment with the mystery hunt had turned into a sort of prideful anger.  How dare they assume that their hunt would be even more un-defeatable than the one we’d written (somewhat poorly) a decade earlier?  At around 2am, the late-night shift of fresh puzzlers dug in, and I, for one, was hoping to prove the Manic Sages’ prediction wrong and keep our dubious record of “longest hunt ever.”

But it was not to be. At around 6AM, I went back in for another brief nap. 9AM came and went, and another shift of freshly-napped hunters came in. French Armada’s 2004 hunt-length record had fallen.  Free answers to puzzles were being handed out every 20 minutes now to help draw things to a close.  The requirements for finishing hunt were changed so that one full meta-puzzle (out of 5 total) could be skipped entirely. For those unfamiliar with the standard mechanics of a mystery hunt, there are generally puzzles in “rounds”, and then for each round (or group of rounds), the answers of the puzzles plug into a “meta” puzzle.  Once all meta-puzzles are complete, the team is eligible to go on a final “runaround” (involving, literally, running around and solving more puzzles) and ultimately win the hunt.  So, eliminating an entire meta-puzzle requirement was a big deal — and up until this year, unheard of (at least by me).

At some point on Monday morning, one of our freshmen (Lauren Herring) had been sitting in the same spot working on one of the metas (“The Enigma”) for what seemed like 12 hours.  She’d be sitting there when I left for a nap, and she’d be in the same spot, wild-eyed and turning those same infernal rolls of paper when I got back to the room hours later.  And so eventually, that meta got solved.  And that left us needing exactly one more meta to get to the runaround.  One of them (“Rubik”) seemed totally impossible and we had made little progress on it at all from what I could see.  The other one (“Indiana Jones”) was getting churned on slowly at a table by puzzlers including our “old guard” — the bleary-eyed Mark Feldmeier and Zoz Brooks — with the whole team cheering them on.  Actually it was more nervous pacing, drinking coffee, and watching vs. audible “cheering.”  We were getting close by 10-11am, and calling HQ regularly for hints and clues.  We had even called in to verify a partial answer for it — “hey are the first 8 letters of the answer this?” (this is also unheard of in a mystery hunt) — only to get rebuked.

And then something weird happened.  Our team phone rang, and Manic Sages’ HQ was on the other end.  The inimitable Laura Royden was “manning” the phone at the time and dealing with team-wide organization (we call it “puzzle bitch”ing).  It turns out that an offer of settlement/surrender had been made and was being brokered by the Manic Sages in the interest of ending hunt.  The terms were to stop hunting now, and whatever team was deemed the “furthest ahead” by the Sages would be declared the winner.  After being at it for over 70 hours at that point, it was a tempting prospect.  The team huddled together, and Laura told HQ that we’d call them back in “a few minutes”.  How far along could any other team possibly be if they were willing to make this offer? We heard rumors of other competitive teams giving up and packing in to go home by this point, leaving us as one of the few teams insane and stubborn enough to still be trying to win.  We knew we only needed one more meta, but were, for the moment, scratching our heads on what we were doing wrong with “Indiana Jones.” A couple of the more senior puzzlers on our team (Dan Katz and Erin Rhode, at least from what I remember) immediately leaned towards rejecting the offer.  Then we got another call from HQ, and they told us that they’d made a terrible mistake and that our partial answer check from earlier was actually on the right track. At that point the choice was clear. Not only did we know that we were the farthest ahead, but we also knew we were potentially only minutes away from winning.  To accept the other team’s surrender at that point would have perhaps been merciful, but wouldn’t have been a good thing for the 2013 hunt, or for hunt as a tradition and concept (in my opinion at least).
Laura shouted clearly into the phone: “No, we will not accept your offer!” And, sure enough, about 10 minutes later, she called back with the complete correct answer to our final required meta-puzzle and accepted congratulations that we had, for all intents and purposes (with the exception of the runaround), won the hunt.  Below is a picture commemorating that very moment.

424244_910363591908_1978574625_n

Once that was done, Manic Sages actually asked if we wanted to do the full runaround, or just be handed the coin and declared the winners at that point.  Staying true to tradition, even though it was almost noon on Monday at that point, we elected to make them put on the entire runaround for us.  At 3:30pm on Monday, a full 75 hours after the hunt began, the coin was found by the small subgroup of our team that was still awake (this did not include me, as I collapsed shortly after the final answer was called in and I knew we had won).

So now what?  Now our team is writing and running the 2014 IAP MIT Mystery Hunt, that’s what.  The experience of last year (and echoes of our 2004 hunt) sort of lend a feeling of “there but for the grace of god, go I” to the whole thing.  Each and every one of us knows (or should know) that it is very much possible, with the best of intentions and the smartest and most experienced people, to write a hunt that turns out to be “bad” or even maybe “a disaster.”  That’s kind of a lot of responsibility, isn’t it?  But alas, we will do our best.  Without further ado, I’ll wrap up here and introduce the board of directors of the 2014 IAP MIT Mystery Hunt.  For continued missives from our team, and guest writers talking about hunt, please visit our blog at http://mysteryhunt.wordpress.com/.  And oh yeah, good luck in 2014 everyone!

DSC_0967 copy

Galen Pickard (Executive Producer)

Anand Sarwate (Finance)

Erin Rhode (Director)

DSC_0992 copy

Benjamin O’Connor (IT and Infrastructure)

MysteryHunt_6

Pranjal Vachaspati (Operations and Logistics)

DSC_0970 copy

Laura Royden (Theme)

DSC_0971 copy

Harvey Jones (Quality Control)

3 Comments

New Job Observations: Farewell Harmonix, Hello TripAdvisor.


I know it’s April already, but happy new year everyone!

For those not in the know, I got a somewhat unexpected new job prospect (and offer, which I accepted) at the end of 2012. Since then, I’ve been a senior member of the technical operations team at TripAdvisor.

TripAdvisor is the world’s largest travel site, with over 100 million reviews, and over 100 million unique users per month. For people keeping track, this is the third company I worked for during the year 2012, and all three have been mentioned on The Office (Linden Lab [Second Life], Harmonix [Guitar Hero / Rock Band], and now Tripadvisor [check out the Schrute Farms episode]). However, it’s not just popularity or “hipness” that led me to shift around.

I like to tell people (and recruiters) that I have four rules for picking a place to work:

  • Must not be generally evil or tending towards evil (in my opinion) — This rules out most banks, the pharmaceutical industry, any petrochemical company, and probably currently most of Google and Facebook.
  • Must be a profitable venture — I’m too old to play the startup risk game.
  • Must be accessible to my apartment in Boston via public transportation commute of <30 minutes — I don’t own a car, don’t want one, and I’m not moving anywhere.
  • Operations and Systems must be critical to the core business and of the highest priority — My job is best executed when it has the highest respect and attention of the company and management (immediate as well as upper).

It was that last one that I forgot about when I ended up at Harmonix. After being at Linden for 4+ years, I could feel myself falling into the crotchety grizzled BOFH sysadmin role. Come to think of it, that probably happens to anyone in my field after a few years in an organization. Spending a year at Harmonix was a great chance to broaden my horizons, relax, and experience new perspectives on things. As I stated in an earlier blog post, I loved working there, and I do miss the place, people, and incredibly fun things happening in their awesome Central Square office. At TripAdvisor, we’re still in the business of providing joy to people. Rather than by selling some of the best video games, this time it’s by helping folks plan and take vacations.

Very similar to my time at Linden Lab, when I told people that I worked at Harmonix (makers of the Rock Band and Dance Central franchises) the first response was usually “wow, that’s really cool.” However, the second response was more often than not, “are they still relevant? What are they working on now?” While it’s true that the heyday of plastic instruments (and maybe console gaming in general, according to some naysayers) has passed, I’m still rooting for the folks over there, and I happen to know that they are still a vital, awesome independent studio with the best people and some blow-your-mind projects in the pipeline. If I were still there, I’d be hustling alongside them doing my best to keep up and push forward the state of game network interaction and back ends. That being said, the effort that game developers (particularly independents) put into network features, operations, and backends is decreasing over time. And it should be. Great games are great because of the focus on art, gameplay, story, and other intangibles. Console manufacturers and third-party contractors can now be brought on to do the job of multi-player matchmaking and scoreboard databases, letting game makers stick to making awesome games and to fostering and maintaining player communities — both things that Harmonix has done and will continue to do very well.

What drew me out to TripAdvisor (other than the folks I already know who work there — hi Laura and Drew!) was the scale. Honestly, I missed the excitement and challenges of running a huge infrastructure. At its peak, Second Life consisted of three data centers, 12,000+ servers, and received a new rack of 40 servers or so every couple of weeks. TripAdvisor isn’t quite that big infrastructure-wise (although we have 5 times as many employees), but we serve 2 billion ads a year, and are peaking at 600k web requests per minute (and growing tremendously still year-over-year). The company has a weekly release cycle, an innovative and freewheeling engineering culture, and an unofficial motto of “speed wins.”

At first, being a somewhat methodical systems engineer, the concept of putting velocity in front of “correctness” scared me a little bit. I’ve focused on things like proper cabling, thorough documentation, long planning cycles, enforcing automation prior to production, eliminating waste, etc. Here, though, I quickly learned that it’s important to keep moving and to cut a little slack to the folks that came before me for bad cabling, some missing documentation, or leaving a half dozen underutilized or unused servers around (sometimes literally powered-off in the racks or on the floor) while buying new ones in a hurry. If everyone takes the extra time (myself included) to do things the absolute correct way, we’ll lose our competitive advantage and then I’d be out of a job. So yeah, speed does win.

At this point, I’d be remiss if I didn’t offer you all potential jobs here. So, visit TripAdvisor Careers, find something you want to do, and drop me a line if I know you — I’d love to give a few hiring referrals, and yes we are hiring like crazy as the company expands!

1 Comment

MIT.edu and IS&T Fail


So, my team won the 2013 MIT IAP Mystery Hunt this past weekend.  More on that in another post though. 

On my way into work today, after sending some emails about mystery hunt infrastructure, I started thinking about something, and it seriously pissed me off.  Our team, and other teams running mystery hunt in recent years, have been unable or unwilling to use MIT network and systems infrastructure to run the activity, and have instead needed to use private and other funding sources to host our hunting and collaboration tools with other internet hosting providers. This morning, I realized that our team should be able to have this event hosted at MIT next year, and that’s the angle I want to take when I start politely talking to the new leadership at IS&T. I just can’t find out who any of them are right now, or their email addresses, since their website is down. Not that email would necessarily get to them anyway. Even before the hunt, I was working on an email to send and post to open forums about these things, but now that the weekend is over and I’m facing the daunting task of running the systems next year for our team, here goes:

An open letter to the Acting Director of MIT IS&T (if any person exists), and the MIT Administration:

Why does the MIT Mystery Hunt need to be hosted at EC2 or get sponsorship and infrastructure from VMware, Google, or Rackspace in the first place? MIT is still, in my opinion, the world’s preeminent engineering institution, yet its inability to host something as relatively mundane as a student-run puzzle hunt activity (yes the largest in the world, but still it’s just a puzzle hunt on a web site) in 2014 would be an absolute embarrassment.

Currently, IS&T’s web site itself is down, email delivery between MIT and the rest of the world is spotty, The Tech’s web site is inaccessible, 3down (the Institute’s site-outage notification site) is itself down, and most other MIT-related and MIT-hosted web sites other than the front page are also inaccessible. The director of IS&T has resigned (apparently not as a result of these issues, The Tech reports — I’d link to the article but, well, you know, the site’s down).

The administration and IS&T will surely blame the DDoS (distributed denial of service) attack and Anonymous (the amorphous organization out there on the web organizing these attacks) for all of this. Yet a site like the Westboro Baptist Church’s has been the target of attacks for weeks now, and none of their hateful websites are down, or have been mangled, compromised, or vandalized as badly as MIT’s have. Several other websites and companies (some of which I have worked for or currently work for) are also regularly targets of DDoS attacks, and yet remain generally accessible and organized in the face of even the worst full-frontal internet assaults. IS&T’s response to the DDoS attack has been, from external appearances and my experiences on campus this weekend at least, worse than the attack itself: continued vast service outages, intermittent detachment of MIT from email systems around the world, and zero effective communication with customers, departments, and students as far as I can tell. DDoS attacks are a fact of life on the internet. They should, like Anonymous itself, be respected but above all expected.

At MIT, departments like the Broad Institute, Media Lab, CSAIL, etc. all have split off from MIT’s network and computing infrastructure because of IS&T’s apparent perennial failure as a service organization worthy of MIT and the people that work and study there.  Here’s an anecdotal example of the kind of failures that these departments and organizations expect:

When I was a systems administrator at IS&T, installing what passed for a small supercomputer into an IS&T server room (hosted for the Department of Biology, if memory serves) caused a fire, a power outage, and a rushed redesign of the power infrastructure, followed by several more power outages.

Universities like UIUC, the state university systems of California and Florida, and the Universities of Wisconsin and Ohio all have well-known, fully operational technology incubators and “startup factories” on their campuses, connected and serviced through their network services, infrastructure, and hosting and IT departments. As the birthplace of so many startups, ideas, and technologies, it’s shameful that something like this apparently cannot exist on the MIT network under the umbrella of IS&T in its current form. Is this because of the current administration and management’s short-sightedness, entrenchment, technological incompetence, or a combination of all of the above?

With my team winning the 2013 MIT Mystery Hunt, we are already starting to look towards the 2014 Hunt and the network and computing services it will require. By engaging with the new leadership at IS&T, our team should be able to use the actual MIT infrastructure to give us what we need and want for a successful activity that showcases MIT to all of the world. We want to be able to have this event hosted on MIT’s network. Anything else should be an embarrassment to whoever is in charge of IS&T as well as the rest of the Institute’s administration.

-Benjamin O’Connor

  • Senior Systems Engineer, Tripadvisor (Formerly at Harmonix, Linden Lab, UIUC, NSA, and MIT IS&T)
  • Systems Infrastructure Manager, MIT Mystery Hunt Team <full text of Atlas Shrugged>
  • MIT Class of 2000

1 Comment