Early 2000s server porn (uiuc.edu mail backend)


Saw this link posted over the weekend:

http://www.detritus.org/mike/gc/
And it reminded me of what things looked like back at the University of Illinois while I was there (2001-2005).  I posted a good “war story” about this beast several years ago:

http://www.intotheweeds.org/2007/03/into_the_weeds_circa_2002.html
But here are some pictures. Those are MTI 2300 and 8300 arrays. Each of those bays is only 4.5–9GB max, and the whole rack of 2300s held less than a TB. 60,000+ email accounts, pure sendmail frontends with special directory lookup plugins (for UIUC’s custom “ph” directory service), and that whole rack of Sun Solaris beasts to run POP, IMAP, and unix home directory storage for the whole campus.  Good times:



πTZ


I get questions.

My wife and I are in the middle of remodeling a 19th-century Victorian in Cambridge, MA (see http://rebuildingwheneverland.wordpress.com for that story).  Folks ask things like “where did you two meet?”, “where did you learn to use all of these tools and do all of this building stuff?”, “what was life like at MIT?”

I am a putzen.  MIT class of 2000.  I lived at East Campus, specifically the hall known as “putz”, “PTZ”, “πTZ”, or Second West, and this is an attempt to explain what that means.

Here’s a panoramic picture of H204, the room I lived in during most of college.  This picture was taken the year after I graduated, after I had passed it on to the esteemed Mr. Ryan Williams (a.k.a. “breath”):

[Panoramic photo of room H204]

On August 21st, 1996, I arrived from the snowbound hellscape that was my childhood home of Rochester, New York, and walked into the student center at MIT in Cambridge, MA.  Back then, when you got to MIT you didn’t know where you were going to live, and classes didn’t start for another two weeks.  This was a wonderful time called “R/O”, or Campus Rush / Orientation.  All of the undergraduate dorms and frats (yes, you could move right into a frat as a freshman at MIT then — this was before some unfortunate idiot drank himself to death in 1997 and ruined everything for everyone else) competed for the freshmen, and we got to tour around, get free food, and pick where we’d live.

I had a “temp” room in a nice, clean, ordinary, boring dorm about a 20-minute walk from main campus.  I immediately fell in love with a rough-and-tumble, less maintained, cheaper-to-live-in dorm that was closer to classes, officially called “MIT East Campus Alumni Memorial Housing” but more generally known as East Campus, or just EC.  However, it was not my first choice in the housing lottery (or even my second).  I didn’t think I was weird or quirky enough to live there, and honestly it was a little bit intimidating for a 17-year-old kid from the suburbs, not to mention a little bit dirty and grubby.

They build roller coasters in the courtyard.  The walls are covered with hand-painted murals done by residents every year.  Two of the ten “halls” allowed smoking (and still do).  Several even let students bring cats!  Students regularly build elaborate “lofts” in their rooms out of scrap wood, allowing them to sleep on an elevated platform while having more studying and living space underneath.  They blow stuff up in the courtyard on a semi-regular basis.

Here’s the EC “dorm rush” video from last year.  Please note that pretty much none of the stuff in this video was done just for the video.  This is footage of regular dorm activities like campus rush, spring picnic, fred fest, and regular everyday merriment:

Here’s an older one featuring an actual roller coaster and showing that really not much has changed in the past 10 years:

Fate ended up smiling upon me, and I got assigned to East Campus in the housing lottery.  A few days later, as all of us new residents were gathering around on the benches and tables in the courtyard, a group of 5 or 6 of us coalesced and started talking.  We would all end up becoming friends for the next four years (some of us for life), and one of them would eventually become my wife.

But hang on, I haven’t even gotten to putz yet. During rush at MIT, not only did we get to pick which dorm we lived in; once we got to EC, we also got to (and this is still the case) pick which hall we wanted to live in.  This means that each of the 10 halls ends up with its own unique personality.  As I mentioned earlier, some allow smoking, some allow cats.  Some are quiet, some are more mechanically or technically oriented, some are more into sports (intramural or otherwise), etc.  East Campus is a dorm made up of two separate parallel buildings (east and west) with 5 floors each.  Thus the floors are designated by a number (1-5, by floor) and East or West (based on which building the floor is in).

I chose, and ended up on, Second West.  For rush every year they line the hall with a tarp, put a bunch of mattresses at the end, get out a hose and some soap, and run an indoor slip-n-slide.  They did this 20 years ago when I was a wee frosh, and they still do it today.  Some traditions run deep.  Second West is also known affectionately by its nickname “putz” or πtz (the story of this nickname follows later).

Every year for the past 23 years (at least), on the Saturday or Sunday of the weekend before Thanksgiving, something special happens on 2nd West.  It’s called “putzgiving,” and I’ve been there every year for the last 11 of them (while I’ve been back living in Boston).  For the past 5 years I’ve made a turkey for it and helped to feed the roughly 60-80 current residents and alums who come back every year.  It’s a fabulous, delicious feast of food, camaraderie, and reminiscing.  At the putzgiving celebrating the somewhat arbitrary “20th year of putz” back in 2012, Mark Feldmeier, one of the older alums (even older than me), gave a brief speech marking the occasion and touching a little bit upon what makes putz “different”:

Putz isn’t a frat, but it bears some communal characteristics similar to one.  In fact, in the early days, the letters πTZ were put on signs and t-shirts to make fun of fraternities.  A regular pastime of the hall during old-time frat rush was to go out, steal the signs of real fraternities, and paint over them with πTZ in large letters.  But as the Big Chicken (Mark) says in that video, over the past 20 years things have changed a lot, but the community of putz remains.  There’s a longevity to it, and it’s remarkably cool that people come back for events, get together out of town, form businesses together, write songs, go to each other’s weddings, or even get married and have children themselves.

What else is there to say?

Always mechanically and/or computationally oriented, putz has built robots, constructed many in-room lofts of varying construction quality, and in one case even chopped a huge hole in a room wall to put in a fish tank (which I think might still be there).  In 1996 the oldest continually operating webcam sprang to life (https://intotheweeds.org/2012/11/22/a-look-back-in-time-golden-age-of-the-internet/).  In 2000 a big Linux server and sound system were built for music storage and playback, allowing remote queueing, organization, and streaming of music into the hall’s lounges and bathrooms.  In 2005 the Time Traveler Convention was held, and it even got mocked on SNL by Tina Fey.  In both 2004 and 2014, Putz/East Campus teams won the annual MIT Mystery Hunt.  Putz alums have been featured on Jeopardy (unfortunately losing) and BattleBots (also, unfortunately, losing) over the years as well.  We also have putzen who have proudly served in the military, in the priesthood, and in just about every field in between.

I’ll close this post with a bit of trivia: a message from Abe Farag explaining a bit about the origin and history of πTZ:

“…The 1994 class of 2nd west attracted some calm academic kids – Physics majors: Gunther, Rob, Joe, David C – who moved floors I think? (Oh my this was a long time ago)
But somehow in the 1995 class 2nd west attracted a cast of characters with some Spunk. I don’t know how that happened. Like the cosmos forming new stars out of the ether. Maybe a masterpiece is easier to create on a blank palette than one already crowded with some message.

The 1996 rush was even fun & then with the 1996 class a floodgate of fun rolled in. I have no idea how “PTZ” started. I did play a lot of hack hockey & did poke fun a lot at our D-level game. I’m honored to think I might have started PTZ – but I don’t think it was me.

If I called us Putzes it was just to be silly. Not to start a club. 

In early 1992 Gunther made a hall t-shirt that said “We’d make bad elves” with a drawing of an elf that had built a 3-armed doll & Santa hitting him. I think of that as a precursor of the feeling of us being Putzes, if not the exact name. We were so lame that we made t-shirts for our IM teams that made fun of how lame we were. But I’m sure that Mark yelling was a big part of the ethos of Putz. YO Mark.

However the name bubbled up, I think PTZ stuck because of the fun link to pseudo-Frat & rush, & because of the feeling of not having an identity at the time – of being an outcast floor – which we were at the time.

I also think it was 1993 Rush that some Brave & BOLD Putzen went to the great Killian dome RUSH kickoff event with a PTZ sign & PTZ t-shirts & like pied pipers led some incoming kids to ec. I think 1 tenet of PTZ is that Making Physical stuff leads to fun. That was a Pivotal moment for PTZ.

To all Yee Putz… enjoy the great legacy you are part of.
GO forth & Putz.
+Abe 
PTZ- 1994.”


The Work That Makes Civilized Life Possible (and finding the people to do it) [REPOST]


So in honor of Systems Administrator Appreciation Day, and because I have a new job in production systems at Athenahealth (more about that at the bottom of this post), I’m slightly modifying and reposting a blast from the recent past.

“So what exactly would you say you do here?”

I’ve flown out to remote locations and been on-site for the build-out and spin-up of three new production data centers within the last year. I’ve been present for load tests and at public launches of new video games’ online services, and at product and feature launches, to predict and solve system load issues from rushes of new customers hitting new code, networks, and servers. And yes, I’ve spent my share of all-nighters in war rooms and in server rooms, troubleshooting incidents and complicated failure events that have taken parts of web sites, or entire online properties, offline. I wasn’t personally involved in fixing healthcare.gov, but that team was made up of people I would consider my peers, including some who have been co-workers of mine in the past.

Do you use the internet? Ever buy anything online? Use Facebook? Have a Netflix account? Ever do a search on Duck Duck Go or use Google? Do you have a bank account? Do you have a criminal record, or not? Ever been pulled over? Have you made a phone call in the past 10 years? Is your metadata being collected by the NSA? Have you ever been to the hospital, doctor’s office, or pharmacy? Do you play video games? If you’ve answered yes to any of the above questions, then a portion of your life (and livelihood) depends on a particular group of professional engineers who do what I do. No, we are not a secret society of illuminati or lizard people. We do, however, work mostly in the background, away from the spotlight, and ensure the correct operation of many parts of our modern, digital world.

So what do we call ourselves? That’s often the first challenge I face when someone asks me what I do for a living. My job titles, and the titles of my peers, have changed over the years. Some of us were called “operators” back in the early days of room-sized computers and massive tape drives. When I graduated college and got my first job I was referred to as a “systems administrator” or “sysadmin” for short. These days, the skill sets required to keep increasingly varied and complex digital infrastructure functioning properly have become specialized enough that this is almost universally considered a distinct field of engineering rather than just “administration” or “operations”. We often refer to ourselves now as “systems engineers,” “systems architects,” “production engineers,” or to use a term coined at Google but now used more widely, “site reliability engineers.”

What does my job entail specifically? There are scripting languages, automated configuration and server deployment packages, common technology standards, and large amounts of monitoring and metrics feedback from the complex systems that we create and work on. These are the tools we need to scale to handle growing populations of customers and increased traffic every day. This is an unusual skill set and engineering field. Many of us have computer science degrees (I happen to), but many of us don’t. Most of the skills and techniques I use to do my job were not learned in school, but through years of experience and an informal system of mentorship and apprenticeship in this odd guild. I wouldn’t consider myself a software engineer, but I know how to program in several languages. I didn’t write any of the code or design any of our website, but my team and teams like it are responsible for deploying that code and those services, monitoring their function, making sure the underlying servers, network, and operating systems work properly, and maintaining operations through growth and evolution of the product, spikes in traffic, and anything else unusual.

“Skill shortage”

Back in 2001, I was working for the University of Illinois at Urbana-Champaign in the campus information services department (then known as CCSO) as a primary engineer on the campus email and file storage systems. Both were rather large by 2001 standards, with over 60,000 accounts and about a terabyte (omg, a whole terabyte!) of storage. This was still the early part of the exponential growth of the internet and digital services. I remember a presentation by Sun Microsystems in which they stated that given the then-current growth rates and server/admin ratios, by 2015 about ⅓ of the U.S. population would need to be sysadmins of some sort. They were probably right, but the good news is that since then our job has shifted mostly to finding efficiencies and making the management of systems and services of ever-growing scale and complexity possible without actual manual administration or operation — so the server/admin ratio has gone up dramatically since then. Back then it was around 1 admin for every 25 servers in an academic environment like UIUC. Today, the common ratios in industry range from a few hundred to a few thousand servers per engineer. I don’t think I’m allowed to say publicly what the specific numbers were at TripAdvisor, but they fell within that range. But we still need new engineers every day to meet needs as the internet scales, and as we need to find even more efficiencies to continue to crank that ratio up.

Where do production operations engineers come from? Many of us are ex-military, went to trade schools, or came to the career through a desire to tinker unrelated to college training. As I stated earlier, while a degree in computer science helps a lot in understanding the foundations of what I do, many of the best engineers I’ve had the pleasure of working with were art, philosophy, or rhetoric majors. In hiring, we look for people with strong problem-solving drive and ability, people who handle pressure well, who sometimes like to break things or take them apart to see how they work, and people who are flexible and open to changing requirements and environments. I believe that, because for a while computers just “worked” for people, a whole generation of young people in college, or just graduating college, never had the need or interest to look under the hood at how systems and networks work. In contrast, while I was in college, we had to compile our own Linux kernels to get video support working, and do endless troubleshooting on our own computers just to make them usable for coding and, in some cases, daily operation on the campus network.

So generally speaking, recent college graduates trained in computer science have tended to gravitate towards the more “glamorous” software engineering and design positions, and they continue to. How do we attract more interest in our open positions, and in the career as an option as early as college? I don’t have a good answer for that. I’ve asked my peers, and many of them don’t know either. I was thrilled to go to SREcon 2014 in Santa Clara last year (https://www.usenix.org/conference/srecon14), and to attend and present at SREcon 2015 this year. The discussion panels made clear that the engineers and managers there from all the big Silicon Valley outfits (Facebook, Google, Twitter, Dropbox, etc.) face the same hiring problems. It’s admittedly even worse for east coast companies like ours, fighting against the inexorable pull of Silicon Valley on the talent pool here.

One thing I’ve come to strongly believe, and which I think is becoming the norm in industry operations groups, is that we need to broaden our hiring windows. We need to attract young talent and bring in young engineers, who may not even be sure that they want an operations or devops career, and show them how awesome and cool it really is (ok, at least I think it is). To this end, I gave a talk at MIT a little over a year ago on this subject — check out the slides and notes here. I didn’t know for sure that this is what I wanted to do until about a week before I graduated from MIT in 2000. I had two post-graduation job offers on the table, and I chose a position as an entry-level UNIX systems administrator at Massachusetts General Hospital (in the radiation oncology department, to be more specific) over a higher-paying Java software engineering job at an outfit named Lava Stream (which, as far as I can tell, no longer exists). Turns out I made the right decision. The rest of my career history is in my LinkedIn profile (https://www.linkedin.com/profile/view?id=8091411) if anyone is curious. No, I’m not looking for a new job.

“Now (and forever) Hiring”

So, if anyone reading this is entering college, just leaving college, or thinking of a career change, give operations some consideration. Maybe teach yourself some Linux skills. Take some online classes if you have time or think you need to. Brush up your Python and shell scripting skills. At least become a hobbyist at home and pick up some of the skills you see in our open job positions (Nagios, Apache, Puppet, Hadoop, Redis, whatever). Who knows, you might like it, and you might find yourself in a career where recruiters call you every other day and you can pretty much name your own salary and the company you want to work for.
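
If you’re wondering what that kind of scripting actually looks like, here’s a tiny sketch of a Nagios-style disk check in Python. It only illustrates the plugin convention (print one status line, exit 0 for OK, 1 for WARNING, 2 for CRITICAL); the thresholds and mountpoint are made up for the example:

#!/usr/bin/env python
# Minimal Nagios-style check: warn/crit on root filesystem usage.
# Illustrative sketch only; thresholds and mountpoint are arbitrary.
import os
import sys

WARN, CRIT = 80, 90   # percent-used thresholds
MOUNT = "/"

def pct_used(path):
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    avail = st.f_bavail * st.f_frsize
    return 100.0 * (total - avail) / total

used = pct_used(MOUNT)
if used >= CRIT:
    print("DISK CRITICAL - %s at %.1f%% used" % (MOUNT, used))
    sys.exit(2)
elif used >= WARN:
    print("DISK WARNING - %s at %.1f%% used" % (MOUNT, used))
    sys.exit(1)
print("DISK OK - %s at %.1f%% used" % (MOUNT, used))
sys.exit(0)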

And specifically for my group at Athenahealth? We manage the production infrastructure for athenaNet. We are a cloud-based medical services company working to build the healthcare internet and to improve healthcare in the U.S. through our innovation. We are the #1 practice management system and the #2 electronic health record (EHR) system in the country according to the 2014 KLAS survey. The infrastructure my team runs is counted on and trusted by over 67,500 medical providers and millions of patients. Does any of this sound interesting to you? Even if you don’t think you fit any of the descriptions we currently have listed (like this one), but might be up for some mentoring/training and maybe an internship or a more entry-level position, tweet at me or drop me an email and we’ll see what we can do. See you out there on the internets.


It “snow” comparison


UPDATE: Here is a composite picture that tells part of the story. Click for the full-size image. On the left is last weekend (February 7th, 4:11pm); on the right is this morning (February 15th, 11:15am). The walls and mountains of snow just keep growing. This includes another 14″ or so added in the past 24 hours (after the blog post below was written).

[Composite photo of the snow: February 7th on the left, February 15th on the right]

Please excuse the pun in the title (or don’t). If you haven’t heard, Boston has gotten an unprecedented amount of snow over the past three weeks and will probably end up with about 70″ within a 30-day period.

I come from Rochester, New York. Depending on whose statistics you use, it is considered one of the snowiest cities in the U.S. (and is sometimes #1 on that list). So my thought has been: have I just been living outside of the snow belt for so long that what used to be no big deal in my normal winter experience is now completely bizarre to me?

Let’s start by looking at the current leaders of this season’s “snow bowl”. Boston is right there in between Buffalo, Syracuse, and Rochester; it is behind the total amount of snow that the snow-belt cities Buffalo and Syracuse had at this point last year, and about equal with Rochester. So yes, this is an unbelievably large amount of snow for Boston, but not unprecedented for some urban areas.

The statements so far from the mayor and from officials at the local mass transit authority (MBTA) indicate that the real issue here has been an inability to clear snow from streets and train tracks, and to maintain equipment that the snow and cold have damaged. Mass transit is completely shut down again tomorrow, and a snow emergency continues. This is a good move, in my opinion, because we need an extra day of cars and people not being out there so that the city can do something about removing the mountains of snow.

Re-opening the city so soon after the first storm two weeks ago was, in retrospect, what got us into the current mess. You can only shove so much snow aside before there’s just too much of it and the streets narrow to the point of being impassable. This is precisely the situation we were in for most of last week. Traffic was at a standstill as two-lane streets became one lane (or even half a lane), with parked cars and mountains of snow competing for road space. Let’s hope the crews make some good progress cleaning up the streets and sidewalks tomorrow while we’re all home from work. I also hope that the embattled MBTA can get its act together. However, the lack of budgetary attention paid to that agency by the state over the past few years, and the downright animosity from residents of the western suburban and rural parts of the state when faced with their tax dollars paying for “city” infrastructure, have left the agency without enough resources to remove all of the snow and repair and maintain equipment. By some estimates, last week about half of the trains on certain subway lines were out of service due to weather-related malfunctions.

So, good luck to them. And they should get a move on, because the current NOAA forecast discussion indicates a good chance of a potentially significant storm this coming Thursday, and another one Saturday into Sunday.

Now, back to my original question about 70″ in a month. I looked around for records of this happening in a city before (particularly a major city like Boston). I did find a few similar incidents in smaller cities — all in the “snow belt” region of Buffalo-Rochester-Syracuse, of course. Back in 1985, Buffalo had 68 inches of snow in December. That’s less than we’re facing here now, and in a city that’s smaller and probably has a much easier time with snow removal and street clearing (not to mention no large subway/trolley system to also keep clear). In December 2001, the city of Buffalo had a record 83 inches of snowfall, with a maximum of 44 inches on the ground at any one point. That seems to indicate that there was some sort of thawing in between — a luxury we have not had here in Boston over the past three weeks. Syracuse also had a 64-inch December, and supposedly a 97-inch January back in 1966 that I really want to find and read some more about. But other than those few anomalies? I couldn’t find a month with more than 52 inches of snow for Buffalo, Rochester, or Syracuse.

So yeah, 70″ in less than a month is very, very rare for *any* place — even the snowiest cities in this country that deal with blizzards regularly. It’s certainly unprecedented for a major city with a multi-mode mass-transit system and a population of over 640,000 (4.5 million in the “greater Boston” MSA). In other words, I haven’t gone soft. This really is a whole lot of snow.


Ranking The Months From Best To Worst


From best to worst:

1. September

2. April

3. October

4. August

5. July

6. June

7. May

8. November

9. January

10. December

11. March

12. February

Agree?  Disagree?  Discuss!


Gmail Password Leak Update


Still thinking two-factor auth for Google (and other accounts) isn’t worth the trouble? Might be time to think again. http://www.google.com/landing/2step/

WordPress.com News

This week, a group of hackers released a list of about 5 million Gmail addresses and passwords. This list was not generated as a result of an exploit of WordPress.com, but since a number of emails on the list matched email addresses associated with WordPress.com accounts, we took steps to protect our users.

We downloaded the list, compared it to our user database, and proactively reset over 100,000 accounts for which the password given in the list matched the WordPress.com password. We also sent email notification of the password reset containing instructions for regaining access to the account. Users who received the email were instructed to follow these steps:

  1. Go to WordPress.com.
  2. Click the “Login” button on the homepage.
  3. Click on the link “Lost your password?”
  4. Enter your WordPress.com username.
  5. Click the “Get New Password” button.

In general, it’s very important that passwords be unique for each account. Using the same…



Bought a House — See New Blog!


So, for those who haven’t been privy to the news, Kristy and I bought a house.  Rather than clog up this blog with that stuff though, I’ve started a new one:

http://rebuildingwheneverland.wordpress.com

First post is up with the basic story and some “before” pictures.  We close officially tomorrow afternoon, and moving day is September 23rd.  As you can see if you look at that blog and gallery, we’ve got a lot of work ahead of us!

House From The Street


The Work That Makes Civilized Life Possible (and finding the people to do it)


“So what exactly would you say you do here?”

I’ve flown out to remote locations and been on-site for the build-out and spin-up of three new production data centers within the last 10 months. I’ve been present for load tests and at public launches of new video games’ online services, and at product and feature launches, to predict and solve system load issues from rushes of new customers hitting new code, networks, and servers. And yes, I’ve spent my share of all-nighters in war rooms and in server rooms, troubleshooting incidents and complicated failure events that have taken parts of web sites, or entire online properties, offline. I wasn’t personally involved in fixing healthcare.gov late last year, but that team was made up of people I would consider my peers, including some who have been co-workers of mine in the past.

Do you use the internet? Ever buy anything online? Use Facebook? Have a Netflix account? Ever do a search on Duck Duck Go or use Google? Do you have a bank account? Do you have a criminal record, or not? Ever been pulled over? Have you made a phone call in the past 10 years? Is your metadata being collected by the NSA? Have you ever been to the hospital, doctor’s office, or pharmacy? Do you play video games? If you’ve answered yes to any of the above questions, then a portion of your life (and livelihood) depends on a particular group of professional engineers who do what I do. No, we are not a secret society of illuminati or lizard people. We do, however, work mostly in the background, away from the spotlight, and ensure the correct operation of many parts of our modern, digital world.

So what do we call ourselves? That’s often the first challenge I face when someone asks me what I do for a living. My job titles, and the titles of my peers, have changed over the years. Some of us were called “operators” back in the early days of room-sized computers and massive tape drives. When I graduated college and got my first job I was referred to as a “systems administrator” or “sysadmin” for short. These days, the skill sets required to keep increasingly varied and complex digital infrastructure functioning properly have become specialized enough that this is almost universally considered a distinct field of engineering rather than just “administration” or “operations”. We often refer to ourselves now as “systems engineers,” “systems architects,” “production engineers,” or to use a term coined at Google but now used more widely, “site reliability engineers.”

What does my job entail specifically? There are scripting languages, automated configuration and server deployment packages, common technology standards, and large amounts of monitoring and metrics feedback from the complex systems that we create and work on. These are the tools we need to scale to handle growing populations of customers and increased traffic every day. This is an unusual skill set and engineering field. Many of us have computer science degrees (I happen to), but many of us don’t. Most of the skills and techniques I use to do my job were not learned in school, but through years of experience and an informal system of mentorship and apprenticeship in this odd guild. I wouldn’t consider myself a software engineer, but I know how to program in several languages. I didn’t write any of the code or design any of our website, but my team and teams like it are responsible for deploying that code and those services, monitoring their function, making sure the underlying servers, network, and operating systems work properly, and maintaining operations through growth and evolution of the product, spikes in traffic, and anything else unusual.

“Skill shortage”

Back in 2001, I was working for the University of Illinois at Urbana-Champaign in the campus information services department (then known as CCSO) as a primary engineer on the campus email and file storage systems. Both were rather large by 2001 standards, with over 60,000 accounts and about a terabyte (omg, a whole terabyte!) of storage. This was still the early part of the exponential growth of the internet and digital services. I remember a presentation by Sun Microsystems in which they stated that given the then-current growth rates and server/admin ratios, by 2015 about ⅓ of the U.S. population would need to be sysadmins of some sort. They were probably right, but the good news is that since then our job has shifted mostly to finding efficiencies and making the management of systems and services of ever-growing scale and complexity possible without actual manual administration or operation — so the server/admin ratio has gone up dramatically since then. Back then it was around 1 admin for every 25 servers in an academic environment like UIUC. Today, the common ratios in industry range from a few hundred to a few thousand servers per engineer. I don’t think I’m allowed to say publicly what the specific numbers are here at TripAdvisor, but it is within that range. But we still need new engineers every day to meet needs as the internet scales, and as we need to find even more efficiencies to continue to crank that ratio up.

Where do production operations engineers come from? Many of us are ex-military, went to trade schools, or came to the career through a desire to tinker unrelated to college training. As I stated earlier, while a degree in computer science helps a lot in understanding the foundations of what I do, many of the best engineers I’ve had the pleasure of working with were art, philosophy, or rhetoric majors. In hiring, we look for people with strong problem-solving drive and ability, people who handle pressure well, who sometimes like to break things or take them apart to see how they work, and people who are flexible and open to changing requirements and environments. I believe that, because for a while computers just “worked” for people, a whole generation of young people in college, or just graduating college, never had the need or interest to look under the hood at how systems and networks work. In contrast, while I was in college, we had to compile our own Linux kernels to get video support working, and do endless troubleshooting on our own computers just to make them usable for coding and, in some cases, daily operation on the campus network.

So generally speaking, recent college graduates trained in computer science have tended to gravitate towards the more “glamorous” software engineering and design positions, and they continue to. How do we attract more interest in our open positions, and in the career as an option as early as college? I don’t have a good answer for that. I’ve asked my peers, and many of them don’t know either. I was thrilled to go to SREcon 2014 in Santa Clara earlier this month (https://www.usenix.org/conference/srecon14), and for the most part the discussion panels and the engineers and managers there from all the big Silicon Valley outfits (Facebook, Google, Twitter, Dropbox, etc.) face the same problem. It’s admittedly even worse for us at TripAdvisor, an east coast company fighting against the inexorable pull of Silicon Valley on the talent pool here.

One thing I’ve come to strongly believe, and which I think is becoming the norm in industry operations groups, is that we need to broaden our hiring windows. We need to attract young talent and bring in young engineers, who may not even be sure that they want an operations or devops career, and show them how awesome and cool it really is (ok, at least I think it is). To this end, I gave a talk at MIT a little over a year ago on this subject — check out the slides and notes here. I didn’t know for sure that this is what I wanted to do until about a week before I graduated from MIT in 2000. I had two post-graduation job offers on the table, and I chose a position as an entry-level UNIX systems administrator at Massachusetts General Hospital (in the radiation oncology department, to be more specific) over a higher-paying Java software engineering job at an outfit named Lava Stream (which, as far as I can tell, no longer exists). Turns out I made the right decision. The rest of my career history is in my LinkedIn profile (https://www.linkedin.com/profile/view?id=8091411) if anyone is curious. No, I’m not looking for a new job.

“Now (and forever) Hiring”

So, if anyone reading this is entering college, just leaving college, or thinking of a career change, give operations some consideration. Maybe teach yourself some Linux skills. Take some online classes if you have time or think you need to. Brush up your Python and shell scripting skills. At least become a hobbyist at home and pick up some of the skills you see in our open job positions (Nagios, Apache, Puppet, Hadoop, Redis, whatever). Who knows, you might like it, and you might find yourself in a career where recruiters call you every other day and you can pretty much name your own salary and the company you want to work for.

And specifically for my group at TripAdvisor? We manage the production infrastructure for the world’s largest travel site. It’s a fast-moving, speed-wins type of place (see my previous blog post), and we are hiring. Does any of this sound interesting to you? Even if you don’t think you fit any of the descriptions below, but might be up for some mentoring/training and maybe an internship or a more entry-level position, tweet at me or drop me an email and we’ll see what we can do. See you out there on the internets.

Job Opening: Technical Operations Engineer

TripAdvisor is seeking a senior-level production operations engineer to
join our technical operations team. The primary focus of the technical
operations team is the build-out and ongoing management of TripAdvisor’s
production systems and infrastructure.

You will be designing, implementing, maintaining, and troubleshooting
systems that run the world's largest travel site across several
datacenters and continents. TripAdvisor is a very fast-growing and
innovative site, and our technical operations engineers need the
flexibility and knowledge to adapt and respond to challenging and
novel situations every day.

A successful candidate for this role must have strong system and network
troubleshooting skills, a desire for automation, and a willingness to
tackle problems quickly and at scale all the way from the hardware and
kernel level, up the stack to our database, backend, web services and
code.

Some Responsibilities:
- Monitoring/trending of production systems and network
- General linux systems administration
- Troubleshooting performance issues
- DNS and Authentication administration
- Datacenter, network build-outs to support continued growth
- Network management and administration
- Part of a 24x7 emergency response team

Some Desired Qualifications:
- Deep knowledge of Linux
- Experience with scripting and programming languages
- Experience with high traffic, internet-facing services
- Experience with alerting and trending packages like Nagios, Cacti
- Experience with environment automation tools (puppet, kickstart, etc.)
- Experience with virtualization technology (KVM preferred)
- Experience with network switches, routers and firewalls

Job Opening:  Information Security Engineer

TripAdvisor is seeking an Information Security Engineer to join our 
operations team. You will be charged with responsibility for the overall 
information security of all the systems powering our sites, the 
information workflow for the sites and operational procedures, as well 
as access to information from offices and remote work locations.

Do you have the talent to not only design, but actually implement and 
potentially automate firewall, IDS/IPS configuration changes and manage 
day-to-day operations? Can you implement and manage vulnerability scans, 
penetration tests and audit security infrastructure?

You will be collaborating with product owners, product engineers, and 
operations engineers to understand business priorities and goals, company 
culture, and development and operational processes, in order to identify 
risks and then work with teams on designing and implementing solutions or 
mitigations. You will be the information security expert in the company, 
tracking and monitoring new and emerging vulnerabilities, exploitation 
techniques, and attack vectors, and evaluating their impacts on 
services in production and under development. You will provide support 
for audit and remediation activities. You will work hands-on on our 
production systems and network equipment to enact policy and maintain a 
secure and scalable environment.

Desired Skills and Experience

* BSc or higher degree in Computing Science or equivalent desired
* Relevant work experience (10+ yrs) in securing systems and infrastructure
* Prior experience in penetration testing, vulnerability management, forensics
* Prior experience with IDS/IPS and firewall config/management required
* Experience with high traffic, Internet-facing services
* Ability to understand and integrate business drivers and priorities into design
* Strong problem solving and analytical skills
* Strong communication skills with both product management and engineering
* Familiar with OWASP Top-10
* Relevant certifications (CISSP, GIAC Gold/Platinum, and CISM) a plus


Heartbleed, Internet Security and What it Means to You


For those not in the know, or just catching the news stories popping up today in mainstream media: we are in the midst of dealing with a very serious vulnerability that has been discovered in the foundation of secure data transmission on the internet. While many of the news stories out there are filled with ridiculous hyperbole, it would be dangerous to understate the criticality of what was discovered.

SSL (Secure Sockets Layer) is a protocol that lets your computer and other systems communicate across the internet with negotiated encryption (so people can’t snoop on your passwords and other sensitive transmitted information) and authentication (so you have a way of knowing that when you’re filling in information at your bank’s website, it actually is going to your bank’s website). Anytime you’re at a website with “https” in the URL, or with that little lock icon in your address bar, your communications are protected by this protocol, and code running in your browser and on the server you’re communicating with handles encrypting and decrypting the information flying through the tubes. The SSL protocol was initially developed by our old friends at Netscape in the mid-1990s, and it is what makes e-commerce and a good portion of our modern economy and communications possible.
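
To make that a little more concrete, here is a minimal Python sketch of the client side of that negotiation (the hostname is a stand-in, and this is an illustration rather than code from any real site): the default context verifies the server’s certificate chain and hostname (the authentication part), and everything written through the wrapped socket is encrypted on the wire (the encryption part).

import socket
import ssl

hostname = "example.com"  # stand-in host, for illustration only
context = ssl.create_default_context()  # trusted CAs; cert/hostname checks on

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        # By this point the handshake is done: certificate verified,
        # session keys negotiated. Everything below travels encrypted.
        print("negotiated:", tls_sock.version(), tls_sock.cipher())
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n"
                         b"Connection: close\r\n\r\n")
        print(tls_sock.recv(256))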

The Heartbleed Bug lets any attacker send a somewhat-carefully crafted message to a web server running this SSL code and get back arbitrary contents of that server’s memory. This is, sadly, not an uncommon type of bug (anyone who has ever programmed will recognize the horror and commonality of array bounds-checking and buffer-overflow problems; there’s a toy sketch of this one after the list below). On a web server, however, the things that get returned from memory when it is poked with this attack include:

  • The web server’s secret key – This is the key that’s used to actually encrypt all traffic. If you are running a secure website and were vulnerable to this bug, in my opinion, you should assume that your key has been compromised and generate a new key and certificate for encrypting future traffic. Fortunately, due to the “authentication” part of the SSL protocol, in order to take advantage of having a server private key and certificate, you’d have to launch a “man in the middle” attack — which takes a bit more work and often involves actually penetrating the network of your victim and/or hijacking internet DNS service for your victim. Still, this is a very bad thing to leak.
  • Sensitive Information – Usernames, passwords, and things filled out in forms and submitted to the website by other customers at the time the attack is launched will be present in the server memory in plaintext and can be retrieved. It’s not a bad idea to change your passwords regularly on websites anyway, but this bug might provoke you to go and do it right now.
  • Session Cookies – Many secure websites keep track of which users are logged in and which aren’t by sharing a little bit of data with you known as a “cookie.” It’s pretty much a magic number that your browser can present to the website to say “hey, it’s me again.” The web server will then look it up in its database and say “oh yeah, you logged in successfully a few hours ago, you’re still good.” This is how you can go to websites like facebook repeatedly and not have to enter your password over and over again. Other users’ session cookies will be present in the server memory in plaintext and can be retrieved by this attack. This is called “sidejacking” and is (in my opinion) the most frightening aspect of this bug. This blog has a more detailed example of using this vulnerability to do a sidejacking, and confirms that it is possible on at least one “fairly popular website”.
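
For the programmers in the audience, here is the promised toy sketch of the bug class. This is a deliberately simplified Python model, not the actual OpenSSL code: the “server” echoes back as many bytes as the client claims to have sent, without checking that claim against the payload it actually received.

# Toy model of the Heartbleed over-read (NOT the real OpenSSL code).
# Server memory holds the heartbeat payload plus unrelated secrets.
server_memory = bytearray(b"hello" + b"|secret_key=hunter2|session=abc123|")

def heartbeat(payload, claimed_len):
    """Echo the payload back, trusting the client's length field."""
    start = server_memory.find(payload)
    # BUG: no check that claimed_len <= len(payload), so a large value
    # reads past the payload into whatever else is in memory.
    return bytes(server_memory[start:start + claimed_len])

def heartbeat_fixed(payload, claimed_len):
    """The fix: drop heartbeats whose claimed length is a lie."""
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

print(heartbeat(b"hello", 5))    # honest client: b'hello'
print(heartbeat(b"hello", 40))   # attacker: payload plus leaked secrets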

This bug was disclosed in what we call a “responsible” manner. The researchers who were supposedly first to discover it did not release it to the public, but went directly to the OpenSSL project, and, in turn, large stakeholders were notified several weeks ago. It can be assumed that sites like Google, Facebook, and Akamai (which is good, because Akamai actually terminates a good portion of the web’s SSL — including TripAdvisor’s), and hosting providers like CloudFlare, had already repaired the vulnerability before yesterday. Sadly, it appears that the publication of the vulnerability on April 7th was earlier than hoped. The Linux distribution providers (Debian, CentOS, Red Hat, Ubuntu) who provide the OpenSSL packages that people like me actually install on our web servers were, in some cases, not providing a fix until late in the evening on the 7th — well after exploit code was in the wild. Furthermore, while I trust the researchers listed as the discoverers of this bug, I cannot (nor should anyone) be 100% certain that someone else hadn’t already discovered this problem and been attacking websites with it for months, stealing private keys, sensitive information, and credentials. So while it’s comforting that responsible disclosure and fast action on the part of the people who run the web sites you visit every day (people like me) have potentially mitigated the problem, the consequences of this vulnerability are (as you can see in the list above) far-reaching and somewhat frightening.

“So as a regular person, how worried should I be?” This is a common question a lot of people have been asking in the past day or two. I can’t pretend to understand your own risk and paranoia level, but I will attempt to convey how I feel. This is not a reason to stop trusting the little lock icon in your browser or the “https” in the URL. Bugs happen, sometimes information is leaked, and then they get fixed. Any damage done by this has already been done, and there’s no reason to yank out your ethernet cables and delete your facebook and twitter accounts. What you should do (and should be doing already) are some common-sense web security practices. If there’s a bright side to this bug, it’s that it may increase everyone’s awareness and get people to do the following:

  • Change your passwords: This is a no-brainer. If anyone gets your account information (through this vulnerability or any other means), it becomes useless once you change your passwords. I do this every few months.
  • Don’t use the same passwords on multiple sites: This is a common problem. Here at TripAdvisor the only thing your password protects is a bunch of travel reviews. You may think “oh whatever, big deal.” But research (and anecdotal evidence) shows that many people use the same exact password and username on many sites. The same username and password a user uses on TripAdvisor may very well be their gmail password, or the password for their online banking, or facebook or twitter. Websites get hacked all the time (none that I’m responsible for, of course, LOL [yes, I just typed LOL]) — sometimes without the public even knowing about it. So be smart. Even I don’t use a unique password for every website, but I have a set of four or five that I use for different classes of sites (social media password, email password, financial services password, shell login password, etc.).
  • Pick a good password: People have been saying this forever, but I will say it again. Quick story: when I was at UIUC running the campus email and UNIX shell/file-sharing services, we once ran a password cracker against our users’ accounts. The way these “brute force” attacks work is that an attacker attempts logins using dictionary words, names, and other likely strings. The most common password, by far, was actually password. Also among the top 5 were fuckyou, ncc1701, and various people’s names (obviously people choose their girlfriend’s/boyfriend’s/mother’s/father’s names as passwords), and in several dozen cases people actually used their usernames as their passwords. These days many websites will prevent you from using a weak password. So don’t be dumb. Pick a good password. It should not be dictionary-word based. Even replacing letters with numbers is easily decoded by brute-force attackers, so don’t think you’re fooling anyone (see the sketch after this list). Don’t use anyone’s name in your password either. And don’t even use a combination of dictionary words, names, and l33t-sp34k numbering. The brute-force password crackers are at least as smart as you and have a lot more time and computing power.
  • So as a website operator or systems engineer, what should I do? Act immediately if you have not already. If you run your own web server, upgrade your OpenSSL package right this goddamn minute. Also, since the library is loaded into memory at service-start time, you will need to restart your web server and any other service relying on the flawed library. To be safe, just reboot after you upgrade the package. There might also be code that was statically linked against the flawed library; in that case you’ll have to recompile and re-install it. Run common vulnerability scanners like nessus (or the other tools available) against everything you have running. If you have a website that’s hosted elsewhere, contact your hosting provider immediately and make sure they are patched and no longer vulnerable. Also, replace your SSL key and certificate. Some will say that this step is overly paranoid, and your hosting provider might even give you shit for insisting that they generate a new key and certificate for you. But as I stated above, while these researchers responsibly disclosed this bug, the possibility that it was out in the wild before cannot be dismissed.
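
And here is the sketch promised above of why l33t-sp34k substitutions don’t save a dictionary word. This assumes a generic salted-hash scheme purely for illustration (real crackers work against the system’s actual crypt/bcrypt hashes and use far bigger wordlists): the cracker simply generates all of the substitutions itself.

import hashlib
from itertools import product

# Hypothetical leaked entry: sha256(salt + password), for illustration.
salt = b"x1"
leaked_hash = hashlib.sha256(salt + b"p4ssw0rd").hexdigest()

SUBS = {"a": "a4@", "e": "e3", "o": "o0", "s": "s5$", "i": "i1!"}
wordlist = ["password", "ncc1701", "letmein", "monkey"]

def variants(word):
    """Yield every l33t-sp34k spelling of a dictionary word."""
    pools = [SUBS.get(c, c) for c in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

for word in wordlist:
    for guess in variants(word):
        if hashlib.sha256(salt + guess.encode()).hexdigest() == leaked_hash:
            print("cracked:", guess)  # finds p4ssw0rd almost instantly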

Timeline:

  • December 2011: Bug is introduced into the heartbeat function of the OpenSSL library
  • March 14th 2012: OpenSSL v1.0.1 released into the wild with the bug
  • March? 2014: Bug is discovered by some combination of Neel Mehta at Google Security and Matti Kamunen, Antti Karjalainen and Riku Hietamäki from Codenomicon and reported to the OpenSSL project
  • March-April 2014: NCSC-FI and OpenSSL work to notify some subset of stakeholders ahead of time of the vulnerability, apparently with a patch and a workaround
  • April 7th 2014: News breaks of the vulnerability and the NCSC-FI team needs to go public with it so the rest of the world can fix their web servers


The 2014 MIT Mystery Hunt – Alice Shrugged


Winning the hunt in 2013

Calling the winning team in 2014

So we ran the MIT Mystery Hunt this year (our dubious award for winning it last year). The experience is pretty well bookended by the two pictures above: one of Laura answering our phone in 2013 to hear that we had answered the final meta successfully and won, and another of Laura calling the winning team in 2014 (One fish, two fish, random fish, blue fish) to congratulate them on answering the final meta successfully. I have no idea how or where to begin describing what it was like to do this. My team ran the hunt in 2004, but I was out of town in Champaign, IL at the time and played no part in that little misadventure. This time around, I was on the leadership committee, in charge of hunt systems, IT, and infrastructure.

Thank You

First I’d like to thank the other members of the systems team. James Clark just about single-handedly wrote a new Django app and framework (based loosely on techniques and code from the 2012 Codex hunt), which we will be putting up on GitHub soonish; we hope other teams can make use of it in the future. We have dubbed it “spoilr”. It worked remarkably well and has several innovative features that I think will serve hunt well for years to come. Joel Miller and Alejandro Sedeño were my on-site server admin helpers; they kept things running and further adjusted code (although only slightly) during the hunt. Josh Randall was our veteran team member on call in England (which helped, because he was available during shifted hours for us). And Matt Goldstein set up our HQ call center and auto-dial app with VoIP phones provided by IS&T.

With the exception of only a few issues (which I’ll try to address below), from the systems side of things the hunt ran extremely well. We were the first hunt in a while to actually start on time, and the first in a while to have the solutions and hunt solve statistics up and posted by wrap-up on Monday. This hunt had a record number of participants and a record number of teams (both higher than we planned for when designing and testing the system), making our job all the more difficult. And of course, I’d like to join everyone else in thanking the entire team of Alice Shrugged that made this hunt possible. It was great working with you all year and pulling off what many feel was a fantastic hunt.

Hunt Archive and Favorites

To look at the actual hunt, including all puzzles and solutions, and some team and hunt statistics, go to the 2014 Hunt Archive. My favorite puzzles (since everyone seems to be asking) were: Callooh Callay World and Crow Facts. Okay, I guess Stalk Us Maybe was pretty neat too.

Apologies

First of all, I’d like to apologize on behalf of the systems team for the issues with Walk Across Some Dungeons. It performed artificially well during our load test, but load-test clients are far better behaved than actual real people on a real network. There were several socket-locking and connection-starvation issues with the puzzle, even after we spent all day (and night) Friday parallelizing it onto as many as 7 virtual servers. Eventually we patched the code to handle dropped connections better and to be more multi-threaded within each app instance, and by late Saturday night it was working much better. The author has since ported it to JavaScript, and it should work fine well into the future. Lesson to future teams: don’t try to write your own socket-handling code, or any server-side code for that matter, that has to interact with the hunters. The issues surrounding the puzzle (lag, and the frequent reset back to stage 1 requiring many clicks to get back to your position) affected every team equally for the first day and were not fixed until at least the top 5 teams had already solved the puzzle in its “difficult” state. So luckily this was not a fairness issue; it just made a puzzle a LOT harder than it should have been.
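
For future teams that ignore that advice anyway, the shape of the eventual fix is worth sketching: stop funneling every client through one accept-and-handle loop, give each connection its own thread, and set a timeout so dead connections get dropped instead of starving everyone else. A minimal Python illustration of the idea (not the actual puzzle code) using only the standard library:

import socketserver

class Handler(socketserver.StreamRequestHandler):
    timeout = 30  # drop dead connections instead of holding a slot forever

    def handle(self):
        # Each client gets its own thread, so one hung or slow
        # connection no longer blocks everyone else.
        for line in self.rfile:
            self.wfile.write(b"ack: " + line)

if __name__ == "__main__":
    # ThreadingTCPServer spawns a thread per accepted connection.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 4000), Handler) as srv:
        srv.daemon_threads = True  # don't hang shutdown on stuck clients
        srv.serve_forever()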

Second, there was an interesting issue with our hunt republishing code. At times during the hunt there were errata (remarkably few, actually) and some point-value and unlocking-system changes (more about this below) that required a full republish of the hunt for all teams. This is not unusual. However, with the number of teams and hunters, and the pace at which our call handlers (particularly Zoz, the queue-handling machine) were progressing teams through the hunt on Friday in particular, this created a race condition. If any puzzle unlocks happened during one of these republishes, they would be put back to the state they were in when the publish started. Since a republish takes a looooooong time for this many teams and puzzles, a number of teams noticed “disappearing” puzzles and unlocks on Friday while we were publishing our first errata, and then again later Friday night when we changed the value of Wonderland train tickets to slow the hunt down a bit. We alleviated this slightly in the spoilr code by making the republish iterate team by team, rather than take its snapshot of the whole hunt at the beginning and then apply it to everyone. By later on Friday, though, teams had enough puzzles unlocked that even a single-team republish risked coinciding with a puzzle unlock, so we simply froze the handling of the call-in queue while we were doing these. As a note for future teams, this could probably be fixed by making the republish more transactional in the code.
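
To make “transactional” concrete, here is a hedged sketch of the idea against a hypothetical schema (this is not the actual spoilr code): each team’s republish reads its unlock state and re-renders inside one database transaction, so an unlock that lands mid-publish either waits for the publish or cleanly wins afterward, instead of being overwritten by a stale snapshot.

import sqlite3

def republish_team(db_path, team_id, render):
    """Re-render one team's hunt pages from a consistent snapshot.

    `render` is whatever regenerates the team's HTML from the unlock
    rows read inside this transaction. Because the read and the render
    happen under one write lock, a concurrent unlock can't be clobbered
    by a snapshot taken before it committed.
    """
    conn = sqlite3.connect(db_path, isolation_level=None)  # manual txns
    try:
        conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
        rows = conn.execute(
            "SELECT puzzle_id FROM unlocks WHERE team_id = ?", (team_id,)
        ).fetchall()
        render(team_id, [puzzle_id for (puzzle_id,) in rows])
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    finally:
        conn.close()

Iterating team by team this way also keeps each transaction short, which is exactly the direction we pushed the code during the hunt.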

Release Rate, Fairness, and Fun

On this subject I cannot pretend to speak for the whole team (nor can anyone, probably), but I will share what I experienced and what I think about it. Many medium and small-sized teams have written to congratulate us on running a hunt that was fun for them and that encouraged teams to keep hunting, in some cases over 24 hours after the coin was found. On the flip side, some medium and large-sized teams were a bit disappointed in the later stages of the hunt, when puzzles unlocked at a slower rate (particularly once all rounds were unlocked), leaving them with fewer puzzles to work on and creating bottlenecks to finishing the hunt. One of the overriding principles in writing this hunt was to make it fun for small teams and fair for large teams. The puzzle release mechanism in the MIT round(s) was fast, furious, and fun. Something like 30 teams actually solved the MIT Meta and got to go on the MIT runaround and get the “mid-hunt” reward. From the beginning of our design, the puzzle release mechanism for the Wonderland rounds (particularly the outer ones) was constrained to release puzzles in an already-opened round based only on correct answers in that round. The number of answers in a round it took to open up the next set of puzzles, and the order in which puzzles were released in a given round, were designed to require focused effort on a smaller number of open puzzles in order to progress to a point where the metas were solvable. This rate was, incidentally, tweaked somewhat lower on Friday night (but only for the two rounds no team had opened yet) in a concerted effort to make sure the coin wasn’t found as early as 6-8pm on Saturday. Coming from a large team myself, I have seen the effect of the explosion of team size on the dynamics of Mystery Hunt. This is an issue that teams will face for years to come, and everyone may choose to solve it a different way. But once again, our overriding goal was to make the hunt fun for small teams and fair for large teams, and I think we did just that.

Architecture Overview

For the curious, and for those running the hunt next year, our server setup was fairly simple. We had one backend server which ran a database and all of the queue-handling and hunt HQ parts of the spoilr software (in Django via mod_wsgi and Apache). There were two frontend servers which shared a common filesystem mount with the backend server, so all teams saw a consistent view of unlocks. Each team gets its own login and home directory, which controls their view of the hunt when the spoilr software updates symlinks or changes the HTML files there. The spoilr software on the frontends handled answer submissions and contact-HQ requests, among other things, but they were mostly just static web servers. We didn’t need two for load reasons; we just had both running for redundancy in case one pooped out over the weekend. However, splitting the dynamic queue operations and Hunt HQ dashboards off from the web servers that 1500+ hunters were hitting was a necessity. Each of the frontends also acted as a full streaming replica of the database on the backend server, and we had a failover script ready so the hunt could continue even if the backend server and database failed somehow. There was also a streaming database replica and hunt server in another colocation facility in Chicago, in case somehow both datacenters that the hunt servers were in failed or lost internet connectivity. I’d like to thank Axcelx Technologies for providing us with hosting and support; I’d recommend them to anyone looking for a reasonably priced virtual server or colocation provider.
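
A failover script like that doesn’t need to be fancy. Here is a rough sketch of what such a script can look like; the hostnames, paths, and promotion command are all made up, and pg_ctl assumes a PostgreSQL-style streaming replica purely for illustration: poll the backend, require several consecutive failures to avoid flapping, then promote the local replica to primary.

import socket
import subprocess
import time

BACKEND = ("backend.example.org", 5432)  # hypothetical DB host/port
PGDATA = "/var/lib/postgresql/data"      # hypothetical data directory
FAILURES_NEEDED = 3                      # don't fail over on one blip

def backend_up(addr, timeout=2):
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if backend_up(BACKEND) else failures + 1
    if failures >= FAILURES_NEEDED:
        # Promote the local streaming replica to primary, then stop;
        # from here the frontends point at the promoted database.
        subprocess.check_call(["pg_ctl", "promote", "-D", PGDATA])
        break
    time.sleep(5)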

As far as writing the hunt goes, we used the now-standard “puzzletron” software and made a lot of improvements to it, which we hope to push back up to gitweb for the next team to start writing with. We had dev and test instances of puzzletron running all year, so we could deploy new features quickly and safely as our team came up with neat new things to track with it. Beyond that, we set up a MediaWiki wiki and a phpBB bulletin board, as well as several Mailman mailing lists and a Jabber chat server (which nobody really used). As a large team, collaboration tools have always been very important to us in trying to win the hunt, and they were even more important in writing it. In retrospect, we probably should have taken more time to develop an actual electronic ticketing system (or find one to use) for the run-time operations of the hunt. Instead we ended up using paper tickets, which were passed back and forth between characters, queue handlers, and the run-time people. Since this hunt had so many interactions, and so many teams which needed to get through them, this got clumsy, and some tickets were dropped or not checked off early in the hunt (I’m very sorry if this happened to any teams and delayed unlocks of puzzles/rounds early on).

In Closing

In closing, I had a great time working on the hunt. I can’t say how great it would have been to go on it, since sadly I did not get to. But, hearing the generally positive comments from everyone thus far, I’m glad we didn’t screw it up :) The mailing list aliceshrugged@aliceshrugged.com will continue to work into the future, and I look forward to getting some of our code and documentation posted up for random to perhaps use and further improve upon next year, and for other teams to carry on the tradition for many years to come.
