Engineering a message in Vegas

LAS VEGAS – Last week, ESPN was part of a big production at an event with an audience of thousands. With a huge center stage framed by oversized video screens in the background, the applause was loud, the music was louder, and the crowd was entertained by an array of world-class performers.

This sounds like nothing new for ESPN, except this event had almost nothing to do with sports. Most in the audience knew more about Thomas Watson than Bubba Watson. This was not a boxing match, or the ESPYs, or even the NBA Summer League.

It was the IBM Impact Global Conference, an annual gathering of technology business leaders and self-admitted computer geeks who came to discuss and learn about the latest advancements in business and technology.

One of the conference’s keynote speakers was Manny Pelarinos, Director of Engineering for ESPN Digital Media, who leads the engineering team behind ESPN.com. Manny was there to explain to everyone how ESPN’s mission to serve sports fans was the driving force behind a redesign of ESPN.com and its focus on personalization.

Manny described several challenges, not the least of which is the fact that at any given time ESPN.com must be ready to serve about 10 million fans generating more than 10,000 requests per second. That's 10,000 requests every second of every day, 24/7.

Do the rough math and we're talking hundreds of billions of requests a year from fans on ESPN.com seeking a personalized experience. Pretty mind-boggling.
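At 10,000 requests per second, the totals add up fast. A quick back-of-the-envelope calculation (class and variable names here are just illustrative):

```java
public class RequestMath {
    public static void main(String[] args) {
        long perSecond = 10_000L;                    // rate cited in the article
        long perDay = perSecond * 60 * 60 * 24;      // seconds in a day
        long perYear = perDay * 365;
        System.out.println("Per day:  " + perDay);   // 864,000,000
        System.out.println("Per year: " + perYear);  // 315,360,000,000
    }
}
```

That's roughly 864 million requests a day, and over 315 billion a year.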

And the solution for delivering on all those requests is equally mind-boggling. Here's how Manny describes it:

“When we started the personalization effort for ESPN.com, our biggest challenges were scalability and performance. Personalized content is unique for each sports fan, and with so many requests every minute of the day, traditional web page caching starts to break down quickly.

“So we designed the GRID — a massive memory store of all our users’ favorites. Their favorite sports, teams, players, etc. The GRID consists of many servers with hundreds of Java Virtual Machines that run in tandem to achieve super fast reads and writes, as well as hundreds of gigabytes of available memory.

“The system is completely fault tolerant, which means that if any server goes offline, another server auto-magically comes online to replace it.

“Best of all, we can add or remove servers to scale on demand at runtime. If we need another 10 servers, we just spin them up, and we get more capacity.”
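Manny's description boils down to a familiar pattern: hash each user's ID to pick the in-memory node that owns that user's favorites, so reads and writes spread across many servers instead of hitting one. The article doesn't detail the GRID's internals, so the sketch below is purely illustrative (the class and method names are hypothetical, not ESPN's actual code; the real system runs on many JVMs rather than in-process maps):

```java
import java.util.*;

// Illustrative sketch of a hash-partitioned in-memory favorites store.
// Each "node" stands in for a server/JVM holding a slice of the user data.
public class FavoritesGrid {
    private final List<Map<String, List<String>>> nodes = new ArrayList<>();

    public FavoritesGrid(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) {
            nodes.add(new HashMap<>());
        }
    }

    // Hash the user ID to find the node that owns this user's data.
    private Map<String, List<String>> nodeFor(String userId) {
        int idx = Math.floorMod(userId.hashCode(), nodes.size());
        return nodes.get(idx);
    }

    public void saveFavorites(String userId, List<String> favorites) {
        nodeFor(userId).put(userId, favorites);
    }

    public List<String> loadFavorites(String userId) {
        return nodeFor(userId).getOrDefault(userId, List.of());
    }

    public int nodeCount() {
        return nodes.size();
    }
}
```

A production grid like the one Manny describes goes further: consistent hashing, so that adding 10 servers at runtime moves only a fraction of the keys, and replica nodes that take over automatically when a server goes offline, which is the fault tolerance he mentions.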

Again, that boggles the mind, especially for those of us without computer engineering degrees. But the thousands in the crowd at the Impact Conference understood and applauded.

Last week, ESPN was recognized not only as the worldwide leader in sports, but also as a world-class leader in digital engineering.

  • Joseph

    Great use of IBM’s WebSphere eXtreme Scale!