Monthly Archives: March 2010

Miscellaneous links

Here are a few stories and blog posts that I read in the past week:


depression as a system

A recent NY Times Magazine article about depression got me thinking.  Here is an excerpt:

The mystery of depression is not that it exists — the mind, like the flesh, is prone to malfunction. Instead, the paradox of depression has long been its prevalence. While most mental illnesses are extremely rare — schizophrenia, for example, is seen in less than 1 percent of the population — depression is everywhere, as inescapable as the common cold…

The persistence of this affliction — and the fact that it seemed to be heritable — posed a serious challenge to Darwin’s new evolutionary theory. If depression was a disorder, then evolution had made a tragic mistake, allowing an illness that impedes reproduction — it leads people to stop having sex and consider suicide — to spread throughout the population. For some unknown reason, the modern human mind is tilted toward sadness and, as we’ve now come to think, needs drugs to rescue itself.

The alternative, of course, is that depression has a secret purpose and our medical interventions are making a bad situation even worse. Like a fever that helps the immune system fight off infection — increased body temperature sends white blood cells into overdrive — depression might be an unpleasant yet adaptive response to affliction. Maybe Darwin was right. We suffer — we suffer terribly — but we don’t suffer in vain.

This isn’t OR, but it piques my interest in systems modeling and in making sure our paradigm is aligned correctly.

I recently attended a seminar by Dr. Paul Andrews at VCU about approaching depression as an evolved adaptation for analyzing complex social problems.  The seminar was part of the Science, Technology, and Society program here at VCU.  Dr. Andrews has a background in biology, law, and engineering, and he summarized some counterintuitive findings about depression that were echoed in the NY Times Magazine article.  His eclectic education and experience made for an excellent talk that touched on many aspects of depression.

Depression is complex and has many costs, and finding good ways to prevent and treat it has proven elusive.  In his talk, Dr. Andrews examined depression through the following paradigm: when we carve up nature the wrong way, we see disorder.  We shouldn’t conclude that nature is disordered; maybe we are simply looking at it the wrong way, and when we realign our view, we can see order.  Disorder hypotheses don’t explain the analytical processing style of depression or why some treatments work and others do not.

Existing paradigms for depression are not able to accurately describe the condition or predict which treatments actually work.  Some research has suggested a counterintuitive finding: disrupting depression through cognitive techniques (temporarily distracting depressed patients) actually makes depression worse, not better.  Writing therapy, in which depressed patients routinely reflect on why they are depressed, sometimes works.  When it does, depression temporarily spikes before eventually giving way to recovery: reflecting on the causes of the depression brings on a temporary relapse, but because the patient is gaining understanding and insight, they can use those tools to work through the relapse and head toward recovery.

Depression paradigms also have trouble explaining relapse rates.  Relapse rates are 23-35% for patients taking a placebo, but they are much higher (76% in one study) when SSRIs are discontinued, and they grow the longer the patient takes the medication.  This is troubling.

If depression is a disorder, why is it so common?  By clinical diagnostic criteria, 30-50% of people will be clinically depressed at least once during their lifetimes.  Is there something wrong with how we are defining depression?  Maybe it confers some competitive advantage (as suggested in the NY Times Magazine article)?  I have often wondered about this, and was glad that Andrews addressed the topic during his talk.

Side note: I hypothesize that depression might be like crime in this respect.  Being depressed, like being arrested, is something that many people experience at some point largely because we are measuring a cumulative, lifetime effect: even a modest per-year risk adds up, so a large proportion of people will be depressed (or arrested) at least once over a lifetime.
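To make the side note concrete, here is a minimal back-of-the-envelope sketch in Python of how a modest annual risk compounds into a large lifetime prevalence.  The 1.5% annual rate and the 40-year window are made-up numbers for illustration, and the independence and constant-rate assumptions are strong simplifications.

```python
# Toy illustration of the "cumulative effect": if the chance of a first
# depressive episode (or a first arrest) in any given year were a constant
# 1.5%, and years were independent (both strong simplifying assumptions),
# the lifetime risk over 40 adult years already lands in the 30-50% range.
annual_rate = 0.015
years = 40
lifetime_risk = 1 - (1 - annual_rate) ** years
print(f"{lifetime_risk:.0%}")   # roughly 45%
```

Under those assumptions the lifetime risk is already in the 30-50% range quoted above, even though depression would not be especially common in any single year.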

Certainly, many applications of OR have benefited from realigning the problem.  One example involves elevator waiting times (which I first heard about in one of Dick Larson’s talks): rather than reducing the waiting times themselves, mirrors were put up by the elevators to reduce waiting anxiety.  Complaints plummeted even though the slow elevators continued to make people wait.

Another example comes from the psychology of queues.  To deal with airport passengers’ complaints about inequities in how quickly they received their checked baggage after their flights, the walk from the disembarkation point to the baggage carousel was lengthened.  Everyone had to walk farther, but complaints dropped: the equitable (if inefficient) solution worked!

Have you realigned an OR problem?


NCAA roundup

A few articles about the NCAA tournament using math were in the news.

DePaul math professor Jeffrey Bergen illustrates how hard it is to fill out a perfect bracket using straightforward combinatorics.  You are less likely to randomly choose the winner of every game in a bracket (ignoring seeds) than to win the lottery: the tournament has 63 games, each with two potential winners, so there are 2^63 possible brackets, whereas a lottery ticket requires matching only 6 numbers chosen from about 40.  Of course, you improve your odds of correctly predicting all the tournament games by taking the seeds into account, but it’s still tough.  The winning brackets in online contests with millions of entries typically do not predict every game correctly.
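Here is a quick back-of-the-envelope comparison in Python.  The lottery format (matching 6 numbers drawn from about 40) is taken from the description above; the exact game Bergen used for his comparison may differ.

```python
from math import comb

# A 63-game bracket with two possible winners per game has 2**63 outcomes,
# while a pick-6 lottery over 40 numbers has "40 choose 6" possible tickets.
bracket_outcomes = 2 ** 63          # about 9.2 quintillion
lottery_tickets = comb(40, 6)       # 3,838,380

print(f"random bracket:  1 in {bracket_outcomes:,}")
print(f"lottery ticket:  1 in {lottery_tickets:,}")
```

A random bracket is roughly a one-in-9.2-quintillion shot, while the lottery ticket is merely one in a few million.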

An article in the business section of CBS News summarizes some hints that rely on mathematical tools (rather than on listening to the talking heads).  It suggests using online tools, including the OR model LRMC (developed by Joel Sokol and others).  The article also suggests playing the odds in the first round and choosing all #1 seeds to advance.  If you are filling out an office pool, it recommends playing some mind games, since you can increase your odds of winning by making different (but not unlikely) choices: pick the third or fourth overall favorite as your champion rather than the first or second overall favorite that most of your rivals are choosing.  It also advises guarding against availability bias by not favoring the teams that have played against your home-town favorite.
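The office-pool advice boils down to an expected-value argument: a slightly less likely champion that far fewer rivals have picked can be worth more to you than the consensus favorite.  Here is a toy sketch of that logic, assuming (unrealistically) that the pool is decided by the champion pick alone; the win probabilities and pick popularities are invented for illustration.

```python
# Toy model of a champion-only office pool: your expected payoff is the
# probability that your champion pick wins, divided by the expected number
# of entrants who share that pick (you split the pot with them).
# The numbers below are made up for illustration, not real estimates.
pool_size = 50
candidates = {
    # team: (probability of winning the title, fraction of the pool picking it)
    "favorite":    (0.25, 0.40),
    "second pick": (0.20, 0.30),
    "third pick":  (0.15, 0.10),
    "fourth pick": (0.12, 0.05),
}

for team, (p_win, popularity) in candidates.items():
    co_winners = 1 + popularity * pool_size   # you plus everyone sharing the pick
    print(f"{team:12s} expected share of pot: {p_win / co_winners:.4f}")
```

In this made-up example, the third and fourth picks have a higher expected share of the pot than the favorite, even though they are less likely to win the title.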

ESPN maintains a list of Giant Killers for predicting upsets, which is mainly useful in the early rounds of the tournament.  A giant killer is a “team that beats a tournament opponent seeded at least five spots higher in any round”.  There is a methodology behind ESPN’s approach: they have

zeroed in on team stats that correlate strongly with upset wins and losses in past tournaments. We’ve conducted multiple regression analyses, which essentially is a way to tell how strongly each member of a group of inputs (those stats) affects an output (giant-killing success or failure). Statistically, [Giant Killers] have:
• Low turnover rates and high rates of generating opponent turnovers.
• High offensive-rebound percentages.
• High 3-point scoring as a proportion of all points scored.
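ESPN describes this only as “multiple regression analyses” on team stats; since the outcome (upset or not) is binary, a logistic regression is one natural way to set it up.  The sketch below uses entirely made-up data just to show the shape of such an analysis, not ESPN’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of the kind of analysis ESPN describes, with made-up
# data: each row is a would-be giant killer's season stats, and the label
# says whether it pulled off the upset.
# columns: turnover rate, opponent turnover rate, offensive-rebound %, 3-pt share of points
X = np.array([
    [0.17, 0.23, 0.36, 0.34],   # low TOs, forces TOs, crashes the glass, shoots 3s
    [0.22, 0.18, 0.28, 0.22],
    [0.18, 0.22, 0.34, 0.31],
    [0.23, 0.17, 0.25, 0.20],
    [0.19, 0.21, 0.33, 0.30],
    [0.24, 0.16, 0.27, 0.19],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = pulled the upset, 0 = did not

model = LogisticRegression().fit(X, y)
print(dict(zip(["TO rate", "opp TO rate", "OReb%", "3-pt share"], model.coef_[0])))
```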

Links:


Predictalot for bracket success

Yahoo! Labs has unveiled Predictalot, which lets users make tournament predictions by combining rules that don’t explicitly rely on math, even though there is a lot of complexity behind the game. Predictalot is a #P-hard game that uses combinatorial prediction market methodology, combining human input with high-performance computing to make better tournament predictions.

Predictalot limits users to a fairly restricted set of rules. The rules focus on aggregate or simple outcomes that are easy to count (like the sum of the seeds in a given round, rather than the mix of individual seeds).  This isn’t too bad for beta version 1.0, but I still found it frustrating.  For example, when predicting which seed range advances to which round, I am unable to create a rule that a single five seed or worse will make it to the Final Four (see the sketch below); I can only create a rule about the seeds of all Final Four teams.  I also wanted to create a rule about how many Final Four teams would come from the Big Ten conference.  The only two conference rules allowed are (1) predicting the winner and (2) predicting whether a conference will have more or fewer wins than another conference.
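For what it’s worth, the kind of rule I wanted (at least one 5 seed or worse reaching the Final Four) is easy to price with a crude Monte Carlo simulation, which is roughly the spirit of what a combinatorial prediction market has to do at scale.  The win-probability model below is a toy I made up for illustration; Predictalot’s actual machinery is far more sophisticated.

```python
import random

# Standard first-round pairings within one 16-team region, by seed.
FIRST_ROUND = [(1, 16), (8, 9), (5, 12), (4, 13), (6, 11), (3, 14), (7, 10), (2, 15)]

def play(seed_a, seed_b):
    # Toy win model: the better (lower) seed wins with probability
    # 0.5 + 0.03 * seed difference, capped at 0.95.  Purely illustrative.
    p_better = min(0.95, 0.5 + 0.03 * abs(seed_a - seed_b))
    better, worse = min(seed_a, seed_b), max(seed_a, seed_b)
    return better if random.random() < p_better else worse

def region_winner():
    # Play one region through to its Final Four representative.
    seeds = [play(a, b) for a, b in FIRST_ROUND]
    while len(seeds) > 1:
        seeds = [play(seeds[i], seeds[i + 1]) for i in range(0, len(seeds), 2)]
    return seeds[0]

def estimate(rule, trials=20_000):
    # Estimate the probability that a rule about the four Final Four seeds holds.
    hits = sum(rule([region_winner() for _ in range(4)]) for _ in range(trials))
    return hits / trials

# The rule I wanted: at least one Final Four team seeded 5 or worse.
print(estimate(lambda final_four: any(seed >= 5 for seed in final_four)))
```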

It is also not clear whether “better than a 4 seed” means strictly better than or better than or equal to a 4 seed.  Accuracy is important to me: for example, I was unable to create a rule that a one seed would win the tournament (see image below).  This is something that could easily be fixed.

Still, Predictalot looks pretty good for a beta version, and it will be interesting to see how it works, both in terms of predicting a winner and harnessing the power of social networking.

Links:


March madness podcasts

This is an update from yesterday’s post about picking a good bracket.

I discovered that Sheldon Jacobson was featured on two radio shows this morning that are available as podcasts:

Any other podcasts about the NCAA tournament featuring OR?  Let me know!


how to pick a winning bracket

It’s that time of year again: time to be swept away by college basketball for a while.  I am looking forward to the tournament this weekend, though a little less so since I found out that my alma mater didn’t make the cut.  Maybe that will give me the opportunity to be more objective and to make a decent bracket.  I am going to summarize two methods for making bracket picks.

The Chicago Tribune wrote a story about one of Sheldon Jacobson’s papers on the tournament, published last year.  Jacobson and graduate student Douglas King performed a statistical analysis of how the seeds performed over 25 years of tournament data.  The results indicate that seeds are informative in the first three rounds but are essentially meaningless from the Elite Eight on.

From the Elite Eight on, chance is as much a determinant as seeding. After the first two rounds, where seeds No. 1, 2 and 3 dominate, the seeding system falls apart, according to a study he conducted on the 25 years of NCAA tournaments since it expanded to 64 teams in 1985.

“Whoever’s in the Elite Eight, you can flip a coin,” said Jacobson, whose field is operations research and probability. “You think, ‘If a 1 is playing a 7, should we do that?’ Statistically speaking, you can. As you go further in the tournament, the seeds erode even more.”

I wrote about this article last year, and it’s nice that it has made a lasting impression.  You can hear Sheldon Jacobson discuss his research on two Chicago radio stations.

I’ve also written about LRMC, the method developed by Joel Sokol and Paul Kvam and improved by George Nemhauser and Mark Brown, which predicts the winner of each game in the tournament.  Since there are three versions of LRMC (LRMC Pure, Bayesian LRMC, and LRMC(0)), it can be used to make three different brackets.  Last year, LRMC(0) finished in the 97.8th percentile in ESPN’s tournament challenge.

Between the LRMC methods and selectively flipping a coin in the later rounds of the tournament, you should be able to make a few good brackets, statistically speaking.
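In that spirit, here is a small sketch of the “pick carefully early, flip coins late” strategy.  The helper below assumes you have already chosen an Elite Eight (by seeds, LRMC, or anything else) listed in bracket order, and it settles the remaining seven games at random; the team names are placeholders, not my picks.

```python
import random

def finish_by_coin_flip(elite_eight):
    # Pick the Elite Eight however you like, then decide the last seven games
    # by coin flip, in the spirit of Jacobson's finding that seeds carry
    # little information from the Elite Eight on.
    # Teams are assumed to be in bracket order (0 plays 1, 2 plays 3, ...).
    rounds = [list(elite_eight)]
    while len(rounds[-1]) > 1:
        prev = rounds[-1]
        rounds.append([random.choice(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return rounds  # [Elite Eight, Final Four, championship game, champion]

# Placeholder team names, not actual picks.
bracket = finish_by_coin_flip([f"Region {r} pick {s}" for r in "ABCD" for s in (1, 2)])
for label, teams in zip(["Elite Eight", "Final Four", "Final", "Champion"], bracket):
    print(f"{label}: {teams}")
```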

How do you make your tournament picks?  Who do you think will win the national championship this year?  Isn’t March Madness infinitely better than the college football bowl system?

Links:



social networking in disaster optimization

I attended the Health and Humanitarian Logistics conference last week.  The conference was great, and I was pleasantly surprised that most of the speakers and attendees were from NGOs, government, and private industry. This provided a great opportunity for practical, interdisciplinary discussions.

Although there were many excellent talks, I am going to highlight one particular talk from the conference that addressed the use of social media in humanitarian response.

Mark Keim, MD, from the CDC talked about how social networking changed the response to the Haiti earthquake compared to earlier disasters.  Compared to a traditional, hierarchical network, a peer-to-peer network is individual (instead of organized), public (as opposed to institutional), immediate (instead of delayed), dynamic (instead of static), and much more adaptive and scalable.  Peer-to-peer networks offer advantages as well as challenges during a disaster.

Keim summarized some of the differences he noticed during the Haiti response:

There were several million tweets about Haiti within 48 hours of the earthquake, and they started within minutes.  This provided immediate information, which was crucial since it took the response teams much longer to reach Haiti; we didn’t have to wait days to hear updates.

The Red Cross raised $25M via text message (this article claims $35M within 48 hours).  The instructions were spread through Facebook, Twitter, and other social networking tools.

Blogs, Twitter, and YouTube increased their Haiti coverage as coverage on news networks and sites decreased (about 11 days after the disaster), as shown around January 23 in the Google Trends figure below.  This illustrates a metaphorical handoff from the news sites to individuals after the disaster.

Google Trends search volume as a function of time, January 2010

During the disaster, Keim’s team noticed that a map of the shelters had been posted online by an unknown user (this is an example, though not the one from the talk).  It was wonderful that someone adapted to provide this information, but could it be trusted for allocating resources?  Social networking provides more information (and often exactly the information that is needed in the moment), but credibility remains a large issue.  However, one attendee noted that the so-called credible information (e.g., from government agencies) is often biased and inaccurate, so it isn’t necessarily a better alternative.

I know that I blogged about Haiti.  How did you use social media in the aftermath of the Haiti earthquake?

