# Posts Tagged ‘sports’

## decision quality and baseball strategy

Posted by Laura McLay on December 5, 2013

Miss baseball? Love operations research and analytics? Watch Eric Bickel’s 46-minute webinar called “Play Ball! Decision Quality and Baseball Strategy” here:

## before Sabermetrics, there was football analytics

Posted by Laura McLay on November 8, 2013

I enjoyed a recent Advanced NFL Stats podcast interview with Virgil Carter [Link], a former Chicago Bears quarterback who is considered the “father of football analytics.” During his time in the NFL, Carter enrolled in Northwestern University’s MBA program, and he started work on a football project that was eventually published in Operations Research in 1971 (before Bill James of baseball analytics and Sabermetrics fame!). Carter even taught statistics and mathematics at Xavier University while playing for the Cincinnati Bengals.

The paper in Operations Research was co-written with Robert Machol and entitled “Operations Research on Football.” The paper estimates the expected value of having a First-and-10 at different yard lines on the field (see my related post here). Slate has a nice article about Virgil Carter [Link] outlining the work that went into estimating the value associated with field position:

Carter acquired the play-by-play logs for the first half of the 1969 NFL season and started the long slog of entering data: 53 variables per play, 8,373 plays. After five or six months, Carter had produced 8,373 punch cards. By today’s computing standards, Carter’s data set was minuscule and his hardware archaic. To run the numbers, he reserved time on Northwestern’s IBM 360 mainframe. Processing a half-season query would take 15 or 20 minutes—something today’s desktop computers could do in nanoseconds. In one research project, Carter started with the subset of 2,852 first-down plays. For each play, he determined which team scored next and how many points they scored. By averaging the results, he was able to learn the “expected value” of having the ball at different spots on the field.

They found that close to a team’s own end zone (almost 100 yards from scoring a touchdown), a team’s expected points were negative, meaning that turnovers from fumbles and interceptions that let an opponent score an easy touchdown outweighed a team’s own ability to move down the field and score. The paper discusses issues other than expected values, such as Type I and Type II errors in timeout decisions. Here, a timeout used for clock management has implications for each team’s remaining possessions and for using too much or too little time. The rules of football were quite different 40-something years ago. For example, an incomplete pass in the end zone required the ball to be brought out to the 20 yard line (instead of a mere loss of down with no change in field position).

Listen to the podcast here.

Read my posts on football analytics here.

Posted in Uncategorized | Tagged: , | 1 Comment »

## the craft of major league baseball scheduling – a journey from 1982 until now

Posted by Laura McLay on November 6, 2013

Grantland and ESPN have a short video [12:25] on the couple who created the major league baseball schedules in the pre-Mike Trick era (1982-2004). The husband-and-wife team of Henry and Holly Stephenson used scheduling algorithms to set about 80% of the schedule. They found that their algorithm could not produce the entire schedule because the list of scheduling requirements led to infeasibility:

“It couldn’t do the whole schedule. That was where the big companies were falling apart. We analyzed the old schedules and found that none of them met the written requirements that the league gave to us. It turns out it was impossible to meet all of the requirements. So the secret was to really know how to break the rules.”

Watch the video here. The end of the video acknowledges how scheduling has evolved such that entire schedules can now be computer generated using combinatorial optimization software (the Stephensons even mention having to compete with a scheduling team from CMU). The video uses baseball scheduling as an avenue to illustrate how decision making and optimization have evolved in the past 30 years. I would highly recommend the video to operations research and optimization students.

## why the Bears should have gone for it on fourth and inches

Posted by Laura McLay on November 5, 2013

In last night’s Bears/Packers game, Coach Marc Trestman (of the Bears) decided to go for it on 4th and inches at the Bears’ 32 yard line in the fourth quarter, with 7:50 left and the Bears up by 4. Normally, teams punt in this situation, reflecting the hyper-conservative decision-making approach adopted by most football coaches. The Bears got the first down, and the ensuing drive led to a field goal, putting the Bears up by 7 with 0:50 left in the game.

In hindsight, it was obviously a great call. But decisions aren’t made with hindsight – both good and bad outcomes are possible with different likelihoods.

An article by Chris Chase at USA Today [Link] argued that going for it on 4th down was a bad decision because the bad outcomes outweighed the good. There isn’t much analytical reasoning in the article. I prefer to base decisions on number crunching rather than feelings and intuition, so here is my attempt to argue that going for it on 4th down was a good decision.

### The basic idea of football decision-making

There are a number of models that estimate the expected number of points a team would get based on their position on the field. To determine the best decision, you can:

1. look at the set of possible outcomes associated with each decision,
2. find the probability and expected number of points associated with each of these outcomes,
3. then take the expected value associated with each decision, and
4. choose the decision with the most expected points.

Let’s say going for it on 4th down has success probability p. Historical data suggests that p = 0.8 or so. If unsuccessful, the Packers would take the ball over on the Bears’ 32 yard line with a conditional expected value of about -3.265 points. This value is negative because we are taking the Bears’ point of view. If successful, the Bears would be around their own 35 yard line with a conditional expected value of 0.839. Considering both outcomes (success and failure), we can compute an expected value associated with going for it on fourth down: 0.839p – 3.265(1-p).

Let’s look at the alternative: punting. The average punt nets a team about 39 yards. This would put the ball on the Packers’ 29 yard line with an associated expected number of points of -0.51. However, this isn’t the right way to approach the problem. Since the expected number of points associated with a yard line is non-linear, we can’t average the field position first and then look up the expected number of points. Instead, we should consider several outcomes associated with field positions: Let’s assume that the Packers will get the ball back on their own 15, 25, 35, and 45 yard lines with probabilities 0.28, 0.25, 0.25, and 0.22 and with expected points 0.64, -0.24, -0.92, and -1.54, respectively. This averages out to the ball on the Packers’ 29 yard line with -0.45 points (on average).

Now we can compare the options of going for it (left hand side) and punting (right hand side):
$0.839 p - 3.265 (1-p) \ge -0.45$
Solving this inequality tells us that the Bears should go for it on fourth down if they have a success probability of at least 68.6%.

These values are from Wayne Winston’s book Mathletics.
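For readers who want to check the arithmetic, the break-even calculation can be reproduced in a few lines. The expected-point values are the ones quoted above; the variable names are mine.

```python
# Reproducing the break-even arithmetic with the expected-point values
# quoted above (variable names are mine).
ev_success = 0.839    # Bears convert and sit near their own 35
ev_failure = -3.265   # Packers take over at the Bears' 32

# Punt outcomes: (Packers' yard line, probability, expected points from the
# Bears' point of view).
punt_outcomes = [(15, 0.28, 0.64), (25, 0.25, -0.24),
                 (35, 0.25, -0.92), (45, 0.22, -1.54)]
ev_punt = sum(p * pts for _, p, pts in punt_outcomes)

# Going for it beats punting when ev_success*p + ev_failure*(1 - p) >= ev_punt.
p_breakeven = (ev_punt - ev_failure) / (ev_success - ev_failure)

print(round(ev_punt, 2), round(p_breakeven, 3))   # -0.45 0.686
```

Note that averaging expected points over the punt outcomes (rather than averaging field position first) is exactly the correction described above.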

### But time was running out!

The method I outlined above tends to work really well except that it ignores the actual point differential between the teams (which is often important, e.g., when deciding to go for one or two after a touchdown), the amount of time left on the clock, and the number of timeouts. It’s worth doing a different analysis in extreme situations. With 7:50 left on the clock, the situation wasn’t too extreme, but the Packers’ 3 remaining timeouts and the 4-point score differential are worth discussing. Going for it on 4th down allowed the Bears to score a field goal and take an additional seven minutes off the clock, which was almost the perfect outcome. Let’s consider a range of outcomes.

Very close to the end of the game, it’s best to evaluate decisions based on the probability of winning instead of the expected number of points. Note that you find the probability of winning as the expected value of an indicator variable, so it uses the same method with different numbers. Making this distinction is important, since if you are down by 4 points, going for a field goal may maximize your average points but would guarantee that you’d lose the game.

One way to address these issues is to look at how many possessions the Packers will have if the Bears punt or go for it on fourth down. Let’s say that the Packers would get one possession if the Bears go for it and convert. They would need to score a touchdown on their single possession to win. Let’s say that the Packers would get two possessions if the Bears punt. The Packers could win by scoring two field goals or one touchdown, unless the Bears score on their possession in between the Packers’ possessions. If the Bears score an additional field goal, that would put the Bears up 7, and the Packers would need at least one touchdown to tie (assuming a PAT) and an additional score of any kind to win. If the Bears score an additional touchdown, that would put the Bears up 10-12, and the Packers would need two touchdowns to win, or could possibly tie or win with a field goal and a touchdown (assuming a PAT or 2-point conversion was successful). The combinations and sequences of events need to be evaluated and measured.

Without crunching numbers, we can see that punting would likely increase the Packers’ chance of winning because it would give them 2 chances to score (unless the Packers’ defense is so poor that they think the Bears would be almost certain to score again given another chance).

This is just one idea for analyzing the decision of whether to go for it on fourth down. Certainly, more details can be taken into account so long as there is data to support the modeling approach behind the decision.

Brian Burke blogged about this as I was finishing up my post [Link]. He used win probability instead of the expected number of points (which I recommended above but didn’t calculate). This yielded a break-even success probability of 71% for the Bears, which is close to what I found. In any case, this more or less supports the decision to go for it on fourth and inches (although not going for it would also be reasonable here, since the probability of converting is only slightly higher than the threshold), but maybe this analysis wouldn’t have supported going for it on fourth and 1.

### More on fourth down decision-making:

What sports play have you over analyzed?

Posted in Uncategorized | Tagged: , | 3 Comments »

## methodologies used to predict the outcome of the basketball tournament

Posted by Laura McLay on March 21, 2013

My last post was about how to choose a winning bracket in the NCAA men’s basketball tournament. I linked to several tools for predicting which team is likely to win a game. These tools

1. provide a rank ordering of the teams from best to worst,
2. compute the odds of which team would win in a matchup based on their tournament seed, or
3. provide odds of a team making it to different levels of the tournament based on specific matchups.

I linked to the methodologies used by these tools in my last post but didn’t get into the details. Here, I am going to discuss the methodologies in more detail. I am going to focus on the tools that predict tournament outcomes based on specific matchups (#3 above).

Wayne Winston noted in Mathletics that there is no transitivity in matchups. That is, if team A is favored to beat team B and team B is favored to beat team C, this does not imply that team A is favored to beat team C. Thus, the team rankings (#1 above) are not a perfect tool for predicting specific matchups. He uses “power ratings” to compute how many points one team is better than the other (a point spread), which takes home court advantage and other factors into account. He then converts the point spread to the probability of winning using historical game outcomes (basically, a normal distribution with a history-derived standard deviation) or simulates the games to compute the odds of winning.
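The spread-to-probability conversion can be sketched in a few lines. The standard deviation of 10 points below is an assumed figure for illustration, not a number taken from Mathletics:

```python
import math

def win_probability(spread, sigma=10.0):
    # Probability the favored team wins when the margin of victory is
    # modeled as normal with mean `spread`. sigma = 10 points is an assumed
    # figure for illustration, not a number taken from Mathletics.
    return 0.5 * (1.0 + math.erf(spread / (sigma * math.sqrt(2.0))))

print(win_probability(0.0))   # evenly matched teams: 0.5
print(win_probability(7.0))   # a 7-point favorite wins roughly 3/4 of the time
```

The normal CDF is computed here from `math.erf`, so nothing beyond the standard library is needed.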

Nate Silver’s model is interesting in that it takes many inputs, including the ranking tool outcomes from #1 above. His model blends four ranking models to take a more pluralistic view of who might win. I think this is a strength because it uses the wisdom of crowds (a small crowd in this case). Each of the four tools contributes 1/6 of the total power rating (a margin of victory). Seed number and whether the team was ranked in preseason polls each contribute 1/6 of the power rating. He then makes adjustments for the geography of the game and player injuries and absences. He doesn’t describe his forecast probabilities in detail, but I suspect that his approach is similar to Wayne Winston’s. A team’s power rating is adjusted in each round based on the outcomes from previous rounds to account for potential errors in the power rating, another strength of the model.

Finally, Luke Winn and John Ezekowitz’s model doesn’t use power ratings [methodology here] – instead it applies survival analysis to predict when a team may drop out of the tournament. The model computes hazard rates for each team based on the team’s RPI and Ken Pomeroy’s ranking. They also consider

1. consistency,
2. tournament experience,
3. out-degree network centrality that captures the number of games played and won against other NCAA tournament teams (see picture below), and
4. the negative interaction of the Experience and Out-Degree Centrality variables

Cox proportional hazards regression was then used to rerank the teams.
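The out-degree centrality variable (#3) can be illustrated with a toy “wins network.” The teams and results below are hypothetical, not actual tournament data:

```python
# A toy version of the "wins network" behind the out-degree variable: an edge
# u -> v means tournament team u beat tournament team v. The teams and
# results below are hypothetical, not actual tournament data.
wins = [("TeamA", "TeamB"), ("TeamA", "TeamC"),
        ("TeamB", "TeamC"), ("TeamC", "TeamD")]

teams = sorted({t for game in wins for t in game})
out_degree = {t: 0 for t in teams}
for winner, _loser in wins:
    out_degree[winner] += 1

# Normalized out-degree centrality: wins against other tournament teams
# divided by the number of possible opponents.
centrality = {t: out_degree[t] / (len(teams) - 1) for t in teams}
print(centrality)
```

A team that has beaten many other tournament teams gets a high centrality score, which is the intuition the model exploits.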

Posted in Uncategorized | Tagged: , | 1 Comment »

## Superbowl reading for number crunchers

Posted by Laura McLay on February 1, 2013

Here are a few links to posts and articles about the Superbowl that will appeal to number crunchers:

Nate Silver argues that defense wins championships. Many math models show that offense is more instrumental in winning games than defense is. But defense may be better for winning titles. Silver looks at the top 20 defenses and offenses to have played in the Super Bowl according to the simple rating system at pro-football-reference.com. He finds that the top defensive teams have won 14 of their 20 Super Bowls, whereas the top offensive teams have won 10 of 20.

Nate Cohn at the New Republic writes about how football is ripe for reaping the benefits from advanced statistics.

Josh Laurito has a nice post on TV ratings (as measured by Nielsen) for major league sports championships. The Super Bowl is the only championship whose ratings have been increasing over the past decade or so (shown below). The Super Bowl with the highest ratings ever was the 1986 Super Bowl featuring the 1985 Bears (this is probably the closest I’ll get to proving that the 1985 Bears were the best team ever).

Super Bowl Nielsen Ratings

I’ve written about football in several posts. One analyzes the Patriots’ decision to let the Giants score a touchdown in last year’s Superbowl using a decision tree.

I also have three presentations on football decision making.

The third uses game theory to find the best mix of run and pass plays.

Posted in Uncategorized | Tagged: , | 1 Comment »

## game theory and college football

Posted by Laura McLay on November 19, 2012

60 Minutes had a nice piece on college football on Sunday with correspondent Armen Keteyian (Link). The story examined the popularity and skyrocketing costs of college football programs. The most interesting part of the story was its application of game theory. To stay competitive, a team must recruit a good coach and the best players. Of course, a team’s competitors are going to be doing the same. This leads to a type of Prisoner’s Dilemma where a team can choose to “keep costs down” or “escalate.” If a team keeps costs down and its opponents escalate, it has a terrible record, its alums are not happy, and its alums are not generous with donations. This leads to a college football arms race:

[Michigan athletic director] Dave Brandon: You’ve got 125 of these programs. Out of 125, 22 of them were cash flow even or cash flow positive. Now, thankfully, we’re one of those. What that means is you’ve got a model that’s not sustainable in most cases. You just don’t have enough revenues to support the costs. And the costs continue to go up.

Why? A big reason is universities are in the midst of a sports building binge. Cal Berkeley, for example, renovated its stadium to the tune of $321 million. The list is endless. Michigan’s athletic department floated $226 million in bonds to upgrade the Big House.

[60 Minutes correspondent] Armen Keteyian: What are you chasing?

Dave Brandon: We want to win championships.

Armen Keteyian: And you’re going to get a big payout?

Dave Brandon: We’re going to have excited fans, we’re going to fill stadiums, we’re going to be on TV. We’re going to accomplish all of the goals that we need to accomplish to keep this department moving ahead.

Armen Keteyian: And that’s where the phrase “arms race” comes up?

Dave Brandon: If you don’t keep pace, if you don’t stay competitive, you’re going to have a problem.

Inside a recently built indoor practice facility that many an NFL team would envy, we spoke to Michigan’s head coach Brady Hoke.

Armen Keteyian: Can you recruit a top player without facilities like this?

[Michigan's head coach] Brady Hoke: You know, it matters. I– I’d be sitting here lying if I didn’t think it mattered. I think the other part of it though– the people have to matter too.

The program every school has been chasing is Alabama. The Crimson Tide have rolled to two national titles in the last three years. The architect of that success is Nick Saban, as innovative a coach as there is in the game. And the leader of another escalating trend in college football: skyrocketing coaching salaries. Saban is paid over $5 million a year, more than Alabama’s chancellor.

Armen Keteyian: Are you worth it?

Nick Saban: Probably not. Probably not.

Universities engage in other arms races. The move toward the university as Club Med is an example. The university with the best dorms and exercise facilities recruits the best scholars. A university without an artificial rock climbing wall, water slides, and a spa (sadly) cannot hope to be competitive.

Is there a way universities can deescalate?
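The Prisoner’s Dilemma framing above can be made concrete with a small payoff matrix. The payoffs below are hypothetical numbers chosen only to give the game its dilemma structure:

```python
# The arms race as a 2x2 game with hypothetical payoffs (higher is better).
payoffs = {  # (row choice, column choice) -> (row payoff, column payoff)
    ("restrain", "restrain"): (3, 3),   # both keep costs down
    ("restrain", "escalate"): (0, 4),   # the restrained school loses recruits
    ("escalate", "restrain"): (4, 0),
    ("escalate", "escalate"): (1, 1),   # a costly arms race
}
strategies = ["restrain", "escalate"]

def best_response(opponent, player):
    # player 0 chooses the row, player 1 chooses the column.
    if player == 0:
        return max(strategies, key=lambda s: payoffs[(s, opponent)][0])
    return max(strategies, key=lambda s: payoffs[(opponent, s)][1])

nash = [(r, c) for r in strategies for c in strategies
        if best_response(c, 0) == r and best_response(r, 1) == c]
print(nash)   # [('escalate', 'escalate')]
```

Escalating is each school’s best response regardless of what the other does, so mutual escalation is the unique Nash equilibrium even though both schools would prefer mutual restraint. That is why deescalation would require coordination (e.g., conference or NCAA rules) rather than unilateral restraint.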

Posted in Uncategorized | Tagged: , | 3 Comments »

## why Lance Armstrong’s stripped Tour de France titles should not be given to other cyclists

Posted by Laura McLay on August 28, 2012

Lance Armstrong will be stripped of his seven Tour de France titles. Watching him win those races was exciting, and I had always hoped that the investigations into the doping allegations would come to a conclusion. Certainly, the allegations remain just that: allegations, as there has never been a positive test and Lance has not confessed (in fact, he has merely declined to fight the allegations). But I digress.

There has been some discussion on how to award his Tour de France titles to other cyclists (more here). Here are my thoughts that are most definitely influenced by operations research.

1. An athlete’s strategy depends on the strategies of their opponents. If the doping cyclists did not compete, it would have been an entirely different race. The fastest non-doping cyclist would not necessarily win a race with only non-doping cyclists.   Therefore, without Lance as an opponent, perhaps the sixth place finisher could have won with a more conservative cycling strategy. Lance won his races with different margins, and he had to come from behind in several of the Tours.  However, many of the leading cyclists in the Tours were tainted with doping, so I suspect that widespread doping affected the non-doping cyclists’ strategies.

In diving, for example, one may change the selection of dives based on what they anticipate their opponents’ scores will be (see my blog post here). A diver may go big or go home, risking a poor finish in an attempt to medal against a superhuman opponent. Without the superhuman opponent, they might adopt a more conservative strategy that would make a second place finish more likely.

2. Cycling is doubly challenging because cyclists are on a team, yet there is an individual winner. The team members (as part of the peloton) shield their team leader from wind, etc. That is, all of the teammates are sacrificed for the one member to have a chance at winning. What if the team leader is not doping but his teammates dope? That cyclist would have received an unfair advantage even if he did not personally engage in doping. This is a gray area.

3. In other competitions, second place athletes have refused a title/win after the first place athlete was stripped of their title. Reggie Bush being stripped of his Heisman Trophy comes to mind. Vince Young, the runner-up, was not offered the Heisman and publicly stated that he would not have accepted it. This is notable, since Vince Young almost certainly would not have competed any differently had Reggie Bush not been in the Heisman race (concern #1), and therefore Vince Young would have won if Reggie Bush had not competed. Yet the Heisman Trust simply decided to vacate the award for 2005. In cycling, where the second place finisher would not necessarily have won a race without the alleged dopers, there is even more reason to vacate Lance’s titles from 1999-2005.

4. My above arguments assume that we know the truth about who doped and who didn’t. This is a dubious assumption. If you are curious about unpacking the mystery, I recommend watching the 60 Minutes interview of Tyler Hamilton (one of Lance Armstrong’s teammates and accusers), who makes a compelling case against Lance, and reading Sally Jenkins’ latest Washington Post article. Sally Jenkins does an excellent job of explaining why the alleged doping is just that: alleged. Alberto Contador was banned for two years after a substance was found in his blood in an amount “too small to have been performance-enhancing and that its ingestion was almost certainly unintentional.” He was found guilty because “There is no reason to exonerate the athlete so the ban is two years.” Making decisions under uncertainty–naming new Tour winners from 1999-2005–is fraught with peril. There is no physical evidence against some of the accused, and there is so much that we do not know about who is innocent and who is guilty. Even if there were some reasonable way to address my concerns #1 and #2 about strategy in the presence of dopers, I cannot imagine the newly named winners would be deserving.

Related blog posts:

Posted in Uncategorized | Tagged: | 10 Comments »

## how to maximize the probability of getting a medal in diving

Posted by Laura McLay on August 2, 2012

In some Olympic sports, such as diving, the athlete receives scores based on several trials. In diving, each trial is a separate dive. A diver’s score is the sum of the different dive scores in different trials. What is the best way to maximize the chance of getting a medal?

• Women must complete five dives.
• There is no limit on the total degree of difficulty for these dives.
• At least one dive during the contest must come from each of five different categories – forward, back, reverse, inward, and twisting.
• No dive can be repeated in a list of dives.
• Divers must select dives ahead of time and cannot change the order

• Men must complete six dives.
• There is no limit on total degree of difficulty for these dives.
• For the men, at least one dive during the contest must come from each of six different categories – forward, back, reverse, inward, twisting and armstand.
• No category can be repeated in a list of dives.
• All dives must be competed from the 10-meter platform.
• Divers must select dives ahead of time and cannot change the order

Let’s make three assumptions:

1. Divers can select their dives and their order on the fly (clearly not true, but makes it more interesting).
2. A diver’s performance on each dive is independent of his/her performance in other dives.
3. From experience, divers know the distribution of points based on each of their dives.

### Model 1: Divers know the “threshold” for medaling ahead of time (dubious!)

Let’s solve this problem using a Markov decision process, where the stage here is the dive (1-5 for women or 1-6 for men).

Let

Vt(S(t)) = value of being in state S(t), where S(t) = the total number of points.

and let

M = point threshold for medaling.

The rewards are Rt(S(t-1),P) = 1 if the diver moves from state S(t-1) < M to S(t) >= M (i.e., the diver moves into medal contention when the P points from the dive at stage t-1 move the total above threshold M). All other rewards are zero. The diver wants to maximize the probability of medaling, which is equivalent to maximizing the total value V1(0), the expected value of the 0-1 indicator of whether the diver medals.

Let the set of dives in each category be captured by D1, D2,…,D5 for women or D1, D2,…,D6 for men. For each d in Di, there are known probability distributions for the point totals P.

Now the Bellman equations are

$V_t(S(t)) = \max_{d \in D_t} E\{R_t(S(t),P) + V_{t+1}(S(t)+P)\}$

Here, the expectation is taken over the points distribution for dive d in Dt. The probability of medaling is given by V1(0), the value before dive 1 starting with 0 points. The boundary conditions are V[T+1](S(T+1)) = 0 for all values of S(T+1), since each reward is accumulated only once.

The optimal policy indicates what dive should be chosen in each trial (MDP stage) based on the total number of points that have been accumulated thus far. If a diver is successful with tough dives early on, the diver can choose easier dives later on (and vice versa).
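The backward induction above can be sketched in a few lines. This is a minimal toy instance with made-up dives and point distributions, ignoring the category and no-repeat constraints for brevity:

```python
from functools import lru_cache

# A minimal sketch of Model 1's backward induction, with made-up dives and
# point distributions, ignoring the category and no-repeat constraints.
dives = {
    "safe":  [(7.0, 0.9), (5.0, 0.1)],    # (points, probability)
    "risky": [(10.0, 0.5), (2.0, 0.5)],
}
T = 2       # number of dives (stages)
M = 12.0    # medal threshold, assumed known (per Model 1)

@lru_cache(maxsize=None)
def V(t, score):
    # Probability of finishing with at least M points, starting stage t
    # having accumulated `score` points so far.
    if t > T:
        return 1.0 if score >= M else 0.0   # boundary condition
    return max(sum(p * V(t + 1, score + pts) for pts, p in outcomes)
               for outcomes in dives.values())

print(V(1, 0.0))  # probability of medaling under the optimal policy
```

In this toy instance, the optimal stage-2 choice depends on the points accumulated so far, which is exactly what the dynamic policy captures.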

### Model 2: Divers do not know the “threshold” for medaling ahead of time so they maximize the total number of points

The model dynamics here are identical to those of the model above except for the rewards. The random rewards are Rt(S(t-1),P) = P(d,t), the points the diver earns for dive d at stage t. The rest is the same, yielding Bellman equations of

$V_t(S(t)) = \max_{d \in D_t} E\{P(d,t) + V_{t+1}(S(t)+P(d,t))\}$

They look the same as above, but the rewards are different. The expected number of points is captured by V1(0), and the boundary conditions are V[T+1](S(T+1)) = 0 for all values of S(T+1), as before.

Here, the diver doesn’t really need to solve an MDP. He or she can simply select the dive that yields the most points (on average) in each category, since the choices will not depend on the number of points accumulated thus far. Let EP(d,t) denote the average points from dive d at stage t. Then, the policies depend on the stage t rather than on the full state variable S(t), yielding Bellman equations of

$V_t = \max_{d \in D_t} (EP(d,t) + V_{t+1})$

The expected number of points is captured by V[1], and the boundary condition is V[T+1] = 0. Note that we lose the expectation here, since EP(d,t) is deterministic. The optimal solution isn’t rocket science: select the dive with the largest EP(d,t) for each t.
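The per-stage argmax is a one-liner. The categories, dive names, and average scores below are made up:

```python
# Model 2 reduces to a per-stage argmax: in each category, pick the dive with
# the highest average score. The categories, dives, and averages are made up.
EP = {
    1: {"forward-easy": 6.5, "forward-hard": 6.1},   # stage 1: forward dives
    2: {"back-easy": 5.9, "back-hard": 6.8},         # stage 2: back dives
}

policy = {t: max(options, key=options.get) for t, options in EP.items()}
expected_total = sum(EP[t][policy[t]] for t in EP)
print(policy, expected_total)
```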

### Why these models are different

On face value, it may not be obvious why these models could yield different solutions. Both are used to identify dives that yield many points. The second model maximizes the expected number of points, whereas the first model maximizes the probability that the point total exceeds a threshold (which can be thought of as moving as much of the point distribution as possible past a fixed threshold M). The first model could lead to “riskier” dive strategies for a diver who is not a favorite to win: the diver has a chance of being on the podium but could also go down in flames. For the gold medal favorite, the first model might lead to a conservative strategy that weeds out dives with a chance of a disastrous result. The second model leads to dive selections that yield the most points on average but that, for an underdog, would almost certainly not lead to a medal.

### Lift the first assumption: divers must make their selections ahead of time

If we lift the first assumption, the choice of dives cannot depend on the point total accumulated thus far; we must identify the best set of dives regardless of what happens on the earlier dives.

The answer to the second model is still obvious: the same per-category selections remain optimal, since the optimal policy never changes dives on the fly even when it has that choice.

The first model, however, requires examining the joint probability distribution of the points earned across the selected dives.

Let the decision variables be captured by

x(d,t) = 1 if dive d is selected in stage t and 0 otherwise.

Let P(d,t) denote the (random) number of points earned if dive d is performed in stage t.

Then our stochastic optimization model is

$\max \Pr( P(1,1)x(1,1) + P(2,1)x(2,1) + \dots + P(|D_1|,1)x(|D_1|,1) + \dots + P(|D_5|,5)x(|D_5|,5) \ge M )$

subject to

$x(1,t) + x(2,t) + \dots + x(|D_t|,t) = 1$ for all $t = 1,2,3,4,5$

$x(d,t) \in \{0,1\}$ for $d = 1,2,\dots,|D_t|$, $t = 1,2,3,4,5$.
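For a small instance, the static version of the first model can be solved by brute force: enumerate the feasible dive lists and compute each list’s probability of clearing M exactly (using the independence assumption). The categories, dives, and point distributions below are made up:

```python
import itertools

# Brute-force check of the static Model 1: enumerate feasible dive lists
# (one dive per category) and compute each list's probability of clearing M
# exactly, using independence. Categories, dives, and distributions are
# hypothetical.
cats = {
    1: {"A": [(8.0, 0.7), (4.0, 0.3)], "B": [(10.0, 0.4), (3.0, 0.6)]},
    2: {"C": [(7.0, 0.8), (5.0, 0.2)], "D": [(9.0, 0.5), (2.0, 0.5)]},
}
M = 14.0

def prob_at_least(selection):
    # selection maps stage -> chosen dive; enumerate the joint outcomes.
    total = 0.0
    for combo in itertools.product(*(cats[t][d] for t, d in selection.items())):
        pts = sum(points for points, _ in combo)
        pr = 1.0
        for _, p in combo:
            pr *= p
        if pts >= M:
            total += pr
    return total

best = max(
    ({1: d1, 2: d2} for d1 in cats[1] for d2 in cats[2]),
    key=prob_at_least,
)
print(best, prob_at_least(best))
```

With more categories and dives, enumeration blows up combinatorially, which is why one would turn to stochastic optimization techniques instead.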

I’ll stop here because I’ve been up late watching the Olympics. I’ll leave the solution as an exercise for the reader. Leave feedback and corrections in the comments.

If you’ve ever dove from 10m, you rule. I jumped from 7m a few times and am unwilling to go any higher.

Posted in Uncategorized | Tagged: , | 5 Comments »

## why the Patriots’ decision to let the Giants score a touchdown makes sense

Posted by Laura McLay on February 6, 2012

I was shocked to see the Patriots allow the Giants to score a touchdown (TD) in the Superbowl with 57 seconds left. Here is why it makes sense.

From my earlier post on going for two points after a touchdown, we can model this situation as a one-possession game. That is, the Patriots would get the ball back and finish the game with it. As I recall, the Patriots had no timeouts. The key issue was how much time they would have: it’s easier to score with 57 seconds left than with 15.

First, let’s consider the case where the Patriots allow the Giants to score a touchdown. The Patriots then have 57 seconds left on the clock. Historical data suggests a 19% chance of scoring a touchdown in that time. Yes, the Patriots had Tom Brady, but time was a factor. Allowing the Giants to score a TD would give the Patriots a 19% chance of winning.

Now, let’s see if not allowing the Giants to score would have improved the Patriots’ chances. Let’s go back to the Giants’ touchdown-scoring drive. Before they took the lead, there were three possible outcomes:

1. Giants score a touchdown. If the Patriots didn’t make it so easy, let’s say there would be 15 seconds left on the clock. Let’s conservatively give this a 50% chance. Regardless of whether the Giants went for two, the Patriots would need a touchdown.
2. Eli Manning throws an interception. He did this on 4.4% of his passes, but surely he was throwing more conservative passes here. Let’s say there was a 3% interception chance. The Patriots would win outright.
3. Giants score a field goal. Let’s say there was a 47% chance the drive would end in a field goal attempt. Wayne Winston’s Mathletics suggests a 99% success rate for the attempt. The Patriots would then need a field goal to win.

Let’s say that the above options would leave 15 seconds on the clock. Let’s look at the outcomes from the perspective of the Patriots.

1. Giants score a TD. Let’s say the odds that the Patriots would score a touchdown decrease from 0.19 to 0.10 with less time on the clock. Again, I think this is generous, even with Tom Brady.
2. Giants interception. Patriots win with virtual certainty.
3. Giants go for a FG. The Patriots could get into field goal range with a probability of, say, 0.2. A distant field goal would succeed with a probability of 0.5.

Putting this all together, the odds that the Patriots would win in the second option are:
0.5(0.1)+0.03+0.47[(0.01)+0.99(0.2)(0.5)] = 0.13

(the first component is from a Giants TD, the second is from a Giants interception/fumble, and the third is from a Giants FG attempt: a miss, plus a make followed by a successful Patriots field goal drive)

Not allowing the Giants to score a TD would give the Patriots a 13% chance of winning.
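The back-of-the-envelope numbers above, collected in one place (all probabilities are the rough guesses from this post):

```python
# The back-of-the-envelope numbers from the post, collected in one place.
p_drive = {"td": 0.50, "int": 0.03, "fg": 0.47}   # Giants' drive outcomes

win_given_td = 0.10            # Patriots score a TD with ~15 seconds left
win_given_int = 1.00           # Patriots win outright
p_fg_miss = 0.01               # Giants miss the field goal
win_given_fg_made = 0.2 * 0.5  # reach FG range, then hit a long kick

p_win_defend = (p_drive["td"] * win_given_td
                + p_drive["int"] * win_given_int
                + p_drive["fg"] * (p_fg_miss + 0.99 * win_given_fg_made))
print(round(p_win_defend, 2))   # 0.13, versus 0.19 for conceding the TD
```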

[2/6 update at 2pm] Three things:
1. I was told that the Patriots had one time out left. I don’t think that would drastically affect anything here.
2. The Giants could have run down the clock before scoring if the Patriots had not conceded the touchdown. That, of course, would have given the Patriots an even smaller chance of winning (down from 13%). Again, that would not have changed the Patriots’ strategy.
3. It looks like Coach Coughlin of the Giants knew that Coach Belichick of the Patriots would allow the Giants to score so easily and then told his players not to score so quickly. This was a quick decision conveyed to all players on the fly. This type of game strategy suggests that sports analytics matter.

I think the Patriots were right. What about you?

Posted in Uncategorized | Tagged: | 5 Comments »