I am teaching an undergraduate simulation course in the spring to industrial engineering students. I have plenty of lecture notes from colleagues, but I am missing simple classroom demos (like an Excel spreadsheet), classroom modeling activities, and case studies. This is a bleg for additional material to enhance my teaching. Please email me with any material or post a link. Thank you in advance!

# Tag Archives: teaching

## math is your superpower

Today was my last class of the semester before the final. In most of my courses, I end with a fun talk about what professors really do outside of the classroom. I also go over the one (or seven) things that I want students to take away from me every semester. At the end of the talk, I tell students that while our world is becoming more complex and quantitative, math is often underused. *Math is a superpower*.

I once heard that the world runs on eighth grade math. I don’t think that is true for many industries (especially the ones that hire operations research graduates!), but a Northeastern University study shows that few Americans use advanced mathematics on the job [Link to the Jordan Weissman article in The Atlantic].

I remain optimistic about the need for advanced math. First, it’s possible that few workers use math because few workers are proficient in math. In fact, “Upper Blue Collar” workers are the most likely to use math. This should motivate us to teach math better, not to conclude that it isn’t needed. Second, it’s worth noting that the Northeastern study summarizes its data across the workers surveyed (not across industries or companies). It’s certainly possible that nearly all companies use statistics but that relatively few workers actually do the statistics (22% of upper white collar workers in the figure above) and that the average worker isn’t always aware of it.

The bottom line is that the survey suggests that relatively few workers do the hard number crunching, so there is a competitive advantage for those who are willing and able to do it. Math may not *really* be a superpower, but it’s something that most workers do not get to enjoy on a regular basis.

## finding optimal marriage pairings using the assignment problem

Today’s blog post is about optimally finding a spouse using optimization models (HT Anna Nagurney). The post is based on a paper published in EJOR entitled “Optimizing the Marriage Market: An Application of the Linear Assignment Model,” in which researchers apply the linear assignment problem to identify how to optimally match potential (heterosexual) couples and find a new social optimum. While matching the couples is a textbook exercise, the researchers used a longitudinal dataset from Switzerland to identify meaningful weights to assign to each potential pairing. They find that the actual marriages are far from optimal.

The weights are based on logistic regression models for predicting the likelihood of divorce from a longitudinal data set. The weights are based on four types of socioeconomic variables of each person in the set:

- Age
- Previous divorce (or not)
- Education (high or low)
- Nationality (Swiss, Western, or non-Western)

The weights for each pairing are not symmetric. For example, a wife is much more likely to divorce from a husband five years her junior than five years her senior.

The assignment problem is an integer programming model that produces the lowest-cost one-to-one matching between two sets of items, such as individuals and jobs. Here, the two sets are men and women. The assignment problem’s constraint matrix is totally unimodular, so the linear programming relaxation has integer optimal solutions and the problem can be solved efficiently via the Hungarian algorithm.

Let:

- *W* = the set of women
- *M* = the set of men (with |*W*| = |*M*|)
- x_{ij} = 1 if woman *i* is matched to man *j*, for *i* in *W* and *j* in *M*
- c_{ij} = the “cost” of matching woman *i* to man *j*

The optimization problem is:
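With the sets and variables defined above, the standard linear assignment model is:

```latex
\min \sum_{i \in W} \sum_{j \in M} c_{ij} x_{ij}
\quad \text{s.t.} \quad
\sum_{j \in M} x_{ij} = 1 \;\; \forall i \in W, \qquad
\sum_{i \in W} x_{ij} = 1 \;\; \forall j \in M, \qquad
x_{ij} \in \{0, 1\}.
```

Each woman is matched to exactly one man and vice versa, and the objective minimizes the total “cost” (here, divorce risk) of the matching.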

A solution to the assignment problem has exactly m = |M| = |W| variables with value 1 (the rest of the variables are zero). The structure here is a **bipartite graph**: one set of nodes represents the women and the other set represents the men. Every woman is connected to all the men (and to none of the women) and vice versa. There are m! possible matchings (one for each permutation of possible pairings), and the assignment polytope has m! extreme points.

The Hungarian algorithm works by computing a reduced cost matrix: it first subtracts the smallest value in each row from that entire row, leaving at least one zero in every row, and then repeats the same operation over the columns. The resulting reduced matrix has a zero in every row and every column, and all of its entries are nonnegative. The optimal solution is then identified by covering the zeros with horizontal and vertical lines in a multi-step procedure.
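You don’t have to implement the Hungarian algorithm by hand: SciPy ships a solver for exactly this problem. A minimal sketch, where the 3×3 cost matrix is made up purely for illustration (it is not data from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i][j] = "divorce risk" of pairing woman i with man j
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# Solves the linear assignment problem (Hungarian-style algorithm)
rows, cols = linear_sum_assignment(cost)

pairs = [(int(i), int(j)) for i, j in zip(rows, cols)]
print(pairs)                   # optimal pairing: [(0, 1), (1, 0), (2, 2)]
print(cost[rows, cols].sum())  # minimum total cost: 1.0 + 2.0 + 2.0 = 5.0
```

The same call scales to the full Swiss dataset; only the cost matrix changes.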

I put together a small Excel spreadsheet with 9 men and 9 women [Link to my Excel file and to the instructions], where I solve the assignment problem. Please download and use it in an introductory LP class.

The authors of the paper say that their method is an “innovative method of optimizing romantic partner allocation.” Of course, this is no way to find a partner for life. However, the authors point out that they could substantially improve marriage survival by reallocating 68% of the pairings. They conclude that “current marriage markets are suboptimally organized.” My Valentine’s Day wish to my readers is that you optimally organize your love life with or without the use of optimization models.

## a multiobjective decision analysis model to find the best restaurant in Richmond

I taught multiobjective decision analysis (MODA) this semester. It is a lot of fun to teach, and I always learn a lot when I teach it. One of the most enjoyable parts of the class (for me at least!) is to run a class project that we chip away at during class over the course of the semester. **Our project is to find the best restaurant for us to celebrate at the end of the semester.** “Best” here is relative to the people in the class.

The project is a great way to teach about the MODA process. The process not only includes the modeling, but also the craft of working with decision makers and iteratively improving the model. It’s useful for students to be exposed to the entire analysis process. I don’t do this in my other classes.

On the first day of class, we came up with our objectives hierarchy. I did this by passing out about five Post-it notes to each student. They each wrote one criterion for selecting a restaurant on each Post-it note and stuck their notes to the wall. Together, we grouped and organized our criteria into an objectives hierarchy. Some of the objectives became “weed out” criteria, such as making sure that the restaurant could accommodate all of us and comply with dietary restrictions.

Our initial criteria were:

- Distance
- Quality of food
- Variety of food
- Service: Fast service
- Service: Waiting time for a table
- Service: Friendly service
- Atmosphere: Noise level
- Atmosphere: Cleanliness
- Cost

Our final criteria were as follows (from most to least important):

- Quality of food
- Cost (tie with #3)
- Distance
- Fast service (tie with #5)
- Noise level
- Cleanliness

We removed variety of food, waiting time, and friendly service because classroom discussions indicated that they weren’t important compared to the other criteria. Variety, for example, was less important if we were eating delicious food at an ethnic restaurant that had less “variety” (in quotes here, because it depends on how you measure it).

In the next few weeks, we worked on identifying how we would actually measure our criteria. Then, we came up with a list of our favorite restaurants. During this process, we removed objectives that no longer made sense.

We collaboratively scored each of the restaurants in each of the six categories by using a google docs spreadsheet.

- Quality of food = average score (1-5 scale)
- Cost (tie with #3) = cost of an entree, drink, tax, and tip
- Distance = distance from the class (in minutes walk/drive)
- Fast service (tie with #5) = three point scale based on fast service, OK service, or very slow service
- Noise level = four point scale based on yelp.com ratings
- Cleanliness: based on the last inspection. Score = # minor violations + 4*# major violations.

A real challenge was to come up with:

- the single dimensional value functions that translated each restaurant score for an objective into a value between 0 and 1.
- the weights that balanced our preferences across objectives using swing weight thinking. FYI, we used an additive model.
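An additive model combines the two pieces above: each restaurant’s raw score on an objective passes through a single-dimensional value function into [0, 1], and the swing weights blend those values into one number. A minimal sketch — the weights, value functions, and restaurant scores below are hypothetical stand-ins, not our class’s actual model:

```python
# Additive value model: total value = sum over objectives k of w_k * v_k(score_k)
# All numbers below are illustrative assumptions.

weights = {"quality": 0.36, "cost": 0.22, "distance": 0.22,
           "service": 0.10, "noise": 0.05, "cleanliness": 0.05}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # swing weights must sum to 1

# Single-dimensional value functions map each raw score onto [0, 1]
value_fns = {
    "quality":     lambda s: (s - 1) / 4,            # 1-5 scale, higher is better
    "cost":        lambda s: max(0, (40 - s) / 40),  # dollars, lower is better
    "distance":    lambda s: max(0, (30 - s) / 30),  # minutes, lower is better
    "service":     lambda s: (s - 1) / 2,            # 1-3 scale, higher is better
    "noise":       lambda s: (4 - s) / 3,            # 1-4 scale, quieter is better
    "cleanliness": lambda s: max(0, (10 - s) / 10),  # violation score, lower is better
}

def total_value(scores):
    """Weighted sum of single-dimensional values for one restaurant."""
    return sum(w * value_fns[k](scores[k]) for k, w in weights.items())

restaurant = {"quality": 4.5, "cost": 18, "distance": 10,
              "service": 3, "noise": 2, "cleanliness": 1}
print(total_value(restaurant))
```

Ranking the restaurants is then just sorting them by `total_value`.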

I won’t elaborate on these parts of the process further. Ask me about these if you are interested.

**When we finished our model, the “best” decision was to forgo a restaurant and do a potluck instead.** No one was happy with this. We examined why this happened. This was great: *ending up with a bad solution was a great opportunity for learning*. We concluded that we didn’t account for the hidden costs associated with a potluck. Namely, it would entail either making a trip to the grocery store or cooking, approximately a 30 minute penalty. We decided that this was equivalent to driving to a distant restaurant, a 26 minute drive in our model. It was also hard to evaluate cleanliness, since the state does not inspect classrooms like it does restaurants. But since cleanliness didn’t account for much of our decision, we decided not to make adjustments there.

The final model is in a google docs spreadsheet.

We performed a sensitivity analysis on all of the weights. Regardless of how they varied, most of the restaurants were dominated, meaning that they would not be optimal no matter what the weights were. The sensitivity analysis was not done in Google Docs; we downloaded the spreadsheet and performed it on our own. I show the sensitivity with respect to the quality weight below. The base weight for quality is 0.3617. When the weight is zero and quality is not important, Chipotle would have been our most preferred restaurant. The Local would be preferred only across a tiny range.
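One-way weight sensitivity works by sweeping one weight over [0, 1] while rescaling the remaining weights proportionally, then re-ranking the alternatives at each point. A self-contained sketch with two restaurants and invented single-dimensional values (these are not our class’s numbers, though the base quality weight of 0.3617 matches ours):

```python
# One-way sensitivity on the quality weight: as w_quality varies, the other
# weights shrink or grow proportionally so that all weights still sum to 1.
# Restaurant values below are hypothetical illustrations.

base = {"quality": 0.3617, "cost": 0.32, "distance": 0.3183}
values = {  # single-dimensional values, already scaled to [0, 1]
    "Chipotle": {"quality": 0.40, "cost": 0.90, "distance": 0.80},
    "Ipanema":  {"quality": 0.95, "cost": 0.70, "distance": 0.60},
}

def total(restaurant, w_quality):
    """Additive value when the quality weight is set to w_quality."""
    scale = (1 - w_quality) / (base["cost"] + base["distance"])
    w = {"quality": w_quality,
         "cost": base["cost"] * scale,
         "distance": base["distance"] * scale}
    return sum(w[k] * v for k, v in values[restaurant].items())

for wq in [0.0, 0.3617, 1.0]:
    best = max(values, key=lambda r: total(r, wq))
    print(f"w_quality = {wq}: best is {best}")
```

In this toy version, Chipotle wins when quality carries no weight and Ipanema wins at and above the base weight, mirroring the crossover behavior we saw in class.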

We celebrated at Ipanema, a semi-vegetarian restaurant in Richmond. I think our model came up with a great restaurant. We all enjoyed a nice meal together. Interestingly, Mamma Zu scored almost identically to Ipanema (see the figure below).

I cannot claim credit for this fun class project: I shamelessly stole the idea from Dr. Don Buckshaw, who uses it in MODA short courses. We used Craig Kirkwood’s *Strategic Decision Making* as the textbook for the course. I also recommend Ralph Keeney’s *Value-Focused Thinking* and John Hammond’s *Smart Choices*.

How do you choose a restaurant?

## university offers zombie apocalypse course to teach students survival skills

Michigan State University plans to offer a zombie apocalypse course to teach students survival skills. The course will be offered by the School of Social Work (hat tip to Paul Rubin). The course won’t really teach students how to survive a zombie attack; rather, it uses a zombie apocalypse as a vehicle for teaching students how to model catastrophic events and infectious diseases like pandemic flu.

The instructor talks about the course in the YouTube video below.

This has me convinced that I should develop a course on OR models for a zombie apocalypse.

I am planning to develop a similar course that teaches introductory OR modeling to undergraduates by way of applications in emergency preparedness and emergency response. I had envisioned covering more traditional disasters, such as hurricanes and earthquakes. Maybe I should think outside the box.

What topics would you offer in an OR course on the zombie apocalypse? I would start with population models using birth-death models and/or differential equations (see one of my previous posts on this topic) and then look at how to staff deputies or federal marshals to combat the zombie hordes.
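As a flavor of what such a population model might look like, here is a toy susceptible-zombie-removed (SZR) compartment model integrated with forward Euler steps. The transition rates and initial population are invented for illustration, not calibrated to anything:

```python
# Toy SZR differential equation model of a zombie outbreak.
# beta = bite rate, alpha = rate at which humans destroy zombies.
# All parameter values are made-up illustrations.

def szr_step(s, z, r, beta=0.001, alpha=0.0005, dt=0.1):
    """One forward-Euler step of the SZR equations."""
    ds = -beta * s * z                 # susceptibles bitten by zombies
    dz = beta * s * z - alpha * s * z  # new zombies minus zombies destroyed
    dr = alpha * s * z                 # zombies removed by humans
    return s + ds * dt, z + dz * dt, r + dr * dt

s, z, r = 500.0, 1.0, 0.0   # 500 humans, 1 zombie, 0 removed
for _ in range(1000):        # simulate 100 time units
    s, z, r = szr_step(s, z, r)
print(f"humans: {s:.1f}, zombies: {z:.1f}, removed: {r:.1f}")
```

With the bite rate exceeding the destruction rate, the humans are wiped out, which is a nice jumping-off point for discussing intervention policies (i.e., how many marshals it takes to flip the sign of dz/dt).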

I plan to talk about zombies, werewolves, and vampires in the stochastic processes course I am teaching this semester. Here is a previous exam question.

## how to find a football team’s best mix of pass and run plays using game theory

This is my third and final post in my series of football analytics slidecasts. After this one, just enjoy the Super Bowl. My first two posts are here and here.

This slidecast illustrates how to find

- the offensive team’s best mix of run and pass plays, and
- the defensive team’s best mix of run and pass defenses.

**What is a football team’s best mix of running and passing plays?**
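For a two-strategy version of this game, the optimal mix has a closed form. A minimal sketch treating play calling as a 2×2 zero-sum game, where the yardage payoffs are invented for illustration (they are not from the slidecast):

```python
# Offense picks run or pass; defense picks run defense or pass defense.
# Payoffs are the offense's expected yards; the defense wants to minimize them.
# All yardage numbers below are hypothetical.

def optimal_mix(a, b, c, d):
    """Closed-form mixed-strategy solution for a 2x2 zero-sum game:
       a = run vs. run defense,  b = run vs. pass defense,
       c = pass vs. run defense, d = pass vs. pass defense.
       Assumes no pure-strategy saddle point (otherwise play the pure strategy)."""
    denom = (a - b) - (c - d)
    p_run = (d - c) / denom          # probability the offense should run
    value = (a * d - b * c) / denom  # expected yards per play at equilibrium
    return p_run, value

p, v = optimal_mix(a=3, b=6, c=8, d=2)
print(f"run {p:.0%} of the time, expect {v:.2f} yards per play")
```

At equilibrium the offense earns the same expected yardage against either defensive call, which is exactly what makes the mix unexploitable: here, running two-thirds of the time yields 14/3 ≈ 4.67 yards per play no matter what the defense does.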