
the 30 most important seconds of your thesis defense

I’m on a lot of dissertation committees. While most of the committees are for students in my department, many are not in my area of operations research. I’m surprised at how hard it can be to follow the bigger picture and/or the technical details. Even when I completely understand the technical details, I usually do not know enough about the specific research niche to characterize the dissertation’s contribution or novelty.

I tell students that the most important part of their thesis or dissertation defense is the 30 seconds at the beginning when they summarize the key contributions of their research. I’ve been to defenses and proposal defenses where this was unclear, and confusion followed. A lot of confusion.

The 30 second elevator speech is an important skill, because academics (and non-academics too) spend a lot of time trying to sell their ideas (literally!) to people with technical expertise in another field. The 30 second elevator speech is a necessary but not sufficient first step to communicating with others, and a thesis or dissertation is a great place to get started with this.

Additionally, all committee members want to understand what a student’s research is trying to accomplish and how it fits into the literature. We need help to get there. Not all committee members seek to understand all the technical ideas, especially ideas outside their own areas, but we all want the Big Picture. Admittedly, guiding your committee through the Big Picture will take more than 30 seconds, but doing so will lead to fewer questions later on.

A good thesis offense starts by hitting your committee with a 30 second elevator speech, not a sword. Thesis defense comic courtesy of xkcd.

in defense of model simplicity

Recently, I found a few interesting articles/posts that all defend model simplicity.

An interview with Gregory Matthews and Michael Lopez about their winning entry in Kaggle’s NCAA tournament challenge “ML mania” suggests that it’s better to have a simple model with the right data than a complex model with the wrong data. This is my favorite quote from the interview:

John Foreman has a nice blog post defending simple models here. He argues for sometimes replacing a machine learning clustering model with an IF statement or two. He links to a published paper entitled “Very simple classification rules perform well on most commonly used datasets” by Robert Holte in Machine Learning that demonstrates his point. You can watch John talk about modeling in his very informative and enjoyable hour-long seminar here.

A paper called “The Bias Bias” by Henry Brighton and Gerd Gigerenzer examines our tendency to build overly complex models. Do complex problems require complex solutions? Not always. Here is the abstract:

In marketing and finance, surprisingly simple models sometimes predict more accurately than more complex, sophisticated models. Why? Here, we address the question of when and why simple models succeed — or fail — by framing the forecasting problem in terms of the bias-variance dilemma. Controllable error in forecasting consists of two components, the “bias” and the “variance”. We argue that the benefits of simplicity are often overlooked by researchers because of a pervasive “bias bias”: The importance of the bias component of prediction error is inflated, and the variance component of prediction error, which reflects an oversensitivity of a model to different samples from the same population, is neglected. Using the study of cognitive heuristics, we discuss how individuals and organizations can reduce variance by ignoring weights, attributes, and dependencies between attributes, and thus make better decisions. We argue that bias and variance provide a more insightful perspective on the benefits of simplicity than common intuitions that typically appeal to Occam’s razor.
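The bias-variance point in the abstract is easy to see in a tiny simulation. This is my own illustration, not anything from the paper: the “true” relationship, sample sizes, and noise level below are all made up. With small samples, a flexible model’s variance can swamp a simple model’s bias.

```python
# My own toy illustration of the bias-variance trade-off (not from the paper).
# The truth is a simple line; we repeatedly fit a simple and a flexible model
# to small noisy samples and compare average squared prediction error.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return 2.0 * x + 1.0  # the underlying "truth" is deliberately simple

test_x = np.linspace(0.0, 1.0, 200)
err_simple, err_flexible = [], []

for _ in range(500):                          # many small training samples
    x = rng.uniform(0.0, 1.0, 10)
    y = true_f(x) + rng.normal(0.0, 1.0, 10)
    for degree, errors in [(1, err_simple), (6, err_flexible)]:
        coef = np.polyfit(x, y, degree)       # fit a polynomial of this degree
        pred = np.polyval(coef, test_x)
        errors.append(np.mean((pred - true_f(test_x)) ** 2))

print("degree-1 model, mean squared error:", round(float(np.mean(err_simple)), 3))
print("degree-6 model, mean squared error:", round(float(np.mean(err_flexible)), 3))
```

The degree-1 fit wins easily here, not because the flexible model is more biased but because it is so sensitive to each small sample, which is exactly the variance component the authors argue we tend to neglect.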

What about discrete optimization models? 

All of these links address data science problems, like classifying data or building a predictive model. Operations research models are often trying to solve complicated problems with a lot of constraints and requirements. They have a lot of pieces that need to play nicely together. But even then, it’s often incredibly useful to ask the right question and then answer it using a simple model.

I have one example that makes a great case for simple models. In a recent paper (see citation below), Armann Ingolfsson and his coauthors examined the impact of model simplifications in models used to locate ambulances. Location problems like this one almost always use a coverage objective function, where a location is covered if an ambulance can respond to it within a fixed amount of time (e.g., 9 minutes). The question is how to represent the coverage function and how to aggregate the demand locations, two modeling choices that introduce model error. The coverage objective function can reflect either deterministic or probabilistic travel times. Deterministic travel times lead to binary objective function coefficients (an ambulance either covers a location or it doesn’t), whereas probabilistic travel times lead to real-valued objective coefficients that are a little “smoother” with respect to the distances between stations and locations (e.g., an ambulance can reach 75% of calls at this location within 9 minutes).

The paper examined which is worse: (a) a simple model with highly aggregated demand locations but realistic (probabilistic) travel times, or (b) a more complex model with finely disaggregated demand locations but less realistic (deterministic) travel times.
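To make the two objective functions concrete, here is a toy sketch of the modeling choice, written by me with made-up distances and demands (it is not the paper’s model or data). It picks p stations by brute force to maximize covered demand, once with 0/1 coverage coefficients and once with “smoother” probabilistic coefficients.

```python
# A toy maximal-covering location example (my own sketch, invented data).
from itertools import combinations

demand = [40, 25, 35, 20, 30]   # calls at each demand location
times = [                        # times[j][i] = minutes from station j to location i
    [4, 8, 12, 15, 9],
    [10, 5, 7, 11, 14],
    [13, 12, 6, 5, 8],
]
THRESHOLD = 9.0                  # "covered" means reachable within 9 minutes
p = 2                            # number of stations we can open

def binary_cover(t):
    # deterministic travel times: a location is covered (1) or not (0)
    return 1.0 if t <= THRESHOLD else 0.0

def probabilistic_cover(t):
    # a made-up smooth stand-in for P(response time <= 9 minutes); the paper
    # uses travel-time distributions estimated from data instead
    return max(0.0, min(1.0, 1.0 - (t - 5.0) / 10.0))

def best_stations(cover):
    best = None
    for subset in combinations(range(len(times)), p):
        covered = sum(
            d * max(cover(times[j][i]) for j in subset)
            for i, d in enumerate(demand)
        )
        if best is None or covered > best[0]:
            best = (covered, subset)
    return best

print("binary coverage:       ", best_stations(binary_cover))
print("probabilistic coverage:", best_stations(probabilistic_cover))
```

The probabilistic coefficients change gradually with distance, which is consistent with the paper’s finding that the probabilistic model is far more forgiving of demand aggregation.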

It turns out that the simple but realistic model (choice (a)) is better by a long shot. Here is a figure from the paper showing the coverage loss (model error) for the different models. The x-axis shows the aggregation level, and the y-axis shows the coverage loss (model error; higher is worse). The different curves correspond to different models. The blue line is the model with probabilistic travel times; the rest have deterministic travel times, with the binary coverage values determined by different travel time percentiles.

[Figure 4 from the paper: relative coverage loss versus aggregation level for the five models]

From the paper: “Figure 4 shows how relative coverage loss varies with aggregation level (on a log scale) for the five models, for a scenario with a budget for five stations, using network distances, and actual demand. This figure illustrates our two main findings: (1) If one uses the probabilistic model (THE BLUE LINE), then the aggregation error is negligible, even for extreme levels of aggregation and (2) all of the deterministic models (ALL OTHER LINES) result in large coverage losses that decrease inconsistently, if at all, when the level of aggregation is reduced”

From the conclusion:

In this paper, we demonstrated that the use of coverage probabilities rather than deterministic coverage thresholds reduces the deleterious effects of demand point aggregation on solution quality for ambulance station site selection optimization models. We find that for the probabilistic version of the optimization model, the effects of demand-point aggregation are minimal, even for high levels of spatial aggregation.

Citation:

Holmes, G., Ingolfsson, A., Patterson, R., & Rolland, E. (2014). Model specification and data aggregation for emergency services facility location. Submitted; last revision March 2014. [Supplement]

 

What is your favorite simple model?

it’s still safe to fly

Despite terrifying headlines like “2014 could be worst year for plane crash deaths in almost a decade,” it’s quite safe to fly. Operations research has played a role in demonstrating aviation safety over the years. Professor Arnie Barnett at MIT is a leading authority on aviation safety, and he has published several papers on this topic (see references below for four of them). He was recently on Voice of America in a 22-minute segment discussing aviation safety [Link here, HT @Supernetworks]. According to Barnett, flying in the first world is now 100 times safer than it was in the 1950s. Terrorism may be more of a threat to first world air safety than accidents. Most of Barnett’s papers focus on the safety of US domestic trunklines; however, some of his work has noted improvements in international safety.

The developing world is not quite as safe. However, Barnett nicely discusses benefits as well as costs. He points out that many things are not as safe in the developing world (drinking water, medical care, etc.) and that we should look at the entire safety of the trip and weigh that with the potential benefits of travel when making travel decisions. Likewise, there are potential solutions for improving air safety that may be too costly. Given limited budgets for things like (say) security, it generally makes sense to spend the budget on things that have the most impact. Barnett references RAND’s MANPADS study [Link] that concluded that “given the enormous cost of installing anti-missile systems compared with other homeland security measures, researchers suggest that officials explore less costly approaches in the near term while launching efforts to improve and demonstrate the reliability of the systems.”

This week, Arnie Barnett was also on More or Less on BBC Radio [Link].

Have the recent air events changed your willingness to fly domestically or internationally?

 

ON THE LINE: How Safe Are Our Skies?

Barnett, A., Abraham, M., & Schimmel, V. (1979). Airline safety: Some empirical findings. Management Science, 25(11), 1045-1056.

Barnett, A., & Higgins, M. K. (1989). Airline safety: The last decade. Management Science, 35(1), 1-21.

Barnett, A. (2000). Free-flight and en route air safety: A first-order analysis. Operations Research, 48(6), 833-845.

Czerwinski, D., & Barnett, A. (2006). Airlines as baseball players: Another approach for evaluating an equal-safety hypothesis. Management Science, 52(9), 1291-1300.

Air fatalities per year


land O links

Here are a few links for your holiday weekend reading:

  1. How to make mass transit sustainable once and for all by @trnsprttnst
  2. Why commute times don’t change much even as a city grows by @e_jaffe
  3. Blogging: is it good or bad for journal readership? The Incidental Economist weighs in.
  4. Harvard Business Review: Instinct can beat analytical thinking
  5. The hot hand fallacy: why we persist in seeing streaks
  6. The myth of the hot hand fallacy by @JSEllenberg
  7. Sports teams are immersed in “big data”
  8. Speaking of big data, an entire tumblr is devoted to cheesy pictures of Big Data (HT @mlesz1)

This is what Big Data looks like. Maybe.


an analysis of punk rock OR on twitter

I wanted to analyze my tweets, so I did a little programming with the twitteR package in R, which helped me download my last 781 tweets or so (about 10% of my tweets) by calling the Twitter API. Here is a wordcloud of the things I tweet about, with a few common words like “the” and “that” removed. It looks like I spend a lot of time tweeting about #orms and Wisconsin to @jefflinderoth!
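For anyone who wants to try something similar: I used the twitteR package in R, but the word-counting step behind a wordcloud looks roughly like the Python sketch below. The sample tweets and stopword list are invented for illustration, and the work of authenticating and downloading tweets from the Twitter API is omitted.

```python
# A rough sketch of the word-frequency step behind a wordcloud (invented data).
import re
from collections import Counter

tweets = [
    "Heading to the #orms seminar in Wisconsin today",
    "Great #orms talk on ambulance location models",
    "Badger game day in Wisconsin!",
]
stopwords = {"the", "that", "to", "in", "on", "a", "of", "and"}

words = [
    w
    for tweet in tweets
    for w in re.findall(r"[#@\w']+", tweet.lower())  # keep hashtags and handles
    if w not in stopwords
]
print(Counter(words).most_common(5))  # the counts a wordcloud would be drawn from
```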

A wordcloud of things I tweet about.


 

My 12 most favorited and/or retweeted tweets (of the last 781):


engineering achievements of the 20th century

Yesterday, I blogged about the NAE grand challenges and how operations research can contribute to them. You may also find the NAE’s list of 20th century engineering achievements interesting. The full list, with an explanation for each item, can be found at www.greatachievements.org. Here is the brief list, courtesy of the NAE publication The Bridge. The list is ordered by importance.

  1. Electrification: Vast networks of electricity provide power for the developed world.
  2. Automobile: Revolutionary manufacturing practices made cars more reliable and affordable, and the automobile became the world’s major mode of transportation.
  3. Airplane: Flying made the world accessible, spurring globalization on a grand scale.
  4. Water Supply and Distribution: Engineered systems prevent the spread of disease, increasing life expectancy.
  5. Electronics: First with vacuum tubes and later with transistors, electronic circuits underlie nearly all modern technologies.
  6. Radio and Television: These two devices dramatically changed the way the world receives information and entertainment.
  7. Agricultural Mechanization: Numerous agricultural innovations led to a vastly larger, safer, and less costly food supply.
  8. Computers: Computers are now at the heart of countless operations and systems that impact our lives.
  9. Telephone: The telephone changed the way the world communicates personally and in business.
  10. Air Conditioning and Refrigeration: Beyond providing convenience, these innovations extend the shelf-life of food and medicines, protect electronics, and play an important role in health care delivery.
  11. Highways: 44,000 miles of U.S. highways enable personal travel and the wide distribution of goods.
  12. Spacecraft: Going to outer space vastly expanded humanity’s horizons and resulted in the development of more than 60,000 new products on Earth.
  13. Internet: The Internet provides a global information and communications system of unparalleled access.
  14. Imaging: Numerous imaging tools and technologies have revolutionized medical diagnostics.
  15. Household Appliances: These devices have eliminated many strenuous, laborious tasks, especially for women.
  16. Health Technologies: From artificial implants to the mass production of antibiotics, these technologies have led to vast health improvements.
  17. Petroleum and Petrochemical Technologies: These technologies provided the fuel that energized the twentieth century.
  18. Laser and Fiber Optics: Their applications are wide and varied, including almost simultaneous worldwide communications, noninvasive surgery, and point-of-sale scanners.
  19. Nuclear Technologies: From splitting the atom came a new source of electric power.
  20. High-performance Materials: They are lighter, stronger, and more adaptable than ever before.

I find it interesting that OR hasn’t obviously contributed to these 20th century achievements. The 20th century achievements celebrate making things, not improving systems. Our world is becoming increasingly complex and interconnected, and this sometimes makes us more vulnerable and fragile. This is reflected in the list of 21st century challenges. We need operations research to improve connections, ensure efficiency, and introduce resilience. As highlighted in the NSF-sponsored report in yesterday’s post, OR will clearly make important contributions to 21st century challenges.

Last semester I team-taught a course for freshmen about engineering grand challenges. The idea was to talk about a theme (mine was Mega-cities) that cuts across all engineering disciplines to help students pick a major. It was interesting to talk about how, during their careers, they will solve problems that we don’t yet know exist. We talked about the 20th century achievements as a springboard for discussing what awaits us in the 21st century.

I sometimes tell my students that the world runs on eighth grade math: many important systems are shockingly simplistic, and there is plenty of room to apply operations research to make things work better. This isn’t universally true; many systems are becoming more complex and interconnected, and eighth grade math no longer cuts it. Higher education and graduate training are needed just to keep up.

The Society for Industrial and Applied Mathematics (SIAM) published a list of the top 10 algorithms of the 20th century [Link], in chronological order. The simplex algorithm is on the list (obviously!), despite George Dantzig being teased for assuming the world is linear.

  1. the Monte Carlo method or Metropolis algorithm, devised by John von Neumann, Stanislaw Ulam, and Nicholas Metropolis;
  2. the simplex method of linear programming, developed by George Dantzig;
  3. the Krylov Subspace Iteration method, developed by Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos;
  4. the Householder matrix decomposition, developed by Alston Householder;
  5. the Fortran compiler, developed by a team led by John Backus;
  6. the QR algorithm for eigenvalue calculation, developed by John Francis;
  7. the Quicksort algorithm, developed by Anthony Hoare;
  8. the Fast Fourier Transform, developed by James Cooley and John Tukey;
  9. the Integer Relation Detection Algorithm, developed by Helaman Ferguson and Rodney Forcade (given N real values x_1, ..., x_N, is there a nontrivial set of integer coefficients a_1, ..., a_N so that a_1*x_1 + ... + a_N*x_N = 0?);
  10. the Fast Multipole algorithm, developed by Leslie Greengard and Vladimir Rokhlin (calculating gravitational forces in an N-body problem normally requires on the order of N^2 calculations; the fast multipole method needs only on the order of N calculations by approximating the effects of groups of distant particles using multipole expansions).

What is your favorite 20th century OR contribution? What is your favorite anecdote about a complex system relying on eighth grade math?

 


engineering grand challenges that operations research can help solve

In May, the report Operations Research – A Catalyst for Engineering Grand Challenges was delivered to the National Science Foundation [grant info here]. The report outlines operations research grand challenges for the next century, and they reflect the National Academy of Engineering’s list of grand challenges [Link]. The NSF-funded project was a great way to highlight the importance of operations research, relative to other STEM fields, in solving important societal problems, and to prioritize directions for our field. The report was written by a committee composed of:

  • Suvrajeet Sen, Chair, University of Southern California
  • Cynthia Barnhart, Massachusetts Institute of Technology
  • John R. Birge, University of Chicago
  • E. Andrew Boyd, PROS
  • Michael C. Fu, University of Maryland
  • Dorit S. Hochbaum, University of California, Berkeley
  • David P. Morton, University of Texas at Austin
  • George L. Nemhauser, Georgia Institute of Technology
  • Barry L. Nelson, Northwestern University
  • Warren B. Powell, Princeton University
  • Christine A. Shoemaker, Cornell University
  • David D. Yao, Columbia University
  • Stefanos A. Zenios, Stanford University

Executive summary. The growth and success of Operations Research (OR) depends on our ability to transcend disciplinary boundaries and permeate the practices of other disciplines using ideas, tools, and experience of the OR community. This report is intended to continue the tradition of transcending disciplinary boundaries by using the U.S. National Academy of Engineering’s (NAE) Engineering Grand Challenges as a source of inspiration for the OR community. Our goal is to view these challenges as an opportunity for the OR community to play the role of a catalyst – utilizing OR to facilitate some pressing technological challenges facing humanity today.

A panel of thought-leaders convened by the NAE (and facilitated by NSF) unveiled its vision of the Engineering Grand Challenges in 2008. Over the past several years, this report has invited (and received) feedback from international leaders and professional organizations, including the Institute for Operations Research and the Management Sciences (INFORMS). As input from the OR community, several past Presidents of INFORMS prepared a white paper, an abbreviated version of which appeared as the President’s Column in OR/MS Today (April 2008). As predicted, the OR community has been active in many of the thematic areas of the NAE Grand Challenges via publications in topical research areas of our flagship journals, joint major conferences, and other collaborative efforts. The question of whether there are ways to dovetail OR with these challenges is not the issue. Of importance is whether there is a need to introduce greater structure for research and exchange between domain experts in core areas of the engineering Grand Challenges and the OR community.

In order to accelerate the growth, this report recommends a two-pronged approach: (1) An NSF announcement of “Grand Challenge Analytics” as a major EFRI topic, and (2) an NSF sponsored institute for “Multidisciplinary OR and Engineering” which will be dedicated to coalescing a general-purpose theory, as well as building a community to support “Grand Challenge Analytics”. Together, these initiatives are likely to unleash a vast array of methodologies onto the engineering Grand Challenges of today. Such an effort could be likened to the manner in which the interface between OR and computer/communications science/engineering has propelled the development of the Internet. Similarly, the long-standing exchanges between the INFORMS and Economics communities has produced deep results, many of which have been honored by the Nobel Prize in Economics. Drawing upon such successes, we propose a new era in which the OR community reaches out to domains that are more directly connected to the NAE Grand Challenges. This more structured approach, driven by NSF sponsorship of research and thematic exchanges (workshops), will result in well-defined outcomes, leading to a strong foundation for the NAE Grand Challenges.

Challenges areas from the report:

  1. OR: A General-Purpose Theory of Analytics
    “The time has come to engage both domain experts as well as OR experts, so that policies/decisions become an integral part of analysis, not an afterthought.”
  2. OR for sustainability
    “The Earth is a planet of finite resources, and its growing population currently consumes them at a rate that cannot be sustained. Utilizing resources (like fusion, wind, and solar power), preserving the integrity of our environment, and providing access to potable water are the first few steps to securing an environmentally sound and energy-efficient future for all of mankind.”
  3. OR for security
    “As our interconnected systems grow in complexity, having a trusted operational model is even more essential for assessing system vulnerabilities and, in turn, addressing the challenge of how to secure that system.”
  4. OR for human health.
    Also see my last blog post on healthcare challenges – I’m glad the White House and the OR community agree with this one!
    “One of the most significant problems facing the health care system is keeping costs under control while providing high levels of service. Doing so requires a careful analysis of costs and benefits, but as Kaplan and Porter (2011) argue, “The biggest problem with health care is that we’re measuring the wrong things the wrong way.” “
  5. OR for Joy of Living
    “For example, reducing traffic congestion in urban areas, improving response times of first-responders, designing smart, energy efficient homes, and others raise many novel OR questions. One such example is an application related to predicting movie recommendations associated with the so-called “Netflix Prize” problem. Other “joys of life,” such as sports, have also seen many applications of analytics; in addition to the well publicized baseball movie “Moneyball,” there is Major League Baseball scheduling which is done routinely using OR models. In this sense, OR casts such a wide net in the “Joy of Living” area, that the following subsections (pertaining only to the NAE Grand Challenges) explicitly discuss only a small subset of applications for “Joy of Living.” “

Report Recommended Actions

Action 1. NSF should announce an EFRI (Emerging Frontiers in Research and Innovation) topic for “Grand Challenge Analytics”. These proposals should be judged not only on their impact on a Grand Challenge problem, but also on the novel methodology that will be developed as a result of the research. EFRI is a well-established program within NSF, and given the groundwork of this report, we believe that NSF program officers will find it relatively straightforward to craft an RFP on this topic.

Action 2. Concurrently with Action 1, we recommend the formation of an Institute which will invite both EFRI-funded researchers as well as others from the field to participate in workshops which will explore common themes resulting from “Grand Challenge Analytics” projects. These workshops will not only help cross-fertilization between projects, but also help develop a general-purpose theory of analytics.

My Recommended Actions

Submit your student paper to the INFORMS Doing Good with Good OR student paper competition next year

Submit your paper to the INFORMS Section of Public Programs, Services, and Needs Best Paper Competition (due on June 15!)

 

What do you think of the OR grand challenges?


health care is a systems engineering problem

A new report by the President’s Council of Advisors on Science and Technology (PCAST) is all about how health care needs systems engineering solutions [Press release here]. The report, entitled Better Health Care and Lower Costs: Accelerating Improvement through Systems Engineering, outlines the various ways in which industrial and systems engineering can help. Several OR methods and tools are listed in the report, including operations management, queuing theory, simulation, and supply-chain management.

Rising healthcare costs are the motivation for this report. The United States spends more (much more!) on healthcare than any other country.

Healthcare costs by country, courtesy of the WSJ. “In 2011, the most recent year in which most of the countries reported data, the U.S. spent 17.7% of its GDP on health care, whereas none of the other countries tracked by the OECD reported more than 11.9%. And there’s a debate about just how well the American health-care system works. As the Journal reported recently, Americans are living longer but not necessarily healthier.”

Healthcare is expensive, and costs are rising in every country, but they are rising much faster in the US than anywhere else on the planet. That is unsustainable. If we forecast healthcare costs for our children and grandchildren, we can easily imagine a future where we spend so much on healthcare that we cannot sustain other important programs that benefit society (like education!).

Growth in healthcare costs is higher in the US than in other countries.

The report addresses the healthcare cost problem:

This report comes at a critical time for the United States. Health-care costs now approach a fifth of the U.S. economy, yet a significant portion of those costs is reportedly “unnecessary” and does not lead to better health or quality of care. Millions more Americans now have health insurance and therefore access to the health care system as a result of the Affordable Care Act (ACA). With expanded access placing greater demands on the health-care system, strategic measures must be taken not only to increase efficiency, but also to improve the quality and affordability of care.

Other industries have used a range of systems-engineering approaches to reduce waste and increase reliability, and health care could benefit from adopting some of these approaches. As in those other industries, systems engineering has often produced dramatically positive results in the small number of health-care organizations that have implemented such concepts. These efforts have transformed health care at a small scale, such as improving the efficiency of a hospital pharmacy, and at much larger scales, such as coordinating operations across an entire hospital system or across a community. Systems tools and methods, moreover, can be used to ensure that care is reliably safe, to eliminate inefficient processes that do not improve care quality or people’s health, and to ensure that health care is centered on patients and their families. Notwithstanding the instances in which these methods and techniques have been applied successfully, they remain underutilized throughout the broader system.

It makes 7 main systems engineering recommendations:

  • Recommendation 1: Accelerate the alignment of payment incentives and reported information with better outcomes for individuals and populations.
  • Recommendation 2: Accelerate efforts to develop the Nation’s health-data infrastructure.
  • Recommendation 3: Provide national leadership in systems engineering by increasing the supply of data available to benchmark performance, understand a community’s health, and examine broader regional or national trends.
  • Recommendation 4: Increase technical assistance (for a defined period—3-5 years) to health-care professionals and communities in applying systems approaches.
  • Recommendation 5: Support efforts to engage communities in systematic healthcare improvement.
  • Recommendation 6: Establish awards, challenges, and prizes to promote the use of systems methods and tools in health care.
  • Recommendation 7: Build competencies and workforce for redesigning health care.

Markov chains for ranking sports teams

My favorite talk at ISERC 2014 (the IIE conference) was “A new approach to ranking using dual-level decisions” by Baback Vaziri, Yuehwern Yih, Mark Lehto, and Tom Morin (Purdue University) [Link]. They used a Markov chain to rank Big Ten football teams by their ability to recruit prospective players. A player accepts one of several offers; the team that gets the player is the “winner” and the other teams that made offers are the “losers.” We end up with a matrix P where element (i,j) is the number of times team j beats team i.

The matrix is then normalized so that each row sums to 1, giving a Markov chain transition matrix, and the chain is solved for its limiting distribution. The limiting probability of being in state j is interpreted as the proportion of time that team j is the best team. Therefore, the limiting distribution can be used to rank teams from best to worst.
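Here is a minimal sketch of that computation (my own code with made-up head-to-head counts, not the authors’): turn the “who beat whom” counts into a row-stochastic matrix and iterate to the limiting distribution.

```python
# A minimal Markov-chain ranking sketch (my own code, invented win counts).
import numpy as np

teams = ["Michigan", "Wisconsin", "Indiana", "Iowa"]

# wins[i][j] = number of times team j "beats" team i (recruits won, games won, etc.)
wins = np.array([
    [0, 2, 1, 1],
    [3, 0, 2, 1],
    [1, 1, 0, 2],
    [2, 2, 1, 0],
], dtype=float)

# Small smoothing keeps the chain irreducible; then normalize each row to sum to 1.
P = wins + 0.01
P = P / P.sum(axis=1, keepdims=True)

# Power iteration: pi converges to the limiting (stationary) distribution.
pi = np.full(len(teams), 1.0 / len(teams))
for _ in range(1000):
    pi = pi @ P

# A larger limiting probability is interpreted as a better team.
for prob, team in sorted(zip(pi, teams), reverse=True):
    print(f"{team:12s} {prob:.3f}")
```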

Using this method with 2001-2012 data, they found that Wisconsin was ranked fourth, much higher than the experts ranked it, which explains why the Badgers have been to 12 bowl games in a row. Illinois (my alma mater) was ranked second to last, only above lowly Indiana.

I used this method on regular season 2014 Big Ten basketball wins and ended up with the following ranking. I also include the official ranking based on win-loss record for comparison. We see large discrepancies for only two teams: Michigan State (which is over-ranked by its win-loss record) and Indiana (which is under-ranked by its win-loss record). The Markov chain method ranks these two teams differently because Indiana had high-quality wins despite not winning very frequently, and because Michigan State lost to a few bad teams when it was down several players due to injuries.

 

Rank   Markov chain ranking   Win-loss record ranking
  1    Michigan               Michigan
  2    Wisconsin              Wisconsin
  3    Indiana                Michigan State
  4    Iowa                   Nebraska
  5    Nebraska               Ohio State
  6    Ohio State             Iowa
  7    Michigan State         Minnesota
  8    Minnesota              Illinois
  9    Illinois               Indiana
 10    Penn State             Penn State
 11    Northwestern           Northwestern
 12    Purdue                 Purdue

Sophisticated methods are a little more complex than this. Paul Kvam and Joel Sokol estimate the conditional probabilities in the transition probability matrix of the logistic regression Markov chain (LRMC) model using logistic regression [Paper link here]. The logistic regression yields an estimate of the probability that a team with a margin of victory of x points at home is better than its opponent, and thus the model looks at margin of victory, not just wins and losses.
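Here is a rough sketch of that logistic regression step with toy data I made up (it is not Kvam and Sokol’s data or code): regress an indicator of “the home team turned out to be the better team” on the home margin of victory, and use the fitted probabilities, rather than 0/1 win indicators, to build the transition matrix.

```python
# A toy sketch of the LRMC idea: estimate P(home team is better | home margin).
# The margins and labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

x = np.array([[-20], [-10], [-3], [1], [5], [12], [25]])  # home margin of victory
y = np.array([0, 0, 0, 1, 1, 1, 1])                       # 1 = home team judged better

model = LogisticRegression().fit(x, y)

# Probability that a team winning by 7 points at home is the better team;
# values like this, not raw wins and losses, fill the LRMC transition matrix.
print(model.predict_proba([[7]])[0, 1])
```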

 


land O links

Assorted links.

  1. How to use math to crush your friends at Monopoly like you’ve never done before
  2. The NFL uses Gurobi to set the NFL schedule.
  3. John Foreman (@john4man) has a blog post on modeling and simplicity. The post is about AI models such as classifiers but is more widely applicable (HT @HarlanH)
  4. This “Mathematical Dialect Quiz” is a lot of fun.
  5. Everything is a sensor for everything else
  6. How to marry the right girl. A mathematical solution.
