**The games:**

1. Holiday Bowl (December 28th): California (Berkeley) v. Texas

2. Alamo Bowl (December 29th): Washington v. Baylor

3. Chick-fil-A Bowl (December 31st): Virginia v. Auburn

4. Gator Bowl (January 2nd): Ohio State v. Florida

5. Outback Bowl (January 2nd): Michigan State v. Georgia

6. Capital One Bowl (January 2nd): Nebraska v. South Carolina

7. Fiesta Bowl (January 2nd): Stanford v. Oklahoma State

8. Rose Bowl (January 2nd): Wisconsin v. Oregon

9. Sugar Bowl (January 3rd): Michigan v. Virginia Tech

10. Orange Bowl (January 4th): West Virginia v. Clemson

11. Cotton Bowl (January 6th): Kansas State v. Arkansas

12. BCS National Championship Game (January 9th): Alabama v. LSU

**The Twist:**

*Note: Examples below use last year’s matchups*

Rather than merely picking the winner of each game, you will need to forecast each team's chance of winning. For instance, instead of predicting:

*Auburn beats Oregon in the National Championship Game*

*Wisconsin beats TCU in the Rose Bowl*

you will need to say something along the lines of:

*Auburn 57%, Oregon 43%*

*Wisconsin 75%, TCU 25%*

This means you think Auburn has a 57% chance of winning the National Championship and Wisconsin has a 75% chance of winning the Rose Bowl.

**The scoring system (RMSE):**

The winning forecaster will be determined by whose predictions have the **lowest** root mean square error (RMSE).

The “errors” are the differences between the predicted result and the actual result. For instance, if a forecaster projects that Oregon has a 55% chance of winning and they lose, that’s an error of 55% or 0.55. If, on the other hand, this forecaster predicts Oregon has a 55% chance of winning and they do win, they have an error of 45% (100% – 55%) or 0.45.

To remember how to calculate root mean square errors, you can read the phrase inside out (“square errors, take mean, then root”):

1. Square the errors for each of the games.

2. Find the mean or average of these squared errors.

3. Take the square root of this mean.
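The three steps above can be sketched as a short function (the `rmse` helper and its argument names are illustrative, not part of the contest rules):

```python
import math

def rmse(predicted_probs, outcomes):
    """Root mean square error of win-probability forecasts.

    predicted_probs: probability assigned to each pick winning (0.0 to 1.0)
    outcomes: 1 if that pick won, 0 if it lost
    """
    # 1. Square the errors for each of the games
    squared_errors = [(p - o) ** 2 for p, o in zip(predicted_probs, outcomes)]
    # 2. Find the mean of these squared errors
    mean_squared_error = sum(squared_errors) / len(squared_errors)
    # 3. Take the square root of this mean
    return math.sqrt(mean_squared_error)

# Example from above: Oregon is given a 55% chance and loses (outcome 0),
# so the error in that single game is 0.55.
print(round(rmse([0.55], [0]), 2))  # 0.55
```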

Here are three sample forecasting systems and how they would score:

1. **The Monkey System:** assigns each team a 50% chance in each game.

The Monkey’s error in each game is 0.5 (50%). So, in each game the Monkey’s “square error” is 0.5^2 = 0.25. The mean of the square errors is, of course, 0.25, and the square root of this mean is sqrt(0.25) = **0.5**. (Note: when all of the errors are the same size, the RMSE equals the average error. In all other situations, it is larger than the average error.) Notice that the Monkey’s RMSE will always be 0.5 regardless of the outcomes of the games.

2. **The Grasshopper:** assigns all of its chosen winners a 100% chance of winning and correctly picks the winner in 8 out of 12 games.

The Grasshopper has no error in its eight “wins” but an error of 1 (100%) in each of the four losses. Therefore, the Grasshopper has a square error of 1 in four games and 0 in the other eight, for a mean square error of 4/12 = 0.333. The RMSE is sqrt(0.333) = **0.577.**


Despite the fact that the Grasshopper correctly picked 8 of the 12 winners, the Monkey beat the Grasshopper (lower RMSE)! Why? The Grasshopper thought that each of its favorites had a 100% chance of winning. The Monkey thought these teams had a 50% chance of winning. The Grasshopper’s picks actually won 8 of 12, or 67%, of the games. The Monkey was closer to the truth.

3. **The Ant:** assigns all of its chosen winners a 60% chance of winning the game and correctly picks the winner in 8 of 12 games.

The Ant has 0.4 or 40% (100% – 60%) error in the 8 wins and 0.6 or 60% (60% – 0%) error in the 4 losses. In each of the eight wins the Ant has a square error of 0.4^{2} = 0.16 and in each of the four losses the Ant has a square error of 0.6^{2} = 0.36. The mean of these squared errors is (8*0.16 + 4*0.36)/12 = 0.227. The Ant has a root mean square error of sqrt(0.227) = **0.476.**
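The three sample forecasters’ scores can be checked with a few lines of arithmetic (a minimal sketch; the error lists simply encode the win/loss patterns described above):

```python
import math

def rmse(errors):
    # "Square errors, take mean, then root"
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Monkey: error of 0.5 in all 12 games
monkey = rmse([0.5] * 12)
# Grasshopper: error of 0 in its 8 wins, 1 in its 4 losses
grasshopper = rmse([0.0] * 8 + [1.0] * 4)
# Ant: error of 0.4 in its 8 wins, 0.6 in its 4 losses
ant = rmse([0.4] * 8 + [0.6] * 4)

print(round(monkey, 3), round(grasshopper, 3), round(ant, 3))
# 0.5 0.577 0.476
```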


**Unlike the over-confident Grasshopper, the cautious Ant beats the Monkey because the Ant had a better understanding of the uncertainty of each pick.**

**Useful Models:**

A number of mathematicians, scientists, writers, and coaches have rated these teams. Some use formulas to determine team quality; others are more subjective.

Here are some resources you might want to look at before making your picks:

**Computer Rating Systems:**

Anderson & Hester

Richard Billingsley

Colley Matrix

Massey Ratings

Wolfe Ratings

Sagarin Ratings

Dolphin Ratings

**Polls:**

Harris Poll

AP Poll

USA Today Coaches Poll

*Note on gambling lines: you may see lines that look like the following.*

California +140

Texas -160

These values designate underdogs and favorites. In this case, Texas is the favorite (-160) and California is the underdog (+140).

*Because Texas is favored, you must risk $1.60 (-160) in order to win $1.00. On the flip side, you must risk $1.00 in order to win $1.40 (+140) on California.*
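A line like this can be turned into a rough win probability, which may be a useful starting point for your forecasts. A sketch (the `implied_prob` helper is illustrative; note that a bookmaker's two probabilities sum to slightly more than 1 because of the built-in margin, or "vig"):

```python
def implied_prob(moneyline):
    """Implied win probability from an American moneyline.

    -160 means risk $1.60 to win $1.00; +140 means risk $1.00 to win $1.40.
    The implied probability is (amount risked) / (total returned).
    """
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

texas = implied_prob(-160)       # 160 / 260, about 0.615
california = implied_prob(140)   # 100 / 240, about 0.417
print(round(texas, 3), round(california, 3))
# 0.615 0.417
# The two values sum to about 1.032; the excess over 1 is the vig,
# so you may want to rescale them to sum to 1 before using them as forecasts.
```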