**Definitions and General Information**

The official definition of Wilson's College Football Performance Rating is: A team's rating is the average of its opponents' ratings plus 100 for a win or minus 100 for a loss. Wins that lower the rating and losses that raise the rating count one twentieth as much as the other games. Post-season games count double.
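The Wilson definition above can be read as a weighted average over a team's games. The sketch below is my own illustration of one update pass for a single team; the function and variable names are assumptions, not Wilson's actual code, and "wins that lower the rating" is interpreted relative to the team's rating from the previous pass:

```python
def wilson_update(team_rating, games, ratings):
    """One pass of the Wilson update for a single team (illustrative sketch).

    team_rating -- this team's rating from the previous iteration
    games       -- list of (opponent, won, postseason) tuples for this team
    ratings     -- current ratings of all teams, from the previous iteration
    """
    total = weight = 0.0
    for opponent, won, postseason in games:
        # each game contributes the opponent's rating, plus 100 for a win
        # or minus 100 for a loss
        value = ratings[opponent] + (100.0 if won else -100.0)
        w = 2.0 if postseason else 1.0  # post-season games count double
        # wins that would lower the rating, and losses that would raise it,
        # count one twentieth as much as the other games
        if (won and value < team_rating) or (not won and value > team_rating):
            w /= 20.0
        total += w * value
        weight += w
    return total / weight if weight else team_rating
```

For example, a team rated 200.0 that beats a 50.0-rated opponent gets very little credit for that win, since the game's value of 150.0 would otherwise drag its rating down.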

The official definition of the Wobus Rating is: Each team's rating equals its number of relevant wins, minus its number of relevant losses, plus the sum of the ratings of its relevant opponents, all divided by one more than its number of relevant opponents. ("Relevant" means the opponent's rating is within 1.0 of the team in question.)
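The Wobus definition translates almost directly into code. The sketch below is my own illustration of one update pass for a single team; the names and data layout are assumptions, not Wobus's actual code:

```python
def wobus_update(team, ratings, schedule):
    """One pass of the Wobus update for a single team (illustrative sketch).

    ratings  -- current ratings of all teams, from the previous iteration
    schedule -- schedule[team] is a list of (opponent, won) tuples
    """
    r = ratings[team]
    # "relevant" games: the opponent's rating is within 1.0 of this team's
    relevant = [(opp, won) for opp, won in schedule[team]
                if abs(ratings[opp] - r) <= 1.0]
    wins = sum(1 for _, won in relevant if won)
    losses = len(relevant) - wins
    opponent_sum = sum(ratings[opp] for opp, _ in relevant)
    # relevant wins minus relevant losses, plus the sum of the relevant
    # opponents' ratings, divided by one more than the number of
    # relevant opponents
    return (wins - losses + opponent_sum) / (1 + len(relevant))
```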

**Non-Mathematical Explanation ("How the heck are you doing this?")**

A team's rating is based on:

- its number of wins and losses in games played to date;
- the ratings of the opponents it has played.

Thus, each team's rating depends partly on its strength of schedule. The written formulas above are turned into computer code, which is used to determine the ratings. A computer program produces all the teams' ratings at once; since we don't initially know how strong each opponent is, the definition is applied over and over until each team's rating is consistent with those of all the other teams.

**Somewhat More Mathematical Explanation**

Each team is assigned a "default" initial rating. Then, we
calculate new ratings, based on the wins, losses, and opponents of each team.
This changes the initial rating, of course. We go through all the teams
again, using their wins and losses, and the *new* ratings from the previous
calculation. We keep going through all the teams (a process called
iteration) until the changes from step to step go to zero. The ratings will eventually converge (see
Colley's explanation for a
more eloquent discussion of rating system convergence); Wilson's system usually
does so within about 1000 iterations, Wobus within about 100. As the
season progresses, the two methods produce very similar results (correlations >
0.98).
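The iteration described above can be sketched generically. The update rule below is a simplified Wobus-style average (with the "relevance" window dropped for brevity), used only to demonstrate the fixed-point iteration; none of this is the authors' actual code:

```python
def iterate_ratings(schedule, tol=1e-9, max_iters=1000):
    """Iterate a per-team update until the ratings stop changing (sketch).

    schedule -- schedule[team] is a list of (opponent, won) tuples
    """
    ratings = {team: 0.0 for team in schedule}  # "default" initial rating

    def update(team, current):
        games = schedule[team]
        wins = sum(1 for _, won in games if won)
        losses = len(games) - wins
        opponent_sum = sum(current[opp] for opp, _ in games)
        return (wins - losses + opponent_sum) / (1 + len(games))

    for _ in range(max_iters):
        new = {team: update(team, ratings) for team in schedule}
        # stop when the largest change from the previous step is negligible
        if max(abs(new[t] - ratings[t]) for t in schedule) < tol:
            return new
        ratings = new
    return ratings
```

With just two teams, where A beat B, this converges to ratings of 1/3 for A and -1/3 for B: both start at the default, and the iteration pushes the winner above the loser until nothing changes anymore.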

**Basic Premise of a Rating System, and some General Thoughts**

As in any system, the premise here is: *points are awarded for winning,
and taken away for losing*.

See the page by John Wobus for a good discussion of computer ratings vs. human polls, as well as discussion of the "RPI" used in college basketball. A comprehensive list of college football ratings is available from David Wilson.

Ratings are based **solely upon who beat whom:**

- The score of the game is not important.
- The computer does not consider where the game was played.
- Weather conditions, injuries, or "how a team looked" are irrelevant.
- Rivalries, prestige, and previous year's results mean nothing.
- A team is rated *simply on the games played in the current season, and who won and lost those games.* I can't stress this enough!

I make no claim that either of these statistical methods is superior to the
other, nor do I make any claim about the appropriateness of their use. In all honesty, the reason
I provide both is because I *like them both*. I am intrigued that two
different formulas can produce similar results. The authors have been
gracious to let me use their source code, and are happy to see their work
getting some use. To Messrs. Wilson and Wobus, thank you!!

I believe there is a better way to seed (and perhaps even determine) playoff teams. I'd like to see something not based on the politics of assigning areas and regions and classes, but instead based on an impartial evaluation of team strength. Put more succinctly, I'd like to see our playoff teams determined by some sort of ranking system. The state of Ohio has been doing this for a number of years, but I dare say that Alabama is well over a decade away from moving to a system like this--if ever. Coaches and principals don't exactly welcome radical changes. We'll see.