The network rankings told us a lot about the inadequacies of the speculation that dominates other approaches, as well as the dangers of formal models built on limited and incomparable observations. This ranking has never placed USC, Alabama, Oregon, or Kansas State at #1 (yet), not because it is a predictive genius, but because the evidence was simply never consistent with the media hype.
As a comparison, going into this weekend:
BCS: Kansas State #1, Oregon #2
AP Poll: Oregon #1, Kansas State #2
Coaches Poll: Oregon #1, Kansas State #2
Network Rankings: Oregon #5, Kansas State #8
Let's be clear: no one could have predicted what happened on Saturday with Oregon and Kansas State. In the pdf explanation linked below, I am very explicit that the rankings are not intended to make predictions; attempting to predict is the problem with the other approaches. What this weekend does tell us is that rankings based on speculation are dangerous and inaccurate. What the Network Rankings show is that, based on the evidence going into this weekend, Oregon and Kansas State had not demonstrated that they were #1 and #2.
The BCS Rankings released later today will most likely have Notre Dame #1 and Alabama #2, which is what the Network Rankings had both last week and this week (Ohio State isn't bowl eligible). The Network Rankings have had Notre Dame at #1 or #2 since Week 7, the first week that every team in the FBS was connected to every other by wins and losses.
Notes:
- I've gotten some great questions from people about how this all works, and I'll keep addressing them. Keep the questions coming. One of the motivations for doing this is the lack of transparency in existing approaches. This ranking is intended to be both conceptually and methodologically simple, and clear to anyone interested in college football.
- Why do you get extra points just because you've played more games? Shouldn't you standardize the ranking by games played? This question was sparked by my comment a few weeks back that Ohio State had played one more game than the other top-ranked teams, resulting in a slightly higher ranking, and that other teams were likely to catch up after Ohio State's bye week. Methodologically, standardizing isn't inherently a bad idea, and it's something I've thought about a lot. My answer at the moment, however, is no. The underlying rationale behind the rankings is to base a rank purely on a team's body of work. The value of the approach is that it uses the network of wins and losses between teams in the FBS to determine the relative value of each team's "work". If you've won more games, you've done more work, and your ranking should reflect that additional work, scaled by the quality of the teams you defeated (which is in turn determined by their "work"; see the pdf). To be clear, the number next to each team is the "average reciprocal distance" across the network of wins minus the same measure across the network of losses: an indicator of how central a team is in the web of college football victories, or in other words, their total body of work. A sketch of the calculation follows these notes.
- Well, what about conference championships? The answer above covers conference championships as well. If you are in a conference that has a conference championship game, and you win it, you should be ranked higher. Why? You've done more work and you've earned it, and that's the point of this exercise. I doubt anyone would say that if you are forced to play one more game than everyone else, and that game is against a team like Alabama or Georgia, and you win it, then somehow that game shouldn't count.
- What about FCS victories or losses? This is also a great question. To my knowledge, all of the rankings essentially ignore FCS games. The conceptual reason is pretty simple: the purpose of the ranking is to determine the best team in the FBS, so the ranking is limited to FBS games. Alabama doesn't win anything for crushing the Catamounts. If the goal is to rank FBS teams, this is fine. If a team loses to an FCS team, yes, they should be punished severely, and by ignoring FCS games we're not capturing that. But in practice this is rare, and more importantly it is unlikely to affect anything, because a team that loses to an FCS team is probably going to lose to a lot of FBS teams on its regular schedule. So we're alright treating these games as irrelevant exhibition activities. That said, I'm not conceptually opposed to an undertaking where the goal is to create a full ranking of all college football teams, where every game, regardless of division, is an interesting piece of evidence. Hypothetically speaking, if we were to do this, I'm sure there are a number of FCS teams that are better than the bottom-ranked FBS teams, and it would be fun to see how that breaks down. But if we're going to be non-arbitrary, we'd have to go a few steps further. FCS teams schedule Division II teams, so by the same logic we'd have to include Division II. Division II teams schedule Division III teams, so we'd have to include them as well. Before you know it, we have hundreds of football teams in our ranking, from Alabama to the Rhodes College Lynx-cats. I think this would be a fun exercise, but unless someone wants to fund this project enough to hire a few people to code all the wins and losses each weekend, I don't have the time on Sundays to do it myself. Time constraints aside, it is extremely unlikely (if not impossible) that it would meaningfully affect the top 25 teams in the FBS.
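For anyone who wants to see the mechanics concretely, here is a minimal sketch of the calculation: build a directed graph where an edge from A to B means A beat B, sum the reciprocal shortest-path distances from each team to every team it can reach through chains of wins, do the same on the reversed "losses" graph, and take the difference. The networkx library, the (winner, loser) input format, and the handful of example games are just for illustration; the exact scaling of the published numbers is described in the pdf, so this tiny example won't reproduce them, but it captures the core idea.

```python
# Sketch of a reciprocal-distance score on the win/loss network (illustrative only).
import networkx as nx

# One (winner, loser) pair per FBS game; the three games below are just examples.
games = [
    ("Notre Dame", "Wake Forest"),
    ("Alabama", "Auburn"),
    ("Oregon State", "California"),
]

wins = nx.DiGraph()
wins.add_edges_from(games)          # edge A -> B means "A beat B"
losses = wins.reverse(copy=True)    # reversed edges form the losses network

def reciprocal_distance_score(graph, team):
    """Sum of 1/d(team, other) over every team reachable from `team`.

    Teams not connected to `team` by any chain of games contribute 0, which is
    why the rankings only settle down once the whole FBS is connected (Week 7).
    Dividing by the number of other teams gives an "average" version; that only
    rescales the scores, so the ordering is unchanged.
    """
    dists = nx.single_source_shortest_path_length(graph, team)
    return sum(1.0 / d for d in dists.values() if d > 0)

# Score = centrality in the wins network minus centrality in the losses network.
scores = {
    team: reciprocal_distance_score(wins, team) - reciprocal_distance_score(losses, team)
    for team in wins
}

for rank, (team, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank:2d}. {team} ({score:.2f})")
```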
Explanation: A Simple and Fair Ranking Based on Wins and Losses
Top 25 (Full Rankings)
Each team's score, the value of its wins minus the value of its losses, is shown in parentheses.
1. Notre Dame (51.37)
2. Ohio State (44.36)
3. Alabama (44.02)
4. Florida (43.44)
5. LSU (41.70)
6. Georgia (40.25)
7. Texas A&M (38.83)
8. South Carolina (37.93)
9. Mississippi State (31.91)
10. Oregon (27.33)
11. Stanford (26.26)
12. Kansas State (24.85)
13. Oklahoma (22.88)
14. Oregon State (22.87)
15. Texas (20.59)
16. Nebraska (19.37)
17. Clemson (18.18)
18. Michigan (17.97)
19. UCLA (17.53)
20. Arizona (17.41)
21. Rutgers (17.03)
22. Iowa State (16.77)
23. Washington (16.21)
24. San Jose State (14.69)
25. Baylor (14.58)