As part of our annual Countdown to XC, I keep and update yearly stats on how much a program improves from the end of one XC season to the end of the next. I call this metric Improvement Rating, and I have 6 years of data for both 5K and 3 mile races, covering 70+ of the state's top programs over that time span. Every summer I add another year of data as well as more schools, using returning rankings to find "new" programs to include.
Here are the factors included in my Improvement Ratings:
- Year-to-year improvement: I use a team's top 5 returning runners from the end of one season as the baseline - it serves as a projected top 5 for the next season without any improvement factored in. Then I compare that to the school's actual top 5 at the end of the next season. Improvement is listed as a positive (the team is faster than their projected top 5), and decline is listed as a negative (the team's actual top 5 turned out slower than their projected top 5). I can then calculate average improvement over a 3-year or 5-year time period (and next year I will have 7-year data as well).
- Strength of the program: My premise here is that it is harder to make big improvements with a team that is already well-trained, as individual runners tend to make gains more slowly once they have reached a high level. In other words, it is harder for a defending state champion to improve by 30 seconds across their top 5 than it is for an average team to do the same. I also adjust for school size, as I think it is easier for a large school to make big gains (by discovering new runners or developing from a deeper talent pool) than it is for a program with a much smaller population from which to draw. To reflect this, I divide the yearly improvement numbers by the team's place in their respective state meet, which reflects the success of the program in competing against schools of similar size. Teams that do not make the state meet are given a place of 24, since it would be impossible to accurately rank them without that direct head-to-head competition. In the final Improvement Rating, I actually use the square root of their state meet place, which creates a better range of data to highlight differences between programs. Again, yearly place at the state meet is averaged over 3-year and 5-year time periods.
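To make the arithmetic above concrete, here is a minimal sketch of the calculation in Python. The function names, sample times, and the exact averaging order (averaging the yearly improvements and the yearly places separately, then dividing by the square root of the average place) are my own reading of the description, not the author's actual spreadsheet; all times are top-5 totals in seconds.

```python
import math

# Illustrative sketch only -- names, sample numbers, and the exact
# averaging order are assumptions based on the description above.

MISSED_STATE_PLACE = 24  # place assigned to teams that miss the state meet

def yearly_improvement(returning_top5, actual_top5):
    """Projected top-5 total (returning runners' times, in seconds)
    minus the actual end-of-season top-5 total.
    Positive means the team ran faster than its projection."""
    return sum(returning_top5) - sum(actual_top5)

def improvement_rating(improvements, places):
    """Average improvement over a 3- or 5-year window, divided by the
    square root of the average state-meet place over the same window."""
    avg_improvement = sum(improvements) / len(improvements)
    avg_place = sum(places) / len(places)
    return avg_improvement / math.sqrt(avg_place)

# Hypothetical team: missed state one year (place 24), then placed 9th and 4th
imps = [
    yearly_improvement([1020, 1035, 1042, 1050, 1061],
                       [1000, 1012, 1025, 1033, 1040]),  # improved 98 sec
    yearly_improvement([990, 1005, 1018, 1026, 1037],
                       [975, 992, 1001, 1015, 1024]),    # improved 69 sec
    yearly_improvement([970, 984, 996, 1008, 1019],
                       [962, 975, 988, 999, 1010]),      # improved 43 sec
]
print(round(improvement_rating(imps, [MISSED_STATE_PLACE, 9, 4]), 1))  # prints 19.9
```

Note how the square-root scaling behaves: a 70-second average improvement by a team averaging 12th at state scores about 20, while the same raw improvement by a team that never made state (place 24) would score about 14, reflecting the premise that gains come easier to weaker programs.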
When put together, this creates numbers that range from zero into the 30s, with higher numbers reflecting better year-to-year improvement. Some other notes about Improvement Ratings:
- Since different teams compete on courses of varying difficulty, the stat is not designed to compare one program to another. Instead, it compares a program's outcome in a season to that same program's projected strength coming into the season. However, if a team changes its schedule significantly from year to year, instead of racing mostly the same courses, its Improvement Rating could be less accurate.
- Additions of new top 5 runners (through transfers, new freshmen, or simply big improvements from an individual) and key losses (injuries, illness, transfers, etc.) will both be reflected in the averages for the team. Therefore, the best programs in Improvement Ratings will be those that keep athletes healthy and improving over their careers, and those that attract athletes who want to join the program. That seems right in line with the concept of "best at improving from season to season."
- It would be impossibly time-consuming to do this for all programs in the state, so I have only included teams that were considered for each year's top 30 over the last 3 years of the countdown. I am sure there are other programs out there that have demonstrated significant improvement over the last 3 or 5 years. Feel free to recognize them in the comments or shoot us an email!
- The 3-year Improvement Rating is a better predictor of a program's likely progress this season, while the 5-year Improvement Rating indicates long-term success. After all, if a school can perform at a high level for 5 years, we know it wasn't just a single class of talented individuals driving the improvement!
- Our performance data has become deeper and more complete over the last 3 years, but before that time it was inconsistent. As a result, some teams whose races are missing from the database might not show up in the 5-year Improvement Ratings. This will improve as we keep adding new years with more robust data.
Over the following slides, you can see the teams with the top 25 Improvement Ratings in California, using 5K and 3 mile races over both 3-year and 5-year time periods. I learn a lot about the Golden State's best programs while compiling and analyzing this data, and I hope you find it informative and interesting, too!