Taking a quiet little victory lap with FEI this year.
I don't engineer FEI to make projections; I engineer it to most accurately represent past results. And it does quite well in that regard relative to other computer systems.
But I also produce game projections based on FEI, which I use mostly as a calibration exercise. My hypothesis is that a system that is well-tuned to past results should be well-tuned to the "true" strength of teams, and therefore should do reasonably well forecasting future outcomes.
Well, from Week 14 (when I had phased out all preseason projection data) through the end of the playoff, FEI game projections picked 88 of 123 game winners outright (71.5%), while the market (per closing lines) picked only 80 winners correctly (65.0%) over the same stretch. FEI projected winners went 11-0 in the playoff (closing-line favorites went 10-1), with a mean absolute error of only 6.7 points per game (closing-line error was 8.9). This also compares very favorably with other computer systems, most of which (like SP+) are explicitly built to forecast game outcomes: https://public.tableau.com/app/profile/andrew.percival/viz/CFBPicker/Standings
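For anyone curious how those headline numbers are computed, here's a minimal sketch. The pick counts are from the post; the margin lists in the MAE function are made-up placeholders, not the actual FEI or closing-line data.

```python
# Pick rates reported above: FEI went 88-for-123, closing lines 80-for-123.
fei_correct, market_correct, games = 88, 80, 123

def pick_rate(correct, total):
    """Share of game winners picked outright, as a percentage."""
    return round(100 * correct / total, 1)

print(pick_rate(fei_correct, games))     # 71.5
print(pick_rate(market_correct, games))  # 65.0

def mean_abs_error(projected_margins, actual_margins):
    """Mean absolute error of projected point margins vs. final margins."""
    return sum(abs(p - a) for p, a in zip(projected_margins, actual_margins)) / len(projected_margins)

# Toy example only (three hypothetical games, not real results):
print(mean_abs_error([7.0, -3.5, 10.0], [3, -7, 14]))
```

The MAE comparison (6.7 for FEI vs. 8.9 for closing lines) is just this calculation run over every projected margin and final score margin in the Week 14-onward sample.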