
Predicting Snowboard Results with Machine Learning

Published: Dec 23, 2023

Introduction

Welcome to the inaugural issue of Wyldata Insights, where our goal is to share how we're connecting the worlds of action sports and big data.

In this post, we’re going to dive into some of the efforts we have put into developing a predictive model with the objective of forecasting the outcome of snowboarding events.

Snowboarding, unlike traditional sports like football or basketball, lives on creativity and personal style. Here, athletes aren't just scoring points; they're impressing a panel of judges. While there are guidelines, the final results depend on the judges' expertise and personal views. This unique aspect of snowboarding makes it a fascinating area to explore the potential of predictive algorithms. Keep reading to learn how we have been approaching this topic, and to see some first results from recent events.

Motivation

Many action sports, like freestyle snowboarding and skiing, have undergone a dramatic shift over the last ten years. Once perceived as lifestyle sports synonymous with rebellion and a "sex, drugs, and rock n' roll" culture, they have matured into professional sports featured in multiple Olympic Games. Despite this evolution, the way data from these sports is managed hasn't kept pace with the athletes' and federations' increased professionalism.

At Wyldata, we're committed to bringing clarity to this chaos. Our aim is to introduce a methodical, data-driven approach to action sports. This could revolutionize how athletes and fans engage with their favorite sports, revealing new possibilities and enhancing their overall experience.

One of our standout initiatives is focused on predictive modeling. The concept is straightforward: For each competition, we consider the participating riders and aim to generate a realistic forecast of the final standings. This practical application of our data showcases how technology can be leveraged to develop innovative products. These products have the potential to transform how events are experienced, offering new perspectives and deeper engagement.

Predictions without Machine Learning

When we began experimenting with prediction models, we didn't immediately dive into the complexities of Machine Learning. Instead, we developed our own algorithm to estimate an athlete's ranking in a competition based on their individual past career performance.

What influences a result?

Our goal is to predict where an athlete will place at the end of a competition. Although a rank is simply a number, numerous factors influence it. When considering an athlete's potential performance in a competition, several questions come to mind:

  • What is the skill level of the other competitors?

  • Is the athlete currently performing well?

  • What is the athlete's level of experience?

  • Has the athlete suffered any recent injuries?

  • What are the weather conditions like during the athlete's run?

  • What is the athlete's current mental state?

  • Is the athlete's equipment functioning properly and in top condition?

  • What does the course setup look like? (height, distance, difficulty of jumps, etc.)

This list of influencing factors could go on, and it's clear that there are many elements to consider when predicting an athlete's performance. Some of these factors might be challenging to evaluate from an external viewpoint. In our initial approach to predictive analysis, we chose to concentrate on two primary aspects: The strength of the competing athletes and the current form of the individual athlete.

First Algorithm Explained

Our basic algorithm begins by examining all of the athlete’s previous results in the same discipline as the upcoming event. The key metrics we focus on for each of these past results include:

  • The date of the competition

  • The average global ranking points of the competing athletes

  • The athlete’s final rank in the competition

  • The total number of competitors in the event

After gathering data about the athlete’s past performances, our algorithm calculates the weighted average percentile finish for the athlete across these events. Let's break down what this actually means.

We use percentile finishes instead of ranks. Why? Because a 5th place in a competition with 10 riders (putting the athlete in the 50th percentile) is not the same as a 5th place out of 100 riders (which is the 5th percentile). This approach allows for a more accurate comparison across events of varying sizes. Additionally, we adjust the impact of each result on the overall average based on two key factors:

  1. Time Passed Since the Result: A recent performance is given more weight than one from 2 years ago. This helps us account for the athlete’s current form.

  2. Comparison of Competition Levels: We compare the competitiveness of past events with the current event. This is assessed using the average global ranking points of the competitors. Events with a similar level of competition, indicated by comparable average global ranking points, are given greater importance in our calculation.

After completing these calculations, we arrive at the weighted average percentile, which essentially serves as our prediction. To determine a predicted rank, we simply multiply this percentile by the number of riders participating in the event. This gives us a forecasted ranking position for our athlete in the upcoming competition.
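To make the weighting concrete, here is a minimal sketch of the weighted average percentile calculation. The post doesn't specify the exact weight formulas, so the exponential recency decay and the reciprocal ranking-points distance below are illustrative assumptions, not our production code:

```python
from datetime import date

def recency_weight(event_date: date, today: date, half_life_days: float = 365.0) -> float:
    """More recent results count more (assumed exponential decay)."""
    age_days = (today - event_date).days
    return 0.5 ** (age_days / half_life_days)

def level_weight(event_avg_points: float, upcoming_avg_points: float) -> float:
    """Events with a field strength similar to the upcoming one count more."""
    return 1.0 / (1.0 + abs(event_avg_points - upcoming_avg_points))

def predict_percentile(past_results, upcoming_avg_points: float, today: date) -> float:
    """past_results: list of (event_date, avg_ranking_points, rank, field_size)."""
    num, den = 0.0, 0.0
    for event_date, avg_points, rank, field_size in past_results:
        percentile = rank / field_size  # 5th of 10 -> 0.50; 5th of 100 -> 0.05
        w = recency_weight(event_date, today) * level_weight(avg_points, upcoming_avg_points)
        num += w * percentile
        den += w
    return num / den

# A rider with two past results: a recent 5th of 10 and a year-old 5th of 100.
past = [(date(2023, 11, 1), 500.0, 5, 10),
        (date(2022, 11, 1), 500.0, 5, 100)]
p = predict_percentile(past, upcoming_avg_points=500.0, today=date(2023, 12, 1))
```

Because the recent result carries more weight, the prediction lands closer to the 50th percentile than to the 5th.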

This method is applied to every participant in the competition, leaving us with a predicted percentile for each one. However, when creating a complete ranking, we might encounter a situation where two predicted percentiles are extremely close, like 0.154 and 0.155. In a 20-rider competition, both of these percentiles would round to the same position, 3rd place. To address this, we sort the athletes by their predicted percentiles and assign ranks based on their order in this sorted list. This process might slightly adjust the predictions, but it's crucial: if rider A's predicted percentile is marginally lower than rider B's, our model considers rider A likely to outperform rider B, so rider A receives the better ranking.
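That sorting step can be sketched in a few lines; `percentiles_to_ranks` is a hypothetical helper name, not from our codebase:

```python
def percentiles_to_ranks(predictions: dict[str, float]) -> dict[str, int]:
    """Sort riders by predicted percentile (lowest = best) and assign ranks 1..n."""
    ordered = sorted(predictions, key=predictions.get)
    return {rider: position + 1 for position, rider in enumerate(ordered)}

# Riders A and B are nearly tied (both would round to 3rd in a 20-rider field),
# but sorting keeps them distinct and preserves the model's preference.
ranks = percentiles_to_ranks({"A": 0.154, "B": 0.155, "C": 0.04})
# C gets rank 1, A rank 2, B rank 3
```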

Results

While all of this might sound complex at first, the mathematics behind it is actually quite straightforward, similar to what you'd encounter in high school. Surprisingly, despite the simplicity of this approach, the results have been quite impressive. Let’s take a look at some examples to get an idea of the prediction accuracy:

Big Air Chur 2023 - Men (Top 10 out of 55 riders)

Rider                  Prediction   Real Result   Error %
Kira Kimura                     1             2    -1.82%
Valentino Guseli                2             7    -9.09%
Nicolas Laframboise             3             4    -1.82%
Leon Guetl                      4            13   -16.36%
Yuto Miyamura                   5            23   -32.73%
Hiroto Ogiwara                  6             1    +9.09%
Taiga Hasegawa                  7            33   -47.27%
Romain Allemand                 8            36   -50.91%
Ryoma Kimata                    9             8    +1.82%
Hiroaki Kunitake               10            10     0.00%

In this table, we're examining the accuracy of our predictions for the top 10 finishers in the Big Air Chur 2023. The comparison between our predictions and the actual results reveals some discrepancies, and we managed to accurately predict the rank of only one athlete within the top 10. However, it's important to note that in a field of 55 riders, our model successfully identified 6 of the top 10 finishers, which is a notable achievement. Another metric to highlight is the prediction accuracy of the top 3 finishers: the average deviation from the actual results for these positions is just 4.24%. Considering the simplicity of our algorithm, this level of accuracy is surprising.

Big Air Chur 2023 - Women (Top 10 out of 19 riders)

Rider                  Prediction   Real Result   Error %
Reira Iwabuchi                  1             2    -5.26%
Mia Brookes                     2             3    -5.26%
Miyabi Onitsuka                 3             6   -15.79%
Marilu Poluzzi                  4            17   -68.42%
Emeraude Maheux                 5            16   -57.89%
Fanny Chiesa                    6            11   -26.32%
Kokomo Murase                   7             1   +31.58%
Jasmine Baird                   8             7    +5.26%
Mari Fukada                     9             5   +21.05%
Laurie Blouin                  10             4   +31.58%

Moving on to the women's competition at the Big Air Chur, our predictive model shows a decrease in accuracy compared to the men's event. It's important to consider, though, that the women's field had fewer competitors. This smaller pool can result in a higher relative error, making the predictions appear less accurate than they might be in a larger field.

Despite this, we see some significant discrepancies in our predictions, which need to be acknowledged regardless of the number of participants. Still, there are positive aspects to highlight. Notably, our model successfully identified 8 of the top 10 finishers and 2 of the top 3 in the women’s competition.

 

Predicting with Machine Learning

While our initial algorithm has shown some promising results, it also has some serious limitations:

  • The model is not trainable

  • We cannot get any feedback about the statistical significance of the factors in play

  • It’s difficult to add additional features to the model

  • The weight multipliers are calculated using somewhat arbitrary numbers

  • Predictions only look at the individual athlete’s past career instead of trying to identify common patterns across many athletes.

Acknowledging these constraints, we've turned to machine learning to enhance our predictions. Machine learning, in essence, involves algorithms learning from data to make predictions or decisions. It's similar to how humans learn, improving their skills through experience. In our application, we train models with datasets of past snowboarding competition results, identifying patterns and relationships to predict future outcomes. Our approach uses supervised learning, where the training data includes specific outcomes (like an athlete's final rank) to teach the model how to predict these results for new, upcoming events. This shift to machine learning promises to address the shortcomings of our initial algorithm, offering a more dynamic and adaptable prediction tool.

 

Defining Features

One of the most important steps in the process of preparing machine learning data is the definition of dataset features. Imagine that the initial dataset looks something like this:

Competition     Athlete     Rank
Competition 1   Athlete 1      1
Competition 1   Athlete 2      2
Competition 1   Athlete 3      3
Competition 2   Athlete 4      1
Competition 2   Athlete 5      2
Competition 2   Athlete 6      3

Here, each row represents an athlete's performance in a competition. However, with only competition name, athlete, and rank as data points, it becomes challenging for our model to identify useful patterns for future predictions. This is where a solid understanding of the sport proves useful. We enrich our dataset with additional, relevant features that might serve as indicators of an athlete's performance:

Competition     Athlete     Ranking Points   Average Rank   Last Injury   Rank
Competition 1   Athlete 1   …                …              …                1
Competition 1   Athlete 2   …                …              …                2
Competition 1   Athlete 3   …                …              …                3
Competition 2   Athlete 4   …                …              …                1
Competition 2   Athlete 5   …                …              …                2
Competition 2   Athlete 6   …                …              …                3

Our real prediction model includes many more features than the example above, most of which aim to provide insights into the athlete’s current form and key characteristics of the competition.
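As an illustration of this kind of feature enrichment, the sketch below derives one hypothetical feature, an athlete's average rank, from raw result rows. In a real pipeline you would compute it only from results *before* the event in question to avoid leaking the outcome into the feature:

```python
# Toy rows mirroring the table above (values invented for illustration).
results = [
    {"competition": "Competition 1", "athlete": "Athlete 1", "rank": 1},
    {"competition": "Competition 2", "athlete": "Athlete 1", "rank": 3},
    {"competition": "Competition 2", "athlete": "Athlete 2", "rank": 2},
]

def with_average_rank(rows):
    """Add each athlete's average rank across their rows as a new feature column."""
    ranks_by_athlete = {}
    for row in rows:
        ranks_by_athlete.setdefault(row["athlete"], []).append(row["rank"])
    return [dict(row, avg_rank=sum(ranks_by_athlete[row["athlete"]])
                 / len(ranks_by_athlete[row["athlete"]]))
            for row in rows]

enriched = with_average_rank(results)
# Athlete 1 finished 1st and 3rd, so their avg_rank feature is 2.0.
```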

However, it's important to acknowledge that not every factor influencing a snowboarder’s performance can be captured by a feature. Unpredictable elements, like poor sleep before the competition or a sudden change in weather, can't always be quantified. Our goal is to maximize the value of the available data, understanding that some variables remain beyond our model’s scope.

Model choice

There are many different machine learning models to choose from, each differing in how its internal algorithm works. Selecting the right model is crucial, as it significantly influences prediction accuracy. For our specific challenge, we've identified three predictive methods that seem suitable:

  1. Pointwise method: This approach evaluates each athlete independently, predicting their specific finishing position in a competition. It's an evolution of our initial method, enhanced by a trainable model.
  2. Pairwise method: This method compares two athletes at a time and decides which one is more likely to beat the other. This process is repeated across all participants to establish a complete ranking.
  3. Listwise method: This method assesses the entire field of athletes at once and predicts the complete ranking in a single step.

Currently, our focus is on the Pointwise method. We are experimenting with three different algorithms within this category: Linear Regression, Random Forest, and Extra Trees.
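With scikit-learn, a pointwise setup using these three algorithms might look like the sketch below. The feature columns and toy data are invented for illustration; each model is trained to map an athlete's features directly to a finishing rank:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

# Hypothetical per-athlete features: [ranking_points, avg_rank, days_since_injury]
X = np.array([[900, 2.0, 400], [610, 5.5, 120], [350, 11.0, 60],
              [820, 3.1, 300], [150, 18.0, 30], [480, 8.2, 200]])
y = np.array([1, 5, 12, 3, 20, 9])  # final ranks: the pointwise target

models = {
    "LR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "XT": ExtraTreesRegressor(n_estimators=100, random_state=0),
}
predictions = {}
for name, model in models.items():
    model.fit(X, y)                       # each model learns rank from features
    predictions[name] = model.predict(X)  # one rank estimate per athlete
```

Sorting the predicted values then yields a full field ranking, just as with the weighted average percentiles.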

Results

To determine the most effective algorithm, we've been running tests with all four – Linear Regression (LR), Random Forest (RF), Extra Trees (XT), and our original non-machine-learning algorithm, Weighted Average (WA) – in parallel.

Our testing grounds have been the FIS World Cups in the 2023/2024 snowboard season. So far, no single model has emerged as the clear leader; the accuracy of each varies significantly from event to event.

We've been tweaking the models by including or excluding certain features and adjusting the volume of data in the training set. Each modification affects each algorithm differently.
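The error percentages reported in the result tables are consistent with the rank difference divided by the field size; a minimal sketch, assuming that definition:

```python
def error_pct(predicted_rank: int, real_rank: int, field_size: int) -> float:
    """Prediction error as a percentage of the field size."""
    return abs(predicted_rank - real_rank) / field_size * 100

# Example from the Men's Big Air Beijing table: predicted 4th, finished 10th, 43 riders.
e = error_pct(4, 10, 43)  # ≈ 13.95
```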

As we continue to optimize and experiment, we’ll share some promising predictions that have emerged from our trials.

Men’s Big Air Beijing 2023 (Top 10 out of 43 competitors) - Model: Extra Trees

Athlete                Predicted   Real Result   Error
Kira Kimura                    1             3    4.65%
Ryoma Kimata                   2             2    0.00%
Yiming Su                      3             1    4.65%
Hiroto Ogiwara                 4            10   13.95%
Hiroaki Kunitake               5            12   16.28%
Valentino Guseli               6            16   23.26%
Nicolas Laframboise            7            21   32.56%
Ian Matteoli                   8             8    0.00%
Taiga Hasegawa                 9             4   11.63%
Chaeun Lee                    10             6    9.30%

 

Women’s Halfpipe Copper 2023 (Top 10 out of 22 competitors) - Model: Linear Regression

Athlete                Predicted   Real Result   Error
Mitsuki Ono                    1             2    4.55%
Gaon Choi                      2             1    4.55%
Maddie Mastro                  3             3    0.00%
Berenice Wicki                 4             7   13.64%
Queralt Castellet              5             5    0.00%
Brooke Dhondt                  6             6    0.00%
Bea Kim                        7             4   13.64%
Emily Arthur                   8             9    4.55%
Kinsley White                  9            14   22.73%
Isabelle Loetscher            10             8    9.09%

 

Women’s Big Air Chur 2023 (Top 10 out of 18 competitors) - Model: Random Forest

Athlete                   Predicted   Real Result   Error
Kokomo Murase                     1             1    0.00%
Mia Brookes                       2             3    5.26%
Miyabi Onitsuka                   3             6   15.79%
Laurie Blouin                     5             4    5.26%
Reira Iwabuchi                    6             2   21.05%
Mari Fukada                       7             5   10.53%
Jasmine Baird                     8             7    5.26%
Lea Jugovac                       9             9    0.00%
Eveliina Taka                    10            10    0.00%
Fanny Piantanida Chiesa          11            11    0.00%

 

Men’s Halfpipe Secret Garden 2023 (Top 10 out of 27 competitors) - Model: Linear Regression

Athlete                Predicted   Real Result   Error
Scotty James                   1             1    0.00%
Ruka Hirano                    2             2    0.00%
Chaeun Lee                     3             3    0.00%
Jan Scherrer                   4            18   51.85%
Shuichiro Shigeno              5             5    0.00%
Chase Josey                    6             4    7.41%
Kaishu Hirano                  7            10   11.11%
Chase Blackwell                8            11   11.11%
Ayumu Hirano                   9             9    0.00%
Liam Gill                     10            13   11.11%

 

Next Steps

As we continue trying to predict the unpredictable world of action sports, our takeaways so far have been encouraging. We still have a lot of work ahead of us, consisting mostly of trial and error. It's clear that the perfect prediction model does not exist, but going forward, our natural next steps will be to experiment with further algorithms and to find the sweet spot in terms of the training data we feed into our models.

We're excited to see what's ahead and how we can shape this innovative addition to action sports into tangible products that unlock new experiences for fans and athletes.

If you would like to leave any comments about this post, you’re welcome to drop a line to philipp@wyldata.com. Likewise, if you are interested in working with us, we are happy to discuss how we can help your project.