AI Model to Improve Sports Storytelling

The New York Times and wrnch, an AI computer-vision company and member of the NVIDIA Inception program, have developed an AI model to improve sports storytelling. Their system builds a 3-D model of an athlete in motion, capturing details of an event that the human eye can miss and giving journalists richer material to work with.

“Traditional motion capture techniques require an athlete to wear physical markers. But this isn’t possible during live sporting events. Instead, we built a solution that uses our photographers’ cameras, machine learning, and computer vision to capture this data as an event unfolds,” the researchers stated in their article.
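One standard building block of such markerless capture is triangulation: once 2-D joint positions have been detected in images from two or more calibrated cameras, each joint's 3-D position can be recovered. The sketch below shows linear (DLT) triangulation with NumPy on two toy cameras; it is an illustration of the general technique, not the researchers' actual pipeline, and all matrices and points here are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: 2-D pixel coordinates."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3-D point X; stack them and solve A X = 0 via SVD.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null-space vector = homogeneous point
    return X[:3] / X[3]              # dehomogenise

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])  # a made-up 3-D joint position

# Project the point into each camera to simulate 2-D detections.
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))    # noise-free case recovers the point
```

With real detections the 2-D points are noisy, so production systems refine this linear estimate with nonlinear optimisation over many cameras and frames.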

This research has thus far focused on gymnastics, one of the most popular Olympic sports. Training data largely determines how accurate a model's output will be: the more data available for training, the better the model. The researchers developed their model by conducting field tests at practice sessions of the Rutgers University women's gymnastics team, fine-tuning it with multiple videos and photos of the athletes at work. The team used multiple NVIDIA GPUs in the cloud for training, and TensorRT, NVIDIA's inference accelerator, for inference.

Source: NVIDIA

Using the same camera-calibration method, the magenta lines on the tennis court above are drawn based on estimated camera position and orientation. Source: The New York Times R&D group.
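Overlays like those court lines follow from the pinhole camera model: with estimated intrinsics and pose, any known 3-D court point can be projected into the photograph. The sketch below uses a hypothetical camera (the focal length, pose, and court coordinates are illustrative assumptions, not the Times' actual calibration) to project a tennis baseline into image space.

```python
import numpy as np

# Hypothetical camera: a 1080p sensor with ~1200 px focal length,
# mounted 3 m above the court plane and 20 m back from the baseline.
K = np.array([[1200.0,    0.0, 960.0],
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                      # looking straight down the world z-axis
t = np.array([0.0, 3.0, 20.0])     # camera translation (metres)

def project(points_3d):
    """Pinhole projection x = K (R X + t), returning pixel coordinates."""
    cam = points_3d @ R.T + t      # world frame -> camera frame
    px = cam @ K.T                 # camera frame -> homogeneous pixels
    return px[:, :2] / px[:, 2:3]  # perspective divide

# Endpoints of a tennis baseline: the singles court is 8.23 m wide,
# so the corners sit 4.115 m either side of the centre line (y = 0 plane).
baseline = np.array([[-4.115, 0.0, 0.0],
                     [ 4.115, 0.0, 0.0]])
pixels = project(baseline)
print(pixels)  # two (u, v) image points; a line between them overlays the baseline
```

In practice the calibration runs in the other direction first: known court landmarks detected in the image are used to estimate `K`, `R`, and `t`, after which projections like this one render the magenta lines.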
