Real-Time Object Detection and Classification in Sports Using AI and Computer Vision

28 Sep 2023 Roman Babchenko

Imagine being able to instantly recognize the positions of players and referees, grouped by their respective teams, while watching a live soccer match (known as football in much of the world). This is no longer just a pipe dream but a reality, thanks to developments in AI and computer vision. Using a soccer match as our running example, in this blog article we will dig into the intriguing world of real-time object detection and classification with AI. We'll examine the problem-solving strategy, hardware requirements, technology stack, and the sequential processing pipeline underlying this system. Additionally, we'll talk about potential upgrades that could make the system even more precise and efficient.

Task

Imagine a thrilling soccer game being streamed in real time at 60 frames per second, captured by a camera sitting in the grandstand, keeping a close eye on the enormous soccer field. What if we told you that the secret to a fascinating project lies in this raw, unedited video input?

The objective is nothing short of ambitious: to convert this video into a continuous stream of data, calculating the coordinates of players and officials, each neatly grouped by the team they represent. We're about to delve into how this works, turning the chaos of a live soccer game into a carefully organized symphony of data.

Solution Method

To enable real-time object detection, the project uses the well-known YOLO (You Only Look Once) family of models. With YOLO, objects in the video stream can be identified and classified quickly and accurately. The project integrates YOLO with essential computer vision tooling such as OpenCV and makes use of CUDA to fully realize the hardware's potential and guarantee efficient processing. The success of the project rests on this fusion of technologies.

The solution is methodically broken down into two main phases, each of which is essential to the process:

  1. Person Detection: The first stage runs the YOLO algorithm with a pre-trained model. Its job is to examine each frame and locate the people within it, so that every player and referee visible in the video feed can be identified with high precision. (A minimal sketch of this step follows the list.)
  2. Identifying Team Affiliation: Once the objects, primarily people, have been detected, the project uses computer vision techniques to examine each object's color properties. By analyzing the colors present in each detection, the system assigns the objects to their respective teams. This step is crucial for distinguishing the players of the two teams, and the referees, by the colors they wear.
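To make the first stage concrete, here is a minimal sketch of person detection with a pre-trained YOLOv8 model. The model file, confidence threshold, and input image are illustrative assumptions rather than the project's exact code:

```python
# Minimal sketch: detect people in a single frame with a pre-trained YOLOv8 model.
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")           # small COCO-pretrained model (illustrative choice)

frame = cv2.imread("frame.jpg")      # placeholder input frame
results = model(frame, conf=0.4)[0]  # one image in, one Results object out

people = []
for box in results.boxes:
    if int(box.cls) == 0:            # COCO class 0 is "person"
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        people.append((x1, y1, x2, y2))

print(f"Found {len(people)} people in the frame")
```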

It is worth emphasizing that all of these intricate procedures take place in real time: the system keeps pace with the video frame rate, so the analysis never falls behind the live game. Capable hardware and specialized libraries make this possible.

The project is, at its core, a demonstration of how cutting-edge AI, computer vision, and modern hardware converge. It shows how these technologies combine to deliver real-time insight and analysis in the fast-moving setting of a sporting event, where prompt and precise object detection is essential.

Hardware Specification

The project ran on a standard desktop PC with the following hardware:

  • CPU: Intel(R) Core(TM) i7-3930K @ 3.20GHz
  • GPU: NVIDIA RTX 3080 Ti, 12 GB
  • Motherboard: X79-DELUXE
  • RAM: 24 GB DDR3
  • SSD: Samsung 860 EVO, 1 TB

The system ran Arch Linux x86_64 with kernel 6.5.2-arch1-1, using CUDA for accelerated processing.

Technology Stack

The project was implemented in Python, leveraging the following technologies and libraries:

  • Ultralytics YOLOv8: A state-of-the-art model for object detection.
  • OpenCV: An open-source computer vision library for image manipulation and analysis.
  • CUDA: A parallel computing platform that accelerates AI model predictions and real-time image processing.
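Before running detection, it is worth confirming that CUDA is actually visible to the stack. A minimal check, assuming a PyTorch-backed Ultralytics installation (the model file is illustrative):

```python
# Sanity-check GPU availability, then run inference explicitly on the first CUDA device.
import torch
from ultralytics import YOLO

assert torch.cuda.is_available(), "No CUDA device found; inference would fall back to CPU"

model = YOLO("yolov8n.pt")
results = model("frame.jpg", device=0)  # device=0 selects the first CUDA GPU
```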

The source code for this project is available on GitHub.

Processing Pipeline

The solution takes the form of a processing pipeline with the following key steps:

  1. Input: The video stream is prepared, often sourced from high-quality recordings of matches.
  2. Video Capture (Frame Extraction): Frames are extracted from the video and passed to the AI model for object recognition.
  3. Object Detection from Frame: YOLO processes each frame, producing a dataset of recognized objects.
  4. Object Cropping: Recognized objects are isolated from the frame, focusing on people.
  5. Color Processing: Parallel processing is employed to speed up object handling, in particular identifying each object's dominant color while filtering out irrelevant ones, such as the soccer field's green. The colors.py module within the main script contains the detect_color(img, field_color) function, which is called individually for each object. The function returns only the player's color, because the green is filtered out by the corresponding HSV filter defined in the main module detect.py. (A sketch of such a helper appears after this list.)
  6. Result Dataset Grouping: In this phase, objects are grouped by their colors, with each group defined by a customized filter. These filters are adapted per video stream to accommodate potential color distortions and variations. The colors.py module contains the get_ranged_groups(data, groups) function, responsible for sorting player colors into groups; it uses the is_color_in_range(..) function from detect.py to determine whether a color falls within a predefined filter range. We input all the objects' colors and obtain, as output, the groups we use for object tagging. (See the grouping sketch after this list.)
  7. Output: Finally, the results from the dataset are presented on screen and, in text form, handed to a separate thread or written to a file. All the necessary parameters are available to draw rectangles on the original frame according to team colors and to label them. In the main thread, the processed frame with all its annotations is displayed on screen, while the textual data is written to the STDOUT stream. (The end-to-end skeleton after this list ties these steps together.)
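The post names detect_color(img, field_color) but does not show its body, so the following is only a hedged reconstruction: mask out the field's green in HSV space, then take the mean color of the remaining pixels. The HSV bounds and the mean-color heuristic are assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of a detect_color-style helper (not the project's code).
import cv2
import numpy as np

def detect_color(img, field_color=((35, 40, 40), (85, 255, 255))):
    """Return the dominant non-field BGR color of a cropped person image."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower, upper = np.array(field_color[0]), np.array(field_color[1])
    field_mask = cv2.inRange(hsv, lower, upper)   # pixels that look like grass
    player_mask = cv2.bitwise_not(field_mask)     # everything that is not field
    if cv2.countNonZero(player_mask) == 0:
        return None                               # the crop was all field
    return cv2.mean(img, mask=player_mask)[:3]    # mean BGR over non-field pixels
```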
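Likewise, a plausible sketch of the grouping step: is_color_in_range(..) checks a color against a per-team window, and get_ranged_groups(data, groups) buckets each object under the first window its color matches. The range format and the example team windows are illustrative assumptions:

```python
# Hypothetical sketch of the grouping helpers (not the project's code).
def is_color_in_range(color, lower, upper):
    """True if every channel of `color` lies inside [lower, upper]."""
    return all(lo <= c <= hi for c, lo, hi in zip(color, lower, upper))

def get_ranged_groups(data, groups):
    """data: {object_id: color}; groups: {team_name: (lower, upper)}."""
    result = {name: [] for name in groups}
    for obj_id, color in data.items():
        if color is None:
            continue                      # crop was entirely field; skip it
        for name, (lower, upper) in groups.items():
            if is_color_in_range(color, lower, upper):
                result[name].append(obj_id)
                break                     # first matching team wins
    return result

# Example: two team windows in BGR, tuned per video stream
groups = {
    "team_red":  ((0, 0, 120), (80, 80, 255)),
    "team_blue": ((120, 0, 0), (255, 80, 80)),
}
```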
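Finally, an end-to-end skeleton tying the seven steps together. It reuses detect_color, get_ranged_groups, and groups from the sketches above; everything else is standard OpenCV and Ultralytics usage, with illustrative thresholds and file names:

```python
# End-to-end pipeline skeleton (a sketch, not the project's exact code).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("match.mp4")               # steps 1-2: input and frame extraction
box_color = {"team_red": (0, 0, 255),             # BGR draw colors per team (illustrative)
             "team_blue": (255, 0, 0)}

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    boxes = []
    for box in model(frame, conf=0.4)[0].boxes:   # step 3: detection
        if int(box.cls) == 0:                     # keep only people
            boxes.append(tuple(map(int, box.xyxy[0])))

    colors = {}
    for i, (x1, y1, x2, y2) in enumerate(boxes):  # steps 4-5: crop and find colors
        crop = frame[y1:y2, x1:x2]
        if crop.size:
            colors[i] = detect_color(crop)

    teams = get_ranged_groups(colors, groups)     # step 6: group by team

    for name, ids in teams.items():               # step 7: annotate and report
        for i in ids:
            x1, y1, x2, y2 = boxes[i]
            cv2.rectangle(frame, (x1, y1), (x2, y2), box_color[name], 2)
            cv2.putText(frame, name, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, box_color[name], 1)
        print(name, [boxes[i] for i in ids])      # textual output to STDOUT

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```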

And here is a video demonstrating the results of our research:

Conclusion

In the future, custom models could be trained on specialized datasets. They could be improved further by incorporating features such as field markings to determine exactly where players are. And when team colors resemble the field too closely, more advanced color techniques can resolve the ambiguity.

This approach isn't just for sports; it can be useful in other areas too. In healthcare, it could help doctors find problems quickly by analyzing images in real time. It could make security systems safer by recognizing objects instantly, help self-driving cars handle tricky terrain, or protect endangered animals. The possibilities are endless. So, as you enjoy all this cool technology, think about how it could change the world in the future.

Keep an eye out for a future where computers and smart thinking keep changing the way we see and understand things.
