Rocket League Goal Detection


This project came from the idea of how cool it would be to have an LED strip behind my monitors flash when my team scored a goal in Rocket League. On the surface, with a little help from a tool called Cheat Engine, this seemed like a fairly straightforward task. After trying many different methods of finding the memory address that holds your team's score, it quickly proved to be much harder than I had originally hoped. Limitations with Python, Windows ASLR, and anti-cheat engines made this infeasible for me. Frustrated that games don't have built-in APIs for such things, I began to research a non-intrusive method for obtaining data from Rocket League that could also be applied to other games.

Meet OpenCV.


Utilizing the Python Imaging Library (PIL) and OpenCV, we can extract information from the game's display. Fortunately for us, there are some characteristics of Rocket League's scoreboard that make our job easier. First, our team's score will always be on the left side of the board. Second, we know the score will always be white lettering on some colored background. In theory, all we have to do is capture the small area containing our score and calculate the difference from the last frame to detect a goal. Sounds simple, right?


Kill The Noise

Using PIL we can capture an area of our screen and convert it to a NumPy array. I chose to store my x, y coordinates as variables in a config section.

    from PIL import ImageGrab
    import numpy as np

    img = ImageGrab.grab(bbox=(x, y, x2, y2))
    frame = np.array(img)

If you look closely, the scoreboard is slightly transparent and there is a ghosting effect on the numerical value. This can make checking for frame differences difficult. We can eliminate most of this noise by utilizing OpenCV thresholds and dilation. First we convert our frame to grayscale, then to binary black and white based on each pixel's intensity. We then dilate our frame to get a more defined image.

    import cv2

    grayA = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh_frame1 = cv2.threshold(grayA, 170, 255, cv2.THRESH_BINARY)[1]
    thresh_frame1 = cv2.dilate(thresh_frame1, None, iterations=2)

Original scoreboard on the left, and what our program now sees on the right.



Now that we've filtered out the noise in our frames, we need to detect a change in our score. To do this we can utilize a structural similarity index (SSIM) function. This takes our old frame and new frame as input and outputs a similarity score roughly between 0 and 1, with 1 being a perfect match.

    from skimage.metrics import structural_similarity as compare_ssim

    (score, diff) = compare_ssim(newFrame, oldFrame, full=True)

We can compare this value to a constant threshold, which is nice as it allows us to fine-tune the sensitivity. If our SSIM value is less than our threshold, we trigger our LEDs. This works fine while in a match, but what about between games or while browsing the menus?
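Putting the comparison and threshold together, a minimal decision helper might look like the sketch below. It uses `structural_similarity` from scikit-image (the current name for `compare_ssim`); the 0.6 threshold and the `goal_detected` name are my own placeholders, to be tuned against your capture region:

```python
from skimage.metrics import structural_similarity

# Assumed sensitivity threshold; lower it if goals are missed,
# raise it if you get false positives.
SSIM_THRESHOLD = 0.6

def goal_detected(old_frame, new_frame, threshold=SSIM_THRESHOLD):
    """Compare two preprocessed (grayscale, thresholded) score frames.

    A low SSIM score means the digit changed between frames,
    which we treat as a likely goal.
    """
    score, _diff = structural_similarity(old_frame, new_frame, full=True)
    return score < threshold
```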


This part of the program could be vastly improved upon, but this quick hack seems to work about 80% of the time. Since we implemented thresholds, our program already tunes out any pixels below a certain intensity. We can also build in a cooldown feature so that between goals we minimize the possibility of a false positive.
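One way to sketch that cooldown is a small helper that swallows triggers for a fixed window after each goal. The class name and the 10-second window below are assumptions, not the project's actual code:

```python
import time

class GoalCooldown:
    """Suppress repeat triggers for `cooldown` seconds after a goal.

    The injectable `clock` makes the helper easy to test; by default
    it uses the monotonic system clock.
    """

    def __init__(self, cooldown=10.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock
        self._last_trigger = -float("inf")

    def should_trigger(self):
        """Return True (and start the cooldown) only if enough time has passed."""
        now = self.clock()
        if now - self._last_trigger >= self.cooldown:
            self._last_trigger = now
            return True
        return False
```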

Eventually it would be ideal to detect the color of the scoreboard before making any frame comparisons, to verify we are actually in an active game.
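As a rough sketch of that idea, we could compare the mean color of the captured region against the expected team background before running any SSIM comparison. The BGR values and tolerance below are made-up placeholders that would need to be sampled from an actual in-game capture:

```python
import numpy as np

# Placeholder BGR average for the blue-team scoreboard background;
# sample real values from a screenshot before relying on this.
TEAM_BGR = np.array([180.0, 120.0, 60.0])
COLOR_TOLERANCE = 60.0  # assumed per-channel tolerance

def in_active_game(frame_bgr):
    """Guess whether the captured region shows a scoreboard by checking
    that its mean color is close to the expected team background color."""
    mean_bgr = frame_bgr.reshape(-1, 3).mean(axis=0)
    return bool(np.all(np.abs(mean_bgr - TEAM_BGR) < COLOR_TOLERANCE))
```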

Final thoughts

Computer vision is a great way to extract data from video games, but it is difficult to make 100% reliable. This is only one method among many for gathering data. As always, thanks for reading!

Code can be found here:
