PCG Article: Uh oh, people are now using machine learning to cheat in Rocket League


I wouldn't be surprised if this becomes a real problem soon in every competitive game: bots with perfect timing, reflexes, aim, etc. They'll even learn the best strategies from the human players, or rather, learn how to counteract the best strategies.

Fortunately, a lot of anti-cheat software for shooters already looks for things like inhuman reflexes, so the users may not be overly successful, or at least not for long.
 
That's pretty cool though. It would be great if they could use machine learning to make better AI for games, though I think it's a lot easier to train a bot to do something perfectly than to be a humanlike opponent or otherwise fun to play against.

I wonder if multiplayer games are just going to throw in Captchas to detect bots.
Seems like you could let them learn to play the game and then nerf them a little by slowing their reaction time and perhaps making them shoot less accurately. They would still behave "intelligently" but would have more human reflexes, aim, etc. I suppose you could even change how much damage they deal/take.

The AI would be more interesting, if not necessarily much more difficult. People would be upset if they lost to AI a lot more than they do already.
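If you wanted to try that, the nerfing could be as simple as feeding a trained bot stale observations and blurring its aim. A minimal Python sketch, where `policy`, the 9-frame delay, and the jitter amount are all invented for illustration:

```python
import random
from collections import deque

class HumanizedBot:
    """Wraps a trained bot policy with artificial reaction delay and aim error.

    `policy` is a hypothetical function mapping a game state to an action
    dict with an `aim` angle; the names and numbers here are illustrative.
    """

    def __init__(self, policy, delay_frames=9, aim_jitter_deg=3.0):
        self.policy = policy
        self.buffer = deque(maxlen=delay_frames)  # holds stale observations
        self.aim_jitter_deg = aim_jitter_deg

    def act(self, state):
        # React to an observation from `delay_frames` ago (~150 ms at 60 fps).
        self.buffer.append(state)
        delayed_state = self.buffer[0]
        action = self.policy(delayed_state)
        # Blur the aim so shots are no longer pixel-perfect.
        action["aim"] += random.gauss(0.0, self.aim_jitter_deg)
        return action
```

The bot still "thinks" at full strength; only its inputs and outputs get the human treatment.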
 

Machine learning bots don't really seem to behave very intelligently though. They mostly just manage to learn a single trick that works often enough that they can rely on it to win (a.k.a. they get stuck in a local optimum). It's really difficult to get complex behaviour out of a machine learning model. The article itself mentioned how they seemed to get worse at the kick-off after a while (because it wasn't necessary to stay good at it in order to win games).

See also this video I watched recently:

https://www.youtube.com/watch?v=2tamH76Tjvw

It shows how easy it is to overfit on a specific solution and how long it can take to adapt to a new environment.
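That "one trick" failure mode is just greedy optimization stopping at a local optimum. A toy Python illustration (the reward landscape is made up): a climber that only accepts improving moves settles on the small nearby peak and never discovers the far better strategy:

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill-climbing: only accept moves that improve the score,
    which is roughly how a naive learner gets stuck on one trick."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbouring move improves: a local optimum
        x = best
    return x

# An invented reward landscape: a small peak at x=1, a much bigger one at x=5.
def reward(x):
    return max(2 - abs(x - 1), 10 - 4 * abs(x - 5))

# Starting near the small peak, the climber stops at x ~= 1 (reward 2)
# even though x = 5 (reward 10) is available.
x_final = hill_climb(reward, 0.0)
```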
 
If I understand the underlying issues here (and that's a big "if"), then I actually noticed this myself with the face-swapping software I mentioned in another post. I think we're still in the very early stages of designing these things. Right now, the goal seems to be a completely hands-off learning process for the bots. That's an understandable goal, but it leads to a bunch of problems and inefficiencies.
 
I'm interested in your thoughts on something. I don't have a lot of knowledge on face swapping tech. But like when we try to do a picture, like you did of your wife as a baby, it seems like it takes a lot to accomplish that. You had to get several pictures of your wife and babies, and then it took a while to compile them into a picture. Yet you can get on Snapchat, or whatever it is, and the filters will almost immediately superimpose your face on moving video, even with mouth movements and facial expressions. How in the world can they do that so quickly when it seems like it's so hard to do things that seem simpler than that?
 
Any of you familiar with Martin Gardner? He did something on Machine Learning 50-60 years ago, but I don't remember what it was.

I do remember constructing a 'machine' from matchboxes ~50 years ago, which would learn how to perform some task optimally thru iteration. Again, I don't recall the task, it was probably some intriguing project in a magazine.

There were a lot of matchboxes involved, and small balls to put inside them. I'm totally guessing now, but I imagine each box represented a possible choice, and probably got a ball if it became part of a winning 'route'. Once optimized, you simply made choices for the 'machine' based on whether or not an available box contained a ball.
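That scheme translates to code almost directly. A rough Python sketch of a matchbox learner as described above, one box per game state, beads for each possible move, and reinforcement along winning routes (the details are my guesses, not the original magazine project):

```python
import random

class MatchboxLearner:
    """Toy version of the matchbox machine described above: each 'box' is a
    game state, each bead count is a move's weight, and routes that win get
    extra beads so those choices become more likely next time."""

    def __init__(self):
        self.boxes = {}  # state -> {move: bead_count}

    def choose(self, state, legal_moves):
        # Open the box for this state (create it with one bead per move).
        box = self.boxes.setdefault(state, {m: 1 for m in legal_moves})
        moves, beads = zip(*box.items())
        return random.choices(moves, weights=beads)[0]

    def reinforce(self, route, won):
        # Add a bead along a winning route; remove one (floor of 1) on a loss.
        for state, move in route:
            box = self.boxes[state]
            box[move] = box[move] + 1 if won else max(1, box[move] - 1)
```

After enough games, reading off the heaviest bead count in each box gives you the optimized "machine".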

Machines have been performing better than humans at many tasks for a long time, which has made huge contributions to our economic and tech advancement. If machines can crack one of humanity's biggest flaws—that everyone has to learn all from scratch—the advances could be way off any scales we could imagine.

That said, it'll likely be a long time before machines git gud at the higher-level brain processes when there isn't complete data available—like behavioral pattern recognition, decision making, risk analysis, etc.
 
(because it wasn't necessary to stay good at it in order to win games).
I was thinking about this some more and I'm not sure that's the reason the AI got worse at that part of the game. I think it didn't have proper end parameters and kept "learning" when it should have quit because it didn't know what the heck it was doing the kickoff for.

In the face-art program I used briefly, there came a point where the picture it created was as good as it was going to get, but the AI didn't understand that and kept going and the picture got worse and worse from there.
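What seems to be missing in that case is an early-stopping rule: track the quality score and quit once it stops improving. A sketch in Python, where `step` and `score` are stand-ins for whatever the real program does internally:

```python
def train_with_early_stopping(step, score, patience=5, max_iters=1000):
    """Run training iterations, but stop once the score hasn't improved
    for `patience` iterations in a row, instead of blindly continuing."""
    best_score, best_iter, stale = float("-inf"), 0, 0
    for i in range(max_iters):
        step()
        s = score()
        if s > best_score:
            best_score, best_iter, stale = s, i, 0
        else:
            stale += 1
            if stale >= patience:
                break  # quality stopped improving: quit and keep the best
    return best_score, best_iter
```

The trick is keeping the best result seen so far, so even if the picture gets worse afterwards, you hand back the good one.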


It shows how easy it is to overfit on a specific solution and how long it can take to adapt to a new environment.
It couldn't learn in actual matches. You would have to record human matches and let it simulate those over and over again. If that's even what the video was talking about. I'm saving watching it for bedtime.
 

Zloth

Community Contributor
It shows how easy it is to overfit on a specific solution and how long it can take to adapt to a new environment.
Maybe not a bad thing.

What if an AI like this were put into a 4X game? One player can't provide nearly enough play-throughs for the AI to advance well but, if the game collects the play habits of several thousand players, you could get it going (I think). After chewing on the games played in the first month or so, the developers patch everyone's game with a new AI based on that data. They do the same the next month, and the next, and the next.

That should make it so the game will play differently every month. Instead of finding your sure-fire way to win, even at the hardest difficulty, you'll find your sure-fire way countered the next time the game gets patched. There will also be strategies that don't work well against the first AI, so people won't use them much, so hopefully the new AI won't work as well against them. It might eventually get too hard, but it shouldn't be hard for the developers to revert to a previous AI and go through it again.
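The monthly cycle would boil down to something like this (all four callables are hypothetical stand-ins), with old AIs kept around so reverting is cheap:

```python
def monthly_ai_update(collect_replays, train_ai, publish_patch, snapshots):
    """One cycle of the scheme described above: gather a month of player
    games, train a new AI on them, ship it in a patch, and keep every old
    AI in `snapshots` so the developers can roll back if it gets too hard."""
    replays = collect_replays()        # a month of player games
    new_ai = train_ai(replays)         # new AI based on that data
    snapshots.append(new_ai)           # keep history: reverting is trivial
    publish_patch(new_ai)              # everyone's game gets the new AI
    return new_ai
```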
 

Setting the right parameters is definitely one of the hard parts of machine learning. It's not easy to determine how big the rewards and penalties should be for each action or situation.
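Concretely, the tuning problem looks like picking weights for a shaped reward. The numbers below are invented purely for illustration, not taken from any real bot; note how a tiny kickoff weight lets a bot neglect kickoffs and still rack up reward overall, much like the article described:

```python
# Hypothetical reward weights for a Rocket League-style bot.
REWARD_WEIGHTS = {
    "goal_scored": 100.0,
    "goal_conceded": -100.0,
    "ball_touch": 0.5,
    "good_kickoff": 1.0,  # dwarfed by goals: the bot can "forget"
                          # kickoffs and barely notice the lost reward
}

def shaped_reward(events):
    """Sum the weighted reward for the events seen this step."""
    return sum(REWARD_WEIGHTS[e] for e in events)
```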


A recorded human car can't respond to actions taken by a bot though, so the bot doesn't actually learn much from that. I think bots are typically trained without opponents at first and then against other bot opponents. Only after that can you get data from matches against humans, but that'll go a lot slower and the variety of behaviours of humans is much larger.
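That's the appeal of the self-play stage: the opponent is a frozen copy of the learner, so it actually responds to the bot's actions instead of replaying a recording. A minimal sketch, where `train_one_match` is a stand-in for the real training step:

```python
import copy

def self_play_generation(bot, train_one_match, n_matches=100):
    """One generation of self-play: the bot trains against a frozen
    snapshot of itself. Only the learner improves during the generation;
    the snapshot opponent stays fixed so training is stable."""
    opponent = copy.deepcopy(bot)  # frozen snapshot: doesn't learn
    for _ in range(n_matches):
        train_one_match(bot, opponent)
    return bot
```

Run it repeatedly and each generation faces a slightly stronger version of itself.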


Any decent 4X is so complex it becomes really hard for a machine learning AI to learn how to properly play without significant guidance. Even then it'll probably just find one particular strategy that works well enough that it can ignore most of the complexity of the game.

What you might be able to do is train multiple AIs that each focus on a different part of the game, so you get different "personalities". But it'll probably never get close to how actual humans play (at least not before some big advances in machine learning technology).
 
What if you programmed a bunch of different strategy types into the AI, and the machine learning was only used to react to what its opponents do, keeping it on track with whichever high-level strategy was randomly selected for that match?

As you said, though, training is going to be hard to do.
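Structurally that hybrid would look something like this: a scripted strategy picked per match, with the learned part only handling reactions (every name here is illustrative):

```python
import random

def play_match(strategy_plans, learned_counter, opponent_moves):
    """Hybrid scheme from the post above: a hand-written high-level
    strategy is chosen at random for the match, and a learned component
    only decides how to counter each opponent move within that plan."""
    strategy = random.choice(sorted(strategy_plans))  # scripted, not learned
    actions = [learned_counter(strategy, move) for move in opponent_moves]
    return strategy, actions
```

The machine learning problem shrinks from "play a 4X game" to "pick a counter-move given the current plan", which is far more tractable.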
 
I'm not surprised at all. My friend does that all the time (I mean cheats in games).

Yes, it's true that some people are using machine learning to cheat in Rocket League. This involves using artificial intelligence algorithms to gain an unfair advantage in the game by predicting the movements of the ball and other players. Some cheaters use machine learning models to automate certain actions, such as aiming and shooting, giving them an edge over their opponents.
 
