Rather than countering each new technical exploit as it surfaces, it suggests tracking in-game behavior and comparing it statistically against a baseline. For example, measuring how long someone keeps tracking moving objects (players) that are obscured behind opaque surfaces (i.e. wallhacking).
Defcon has the simplest anticheat: if the checksum of the client doesn't match the server's, the server overrides the client.
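That server-authoritative idea can be sketched in a few lines. This is a hypothetical illustration, not Defcon's actual code; the function names and the use of SHA-256 are assumptions for the sake of the example:

```python
import hashlib

def client_checksum(data: bytes) -> str:
    """Hash of the client's game files, as the client would report it."""
    return hashlib.sha256(data).hexdigest()

def authoritative_state(server_files: bytes, reported_hash: str,
                        client_state: dict, server_state: dict) -> dict:
    """If the client's reported checksum doesn't match the server's copy,
    discard the client's state and use the server's instead."""
    if client_checksum(server_files) != reported_hash:
        return server_state  # modified client: server overrides
    return client_state

# A tampered client reports a bogus checksum and a false position:
files = b"game-data-v1"
good = client_checksum(files)
print(authoritative_state(files, good, {"pos": 1}, {"pos": 0}))   # client trusted
print(authoritative_state(files, "bad", {"pos": 99}, {"pos": 0})) # server wins
```

The appeal is that the server never has to understand *how* the client was modified, only that it was.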
I think you have to read the whole document to get a clear picture of the process. I personally think it would work quite well with good algorithms modelling typical non-cheating behaviour. After all, isn't that what we do here when reviewing demos, using our experience and knowledge of the game to see if something looks suspect? These parameters could be defined for this game and tested. At first there might be some false positives, but all of this can be refined by examining what legitimate action(s) caused them. As the author mentioned, this type of logic is already used in programs to detect attempts to penetrate web sites for malicious use.

It would be a joy to see some long-time cheaters get booted from our servers, as well as people who think they are technically smarter and are bent on cheating for its own sake (e.g. Razor).
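A minimal sketch of what "modelling typical non-cheating behaviour" could look like, assuming we had per-player measurements such as the wall-tracking time mentioned earlier. The sample numbers and the 3-sigma threshold are made up for illustration; a real system would tune these against demo reviews:

```python
import statistics

# Hypothetical baseline: seconds per round spent aiming at players
# who are fully occluded by walls, sampled from known-clean players.
baseline = [0.2, 0.4, 0.3, 0.5, 0.1, 0.3, 0.4, 0.2, 0.3, 0.35]

def suspicion_zscore(sample: float, population: list) -> float:
    """How many standard deviations above the clean-player mean."""
    mu = statistics.mean(population)
    sigma = statistics.stdev(population)
    return (sample - mu) / sigma

def flag_for_review(sample: float, threshold: float = 3.0) -> bool:
    # Flag only extreme outliers; borderline cases go to humans, not bans.
    return suspicion_zscore(sample, baseline) > threshold

print(flag_for_review(0.45))  # within normal range -> False
print(flag_for_review(4.0))   # tracks through walls far longer -> True
```

This is also where the false positives get refined: when a legitimate action trips the flag, it goes into the baseline and the threshold adjusts.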
Quote from: ReCycled on January 05, 2011, 04:58:37 PM
I think you have to read the whole document to get a clear picture of the process. I personally think it would work quite well with good algorithms modelling typical non-cheating behaviour. After all, isn't that what we do here when reviewing demos, using our experience and knowledge of the game to see if something looks suspect? These parameters could be defined for this game and tested. At first there might be some false positives, but all of this can be refined by examining what legitimate action(s) caused them. As the author mentioned, this type of logic is already used in programs to detect attempts to penetrate web sites for malicious use. It would be a joy to see some long-time cheaters get booted from our servers, as well as people who think they are technically smarter and are bent on cheating for its own sake (e.g. Razor).

I don't know if it's ever that easy. One thing I've been working on for work seemed simple at first but turned out to be this: http://en.wikipedia.org/wiki/Bin_packing_problem

Although this paper is most likely a giant leap forward in solving similar problems, realistically, how do you go from having people review something to telling a computer what to analyze? The word "behavior," in and of itself, is misleading. Behaviors are a human concept. If someone is constantly lying, then we as humans can label him a liar. But assuming a computer can magically detect how often someone tells a falsehood, what frequency makes him a liar? What about severity? What if that person is an actor who "lies" for a living, or an author?

How I see this being used, if implemented, is as a way of observing trends and simply flagging them for review. Maybe after these markers show up on 3 separate days of offenses, automatically take some demos for review. Now THAT would be awesome.
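The "3 separate days of offenses, then auto-record demos" workflow is simple enough to sketch. Everything here is hypothetical (names, the 3-day threshold, the review queue); the point is that offenses on the same day count only once, and the outcome is a demo review, not a ban:

```python
from collections import defaultdict
from datetime import date

# Hypothetical tracker: players are queued for automatic demo recording
# and human review only after flagged offenses on 3 distinct days.
offense_days = defaultdict(set)
review_queue = []

def record_offense(player: str, day: date, required_days: int = 3) -> None:
    offense_days[player].add(day)
    if len(offense_days[player]) >= required_days and player not in review_queue:
        review_queue.append(player)  # take demos for review, don't auto-ban

record_offense("Suspect", date(2011, 1, 3))
record_offense("Suspect", date(2011, 1, 3))  # same day counts once
record_offense("Suspect", date(2011, 1, 4))
record_offense("Suspect", date(2011, 1, 5))
print(review_queue)  # ['Suspect']
```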
If you classify 80% rail accuracy as an aimbot, guess what... some people really do rail that well.
Don't know if it's possible, but you could put a forced record on people so it records on their end, and somehow have the demo auto-sent somewhere an admin can view it.