Quote:
Originally Posted by Entreri
1. Set up automated methods to catch bots. Make them very specific so false positives are very unlikely.
|
You're totally wrong: specific checks would be circumvented extremely easily. Detecting a bot is all about global (statistical) checks, or a subtle balance between the two kinds. Many legitimate players would fail the "specific checks" but would pass global checks.
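To make the distinction concrete, here is a minimal sketch of the two kinds of check. Everything in it is invented for illustration (the coordinates, the threshold, the function names); it is not Anet's code, just the general idea: a specific check matches one known bot signature and goes stale the moment the bot author changes it, while a global check looks at a statistical property (humans are sloppy about timing, simple bots click on a near-perfect clock) that is much harder to fake.

```python
import statistics

def specific_check(clicks):
    """Specific check (hypothetical): flag the exact click pattern of one
    known farming bot. Trivial for the bot author to change, so it goes
    stale quickly -- but it almost never flags a real player."""
    KNOWN_BOT_PATTERN = [(120, 340), (980, 120), (120, 340)]  # made-up coordinates
    return clicks[:3] == KNOWN_BOT_PATTERN

def global_check(intervals):
    """Global check (hypothetical): flag sessions whose action-to-action
    timing is suspiciously regular. The 0.05 s threshold is a guess."""
    if len(intervals) < 10:
        return False  # not enough data to judge
    return statistics.pstdev(intervals) < 0.05

# A scripted bot: one action every 2.00 +/- 0.01 seconds.
bot = [2.0, 2.01, 1.99, 2.0, 2.0, 2.01, 1.99, 2.0, 2.0, 2.01]
# A human at the same average pace, but messy.
human = [1.2, 3.5, 2.2, 0.8, 4.1, 2.6, 1.1, 3.0, 2.4, 1.9]

print(global_check(bot))    # True
print(global_check(human))  # False
```

Note how the global check catches any bot that clicks on a timer, not one particular bot, but set the threshold too high and it starts flagging very methodical human players. That trade-off is the "subtle balance" in question.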
Quote:
2. Ban players suspected to be bots via automated methods.
|
Impossible for legal reasons, I believe.
Quote:
Here's my question to you. Tell me why you think A-net isn't taking these exact three steps I just mentioned. Also, tell me why you think one ban costs more then the profit made from selling a copy of GW. That's the real topic at hand.
|
Here is the answer: because it takes time, and therefore money. Log management requires expert knowledge (not a degree, but it's quite complex) in two ways: understanding the log messages (the technical part) and interpreting them (how do you differentiate a bot from a human? plus the legal aspects, if you ever need to "prove" to a judge that banning the player was justified).
Bots improve all the time: every time you create a successful check, they adapt. So your detection system needs to evolve constantly, and you need a team of people who understand it. Sometimes you even need to implement new in-game features; that's what happened with the /report feature, among other updates. All of this is very costly, and it unfortunately can't be handled in a manner as simple as the one you suggest. There is no "problem solved" in this case; the problem is ongoing and permanent. You may come out ahead by banning scammers or dupers, but you won't in the case of bots/RMTs.
Once again, as I said before, if you knew what a log looks like and what log management entails, you wouldn't think it's that simple or fast. It is not: it requires less knowledge than programming the game, but far more than playing it. Bots will try unexpected sequences of messages, so you have to go through the sequences by hand, trying to understand what doesn't make sense: which message couldn't have been sent by a legitimate client, or which ones correspond to automated rather than human actions, possibly examining keyboard and mouse usage, which is A LOT of data. And most importantly, it takes more time than you think: a basic case shouldn't take more than an hour, the average runs 2 to 3 hours of combined employee time, and the most difficult cases can reach 10 hours (and those, in general, are the ones that make the team think up new ways to catch the "bad guys").
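The "which message couldn't have been sent by a legitimate client" part can be sketched mechanically. The message names and the transition table below are entirely invented (real game protocols are far larger); the point is only the shape of the work: encode which message may legally follow which, then flag every transition in a session that no real client could have produced.

```python
# Hypothetical log triage: flag message transitions that a legitimate
# client could never produce. Message names and rules are invented
# for illustration -- real protocols differ.

LEGAL_NEXT = {
    "LOGIN":      {"ENTER_ZONE"},
    "ENTER_ZONE": {"MOVE", "ATTACK", "LOOT", "EXIT_ZONE"},
    "MOVE":       {"MOVE", "ATTACK", "LOOT", "EXIT_ZONE"},
    "ATTACK":     {"MOVE", "ATTACK", "LOOT"},
    "LOOT":       {"MOVE", "ATTACK", "LOOT", "EXIT_ZONE"},
    "EXIT_ZONE":  {"ENTER_ZONE", "LOGOUT"},
}

def find_impossible_transitions(log):
    """Return (index, previous_msg, msg) for every transition the real
    client UI could not have sent, e.g. acting before LOGIN."""
    flags = []
    prev = None
    for i, msg in enumerate(log):
        if prev is None:
            if msg != "LOGIN":
                flags.append((i, prev, msg))
        elif msg not in LEGAL_NEXT.get(prev, set()):
            flags.append((i, prev, msg))
        prev = msg
    return flags

session = ["LOGIN", "ENTER_ZONE", "MOVE", "LOOT", "LOOT", "LOGOUT"]
print(find_impossible_transitions(session))
# [(5, 'LOOT', 'LOGOUT')] -- logging out mid-zone is not in the table
```

Even in this toy form you can see the ongoing cost: every game update changes the table, and every bot that learns to send only "legal" sequences pushes the analysis down into timing and input data instead.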
You cannot possibly claim that it's cheaper for Anet to ban than not to ban, because you'd need pretty good estimates (I mean actual numbers, not fuzzy statements and words) and be able to back your reasoning. Everyone who works in computing, particularly on the server side, knows that checking logs is a big problem. There are automated tools, of course (and no doubt Anet has some pretty good ones, which means cases can be resolved in a matter of hours), but it's still not that simple. By banning an account, the company exposes itself to legal consequences (no, contrary to popular belief, they can't do whatever they want), and the potential cost of that is huge, so it's only logical that they address the problem at the level of log management.
Quote:
Including the one by Gaile initially quoted in the first post. I suspect a Community Relations Manager hasn't banned bots before either.
|
You're right on one point here: in the end, we can't really settle this, because neither you nor I have the numbers. It's a matter of trust. And who should we trust: Anet's CR (whose job is on the line if something goes wrong), or you (a total unknown, who could even be running an RMT/gold-selling company!)?