If you have a public screenshot with a cross in it, it could be a problem, unless it's a religious icon.
But even if your so-called manji is reflected in a mirror, it's still a manji icon. That makes it problematic to write an automatic rule, because a system may not be able to detect how an image was intended, and whether it is a swastika or a mirrored manji.
How to deal with that?
Don't know why this is so hard to grasp.
Valve cracked down on Nazi images and names back in 2018. You can actually see it mentioned in that letter from the Senate being spammed around the forums. However, current Valve moderation only acts on reports, so if content isn't reported it won't be removed.
I would have been much more interested in the report if they had actually told Steam about the objectionable content they were probably the second person ever to see, and then tracked how long it took Steam to remove it. Even if they did this by sending a million URLs to Valve at once and getting extremely skewed results that way, it would have been better than just announcing "we found a problem, and here are some very out-of-touch suggestions that don't apply to any forum that has ever been successfully run."
Because an automatic system that scans for this image may not be able to distinguish a mirrored manji from a swastika, and would end up reporting such content inappropriately.
How to devise an automatic system that does not make this mistake?
It may still end up reporting images inappropriately, and through that error propagate the symbol regardless.
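The mirror-image problem above can be made concrete. This is a minimal, hypothetical sketch (the 5x5 grids are crude stand-ins for real images, and the matcher is deliberately naive): because a swastika is the horizontal mirror of a manji, any detector that normalizes away mirroring will, by construction, flag both symbols identically.

```python
# Hypothetical illustration: a manji and a swastika are horizontal mirror
# images of each other, so a detector that treats an image and its mirror
# as "the same symbol" cannot tell the benign icon from the hate symbol.

# Crude 5x5 approximation of a manji shape ("X" = stroke, "." = empty).
MANJI = [
    "X.XXX",
    "X.X..",
    "XXXXX",
    "..X.X",
    "XXX.X",
]

def mirror(grid):
    """Flip a grid left-to-right, as a mirror would."""
    return [row[::-1] for row in grid]

# By definition, the swastika grid is the mirrored manji grid.
SWASTIKA = mirror(MANJI)

def naive_match(candidate, target):
    """A mirror-invariant matcher: flags the image or its reflection."""
    return candidate == target or mirror(candidate) == target

# The benign manji matches the banned target just as well as the
# swastika itself does, producing the false positive described above.
print(naive_match(MANJI, SWASTIKA))     # True
print(naive_match(SWASTIKA, SWASTIKA))  # True
```

The same ambiguity holds for any orientation-normalizing technique (perceptual hashes computed over flips, rotation-invariant features): the information needed to separate the two symbols is exactly the information such systems discard.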
Also, carving a line in stone by figuring out the exact dividing point between mirrored manjis and swastikas would just result in bad actors staying barely on the right side of that line. The reason we have human moderators making decisions at all is exactly that: someone needs to be able to say "no, that's ♥♥♥♥♥♥♥♥" when someone tries to pretend they're not breaking the rules, and also to see when someone isn't actually breaking the rules even though a naive reading of the rules would say they are.
Yes, putting anything resembling Nazi symbolism on a profile is against the rules. As it should be.
But a human has to look at it and say "yes, that's a swastika" or "no, it's not". You can't have a computer deciding on its own, because then it becomes a game of finding the most offensive image the computer doesn't catch. A human can just say "no, you're breaking the rules no matter how much you pretend not to", whereas a computer is deterministic: it gives the same answer every time for the same input, which means the line between rule-breaking and not must be fixed long ahead of time.