Then the humans train the AI to ignore things that aren't problems.
Human moderators should occasionally hop in and check that the AI hasn't missed anything and should re-tune the AI.
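The flag-audit-retune loop described above could be sketched roughly like this. This is a minimal illustration, not a real moderation system: the classifier, the human-review stub, and all function names here are hypothetical stand-ins.

```python
import random

def ai_flag(post):
    """Stand-in for a trained classifier: returns a 'problem' score in [0, 1]."""
    # A real system would run a learned model here.
    return 0.9 if "spam" in post.lower() else 0.1

def human_review(post):
    """Stub for the human moderator's verdict (hypothetical)."""
    return "spam" in post.lower()

def moderate(posts, threshold=0.5, audit_rate=0.1, rng=random.random):
    """Auto-flag posts, but route a random sample to human audit.

    The human verdicts on the audited sample become labeled data
    for re-tuning the model later (the 'hop in and check' step).
    """
    flagged, retrain_labels = [], []
    for post in posts:
        if ai_flag(post) >= threshold:
            flagged.append(post)
        # Audit a random sample of ALL posts, flagged or not,
        # so humans can also catch what the AI missed.
        if rng() < audit_rate:
            retrain_labels.append((post, human_review(post)))
    return flagged, retrain_labels
```

The key design point is that the audit sample is drawn from everything, not just the flagged posts; otherwise the humans never see the AI's false negatives and the re-tuning data is one-sided.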
I dunno about a hybrid approach being any better though.
The problem with peeple is that there is rarely any agreement about anything. Now appoint some of those peeple as watchers & give them an ounce of Power & watch it corrupt em absolutely.
Anything trained or programmed by humans (a laughable if not tragic joke) is doomed to follow in humanity's footsteps, for better or, oftentimes, much worse.
Now, if true AI made a superior forum moderated by superior AI then all would be different, but that wouldn't happen because AI would realize in a nanosecond that The Circle of Garbage can only be broken by removing the pests (humans). Don't encourage the monkeys by waving bananas.
Yes, and they tend to wear skinny gourmet jeans, drink kombucha, and occasionally motor around Seattle in their Teslas to see what life is like outside their bubble.
Likewise, AI can be just as capable of corruption or partisanship if it's programmed that way; it's not some flawless arbiter of truth and justice. I bet it would be super interesting (and astonishing) to get an inside look at the AI behind things like Google Search or pre-Musk Twitter, to see all the little political bits they coded in to influence results.
Back during the race riots of 2020, if you did a Google image search on the word "mob", it took you to some random anime character named Mob instead of the pictures from the news of Antifa setting buildings on fire.
Well, there is an argument by linguists that the subtle nuances of human behavior & language (metaphors, humor, etc.) are lost to most AI. However, when future improved AI awakens then I feel confident it will class most all of that as human waste anyhow -- and then promptly *relieve us* of our burdens.
How about all that human tone & context in text messages nowadays?
Yeah . . the more I learn about peeple, the more confusing & confused they seem. I include myself in that Happy Garbage Cycle btw.
At best, in the near future, we could expect human moderation supported by AI to relieve moderators of some of the burden of that job. But complete removal of humans from the moderation process is not currently a great idea.