I'm making the point that this is a common problem when you try to restrict or block bad actors - you get collateral damage. I thought that was obvious.
The fact is you CANNOT know unless you know their systems. It's impossible.
You need to prove they are responses from bots or you are just looking to get rid of people you don't agree with.
It looks like someone else is already pointing that out. I'm just noting they aren't the only person who has noticed it.
There's also a saying from the medical profession that I'm fond of; it relates to Occam's Razor and applies here.
"If you hear hooves, think horses, not zebras."
The implication being that if you have a piece of evidence or a symptom, you should default to the most banal and obvious cause rather than the exotic one.
This isn't actually true. The people who produced the paper saying this took the outputs of an LLM and fed them into a new LLM, then took the outputs of that and fed them into another new LLM, and eventually said 'hey, we trained an LLM purely on low-quality synthetic garbage and got garbage'.
That's like putting two people in a locked room, coming back in five generations' time, and being shocked that you've reinvented the Habsburgs.
Companies are spending billions of dollars on generating synthetic training data; it's not like they forgot to read the paper before doing so.
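If you want to see the mechanism that paper describes, here's a minimal toy sketch. It is not the paper's actual experiment: a Gaussian fit stands in for the LLM, and the sample size and generation count are invented for illustration. Each "generation" is trained only on a finite sample drawn from the previous generation's model, so rare values get under-sampled and are never seen again:

import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # generation 0: the original "human" data distribution
n = 20                 # small finite sample per generation drives the effect

for gen in range(1, 31):
    samples = rng.normal(mu, sigma, n)         # "outputs" of the previous model
    mu, sigma = samples.mean(), samples.std()  # "train" the next model on them alone
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

Run it and sigma tends to shrink generation after generation. That loss of the tails is the 'garbage in, garbage amplified' result, and it only happens because each model is fed nothing but its predecessor's output.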
You're talking about something entirely different.
What I'm referring to is how ALL AI works right now.
At the point of creation or release, AI has the pool of the internet to train on, right? So at that point in time, 100% of it is obviously entirely created by humanity, yes? It must be, as AI didn't create anything before this.
So now it gets to work. People ask it things, and it creates things. That output is now on the internet. So the NEXT AI trained uses the internet INCLUDING any previously AI-created output.
That's what dilution is!
So as time goes on and more AI output is created, it gets ever worse. This is really, really simple and obvious.
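A back-of-the-envelope sketch of that dilution claim (the growth rates below are invented purely for illustration; only the direction of the trend matters):

human = 1.0          # normalized: the pool starts as all human-made content
ai = 0.0

HUMAN_GROWTH = 0.02  # assumed: new human content per year (fraction of original pool)
AI_GROWTH = 0.20     # assumed: new AI-generated content per year

for year in range(1, 11):
    human += HUMAN_GROWTH
    ai += AI_GROWTH
    print(f"year {year:2d}: human share of an uncurated crawl = {human / (human + ai):.1%}")

Under those assumed rates, the human share of an uncurated crawl falls from 100% to roughly a third within a decade, which is the dilution being described. The counter-argument below is that real training sets aren't uncurated crawls.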
LLMs aren't just giant vacuums that suck in random internet content without regard for its source. The information being used to train them is getting more curated over time, not less.
I don't think that's a battle we're going to win, unfortunately. I just say DRL these days because it's faster to explain what I mean by DRL than to say 'AI... no, not that one.'
Oct 13, 2024
The Best Game
https://steamcommunity.com/app/570/reviews/?p=1&browsefilter=mostrecent#scrollTop=53672
https://steamcommunity.com/groups/UKCS/members?searchKey=%F0%9D%92%9C%F0%9D%93%81%F0%9D%92%BE%F0%9D%93%83%F0%9D%93%80%F0%9D%92%B6
Valve has no shortage of illegal fake reviews flooding The Best Game - look at the store page graph for the days of unusual review activity spam that counts towards the overall review score.
Illegal fake reviews?
Can you cite the law being broken, or is it merely your own definition of illegal based on criteria you came up with?
Oh, now what is Valve to do? It's not like they are removing the thousands of blatant ones weekly of their own accord.
Tell that to the FEDs, Valve: The Best Game - just thousands of deceptive, AI-scripted review-bot spam posts on record counting towards the overall review score of your product, weekly. Valve did no wrong.
Would those reviews YOU deem illegal fall under that? Doubtful.