RimWorld
Mad Jun 22, 2024 @ 8:07am
Please include GPT 4 into the RimWorld game
Please include GPT 4 into the RimWorld game
whatamidoing Jun 22, 2024 @ 8:16am 
for what purpose
Gabthelazy Jun 22, 2024 @ 8:29am 
Return whence you came, OP. Then return some more.
Mad Jun 22, 2024 @ 8:45am 
Originally posted by brian_va:
https://steamcommunity.com/sharedfiles/filedetails/?id=2960127000

It's only a COMMENTATOR of your actions. Nothing more.
Narn Jun 22, 2024 @ 8:51am 
Not sure how this would add to the game.

You are aware of how these "AI" models work, right? They are multiple programs written by humans which scour the internet and copy-paste every human text conversation they can find. Or they copy-paste every art piece or book made by humans that is on the net. Oftentimes they do this without anyone's consent. After that there is another set of programs written by humans which modifies art pieces under predetermined rules. And another set of programs written by humans which saves all captured text in a massive organized database. After the text is organized, another set of programs written by humans tells the computer to analyze the stolen conversations and copy-paste all the "appropriate" stolen responses. Oftentimes the "AI" is programmed by humans to mix up the stolen responses so they seem original. They also capture human interactions with the "AI" and copy-paste people's reactions to it as well to better refine the tool.

That is it. It is a shadow of man's work. A shadow of us all. It has no self. It is a joke, a mockery of us. It is nothing without what it has been allowed to steal from us. If those databases were gone, it would be what it really is: a dumb program.

I also still don't know how this would add to this game.
technically you could use it to generate pawn talk, but the Speak Up mod already exists and doesn't need GPT. also, GPT is expensive af.
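For what it's worth, a rough sketch of what "generating pawn talk" with GPT-4 could look like, in Python against the OpenAI chat API; the function name, prompt wording, and pawn details here are made up for illustration and are not from the Speak Up mod or any actual RimWorld mod:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def pawn_line(name: str, mood: str, recent_event: str) -> str:
    """Ask the model for one short, in-character line for a colonist."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write a single short line of dialogue for a RimWorld colonist."},
            {"role": "user",
             "content": f"{name} is feeling {mood}. Recent event: {recent_event}."},
        ],
        max_tokens=40,
    )
    return response.choices[0].message.content

print(pawn_line("Sparks", "anxious", "a raid was barely repelled"))

Every generated line would be a separate paid API call with network latency, which is exactly the cost problem mentioned above.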
Henry of Skalitz Jun 22, 2024 @ 9:11am 
Originally posted by Narn:
You are aware of how these "AI" models work right?
They might be. You, on the other hand? No. Everything you said (with the exception of scraping different websites for training data) is wrong. Straight-up misinformation. You went, like, completely off the rails with this one.

I suggest you go learn how it actually works instead of repeating some anti-AI Twitter echo chamber garbage. Research papers are out there, available for anyone to read for free.
Last edited by Henry of Skalitz; Jun 22, 2024 @ 9:16am
Crystal Sharrd Jun 22, 2024 @ 9:12am 
I'd rather not see pirated/stolen content in the game.
Steelfleece Jun 22, 2024 @ 9:13am 
What value would it add relative to the costs likely involved?
Originally posted by Crystal Sharrd:
I'd rather not see pirated/stolen content in the game.
ai learns by example. humans do the same. do you say humans steal ideas when they learn from examples they see or get inspired by something?
also, how can you steal uncopyrighted text?
Narn Jun 22, 2024 @ 9:16am 
Originally posted by Google AI Overview:
Originally posted by Narn:
You are aware of how these "AI" models work right?
They might be. You on the other hand? No. Everything you said is wrong. Straight up misinformation. You went like, completely off the rails with this one.

It's possible. But your statement? No. Everything you said is wrong. Straight-up deflection with no evidence. You slipped off the track this time.
Crystal Sharrd Jun 22, 2024 @ 9:22am 
Originally posted by zxcuicideboy!let's all love lain:
Originally posted by Crystal Sharrd:
I'd rather not see pirated/stolen content in the game.
ai learns by example. humans do the same. do you say humans steal ideas when they learn from examples they see or get inspired by something?
also, how can you steal uncopyrighted text?
1. We do not have AI advanced enough to learn like a human. It's just breaking everything down into math to determine what to copy.
2. The main problem with AI, apart from its environmental impact, is its illegal use of copyrighted material (text, images, code, videos, etc.) without permission, or in violation of its license, for profit.
Originally posted by Narn:
Originally posted by Google AI Overview:
They might be. You on the other hand? No. Everything you said is wrong. Straight up misinformation. You went like, completely off the rails with this one.

Its possible. But your statement? No. Everything you said is wrong. Straight up deflection with no evidence. You slipped off the track this time.
literally read the whitepapers, then speak. i can confirm this dude's words as someone who has been training and fine-tuning models since the days of GPT-2 and text RNNs, before it all blew up. don't act like a pleb and don't act like you know how something works when all you've read is some BS posts or mainstream media articles about it.
Originally posted by Crystal Sharrd:
Originally posted by zxcuicideboy!let's all love lain:
ai learns by example. humans do the same. do you say humans steal ideas when they learn from examples they see or get inspired by something?
also, how can you steal uncopyrighted text?
1. We do not have AI advanced enough to learn like a human. It's just breaking everything down into math to determine what to copy.
2. The main problem with AI, apart from its environmental impact, is its illegal use of copyrighted material (text, images, code, videos, etc.) without permission, or in violation of its license, for profit.
we have no idea how humans learn though, but a lot of our learning process is literally mimicking others' actions and work.
Henry of Skalitz Jun 22, 2024 @ 9:35am 
Originally posted by Narn:
no evidence
You don't get to pull this card when you provide none yourself. Lmao. Genuinely a twitter moment.

Here, have a read of LLaMA's research paper as an example, since you are so ignorant[arxiv.org]
Our training dataset is a mixture of several sources, reported in Table 1, that cover a diverse set of domains. For the most part, we reuse data sources that have been leveraged to train other LLMs, with the restriction of only using data that is publicly available, and compatible with open sourcing. This leads to the following mixture of data and the percentage they represent in the training set:

And here you can educate yourself on how LLMs work in the first place:
https://medium.com/data-science-at-microsoft/how-large-language-models-work-91c362f5b78f

There is no "massive organized database". there's no "stealing", there's no "saved text", there's no "copy paste" going on. All these lies have been debunked ever since Stable Diffusion 1.4 has been released and became widely popular years ago, and later the same thing with SD 1.5, but you still repeat them.

The LLM literally just works on prediction: at each step it chooses the token that fits best as the next word.
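As a concrete illustration of that prediction loop, here is a minimal sketch in Python using the Hugging Face transformers library with GPT-2 (the model choice and prompt are just for illustration); it feeds the text in, takes the highest-probability next token, appends it, and repeats:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The colonists gathered around the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: at every step the model outputs a probability
# distribution over its whole vocabulary; keep the most likely token.
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits        # shape: (1, seq_len, vocab_size)
        next_id = torch.argmax(logits[0, -1])   # highest-probability next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

There is no lookup into a database of saved text anywhere in that loop; the continuation comes entirely from the model's learned weights.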

Are you going to tell me next that the AI description of a photo I just took and sent to an LLM is "stolen" and "copyrighted" as well? Gee, I didn't know you could steal words like "tree", "grass", and "bush" now. Holy crap guys, AI stole the dictionary!!!

But hey, I get where you are coming from: it's easier to repeat popular anti-AI BS from Twitter, hypocritically scream "NO EVIDENCE" at anyone who disagrees with you, and pretend you are in the right rather than actually educate yourself on the subject you want to talk about.
Last edited by Henry of Skalitz; Jun 22, 2024 @ 9:40am

Date Posted: Jun 22, 2024 @ 8:07am
Posts: 22