This is like trying to drive a nail with a busted-up school bus. Where ya gonna park it when you're done?
There really isn't enough data there for ChatGPT to create any narrative worth reading. And, if it were using those things as prompts, most of it would make no sense, since you were there, playing the game, when those "supposed" events occurred. Trying to shoehorn such a feature into the game, even if it were legally possible, would be more effort than any possible return would justify.
Plenty of games do have some basic phraseology-stuffs to create pseudo-narrative "Histories." Dwarf Fortress is famous for it, but it's got a lot of data to draw on to come up with "looks like" narrative stuffs. What games like Rimworld generally have, like timelines and the like, maybe some important events, births, deaths, etc., those are "enough." It's only when the scale gets bigger that some big cohesive narrative might help pull things together so the player gets "The Big Picture." And, the devs who've done it have tweaked such a thing into an entertaining asset, not allowing it to run wild with toenail clippings and left-over burrito reports.
Rimworld's story is told right in front of the player's face as they interact with it. At Rimworld's scale, there's not much going on that is hidden from the player.
To be honest, it's the player who's supposed to be drawing these events together to create their own story. If that were damaged, which is a certainty, then what? It'd actually be bad for the gameplay experience and wouldn't bring anything positive to it. If attempts were made to justify the effort by allowing it to fill in the blanks... it'd damage the gameplay.
If we're speaking of using Hugging Face models, which are less advanced than NovelAI's, then coherence will be an even greater issue, since even the most advanced AI can't understand a lot of things well and can't create decent stories on its own. Note that I didn't include ChatGPT, as OpenAI has draconian censorship policies which will make it tremendously difficult for the AI to work with RimWorld; a lot of questionable things happen here that will cause the AI to refuse to generate what happened. Not by its own volition, but because the programmers went out of their way to censor its outputs as much as possible.
Just saw this. I've seen NovelAI's work, and whilst NovelAI outperforms most models, one model over at Hugging Face that does currently perform better coherence-wise is gpt-neox-20b. If we throw out all time limitations, that would be one of the best right now, but most people don't have a setup with at least 42 GB.
It can be further scratched down time-wise with more monstrous rigs, but that's beside the point. I don't know how much NovelAI uses compared to lower models, though, so I am curious about its usage.
But also, to be fair, NovelAI excels in a lot of specific areas, and NEO has its own pros and cons as well, I've noticed.
https://youtu.be/Jdt6cf9Mei8 For my case it's okay; I definitely need to upgrade my hardware though.
ChatGPT's notoriety has given rise to a LOT of "use ChatGPT in video games" suggestions on the Steam forums. They range from having ChatGPT write narratives, create NPC dialogues, and do simpler things, like random descriptions of "art," all the way to writing code and pressing magic buttons to create games.
The technology, maths... is pretty cool. But, the "magic" there is of human-interpreted origins. We interpret what we see because maths are applied to words and words are brain-think meme carriers. We infer brain-think when there is no wetware brain-think going on, surpassing the intended model of The Turing Test at home on our own computers.
And, in a group of humans who are routinely enmeshed in a computer environment while successfully managing to be self-stimulated via an electronic game, this is their Holy Grail - A thing that knows, computer-stuffs-wise, what the gamer wants before they know they want it.
Press "Play" - receive even moar betterer reward
But... it doesn't work like that.
D.A.N....
Earlier this morning, I was prompted to think on this thread and ChatGPT operating to construct a narrative based on a player's Rimworld progression/timeline and the significant events therein... And, while doing so, I was a bit taken aback by the war crimes committed by certain enthusiastic players and the use of certain Shall Not Be Named mods.
If history has shown us anything, it's that no matter how wondrous a created work may be, there's always going to be someone who's going to want to draw a phallus on it...
Yes - ChatGPT has rules that are intended to filter output so that it can't, probably..., be used "for evil." For some Rimworlders, that's a no-go and ChatGPT would refuse to return the narrative construction their in-game actions would create.
One of the things people get sidetracked about is that these AI models only appear to be doing something original and novel. With ChatGPT and, I assume, other models, words chosen for a response are probabilistically determined using examples of what human writers have written.
It's an extremely robust syntax analysis tool applicable to things that can be constructed using trained rule-sets, and suitable to categorical analyses amenable to, and produced with, predictable, probabilistically governed returns... So, it can construct sentences using the "most likely next word" it has been taught using large sets of data. Computer code also has its own rules of syntax. Working code is presented with "syntax" that is acceptable, and plenty of coders are trying to use ChatGPT like Stack Overflow...
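The "most likely next word" idea can be sketched with a toy model. Everything here (the tiny corpus, the bigram counting) is invented for illustration; real models like ChatGPT use neural networks over subword tokens and context windows far longer than one word, not raw bigram counts.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for web-scale training data.
corpus = "the bird flew south . the bird sang . the cat sat".split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Pick the next word with probability proportional to its count."""
    counts = bigrams[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a few words starting from "the": after "the", this toy model
# picks "bird" twice as often as "cat", because that's what it saw.
word, out = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Scale that counting trick up by many orders of magnitude and replace the counts with a trained neural net, and you get the gist of why the output looks fluent without any "understanding" behind it.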
Two things keep ChatGPT from being taught "wrongly." One is the huge amount of data it has analyzed being a set of data we have already approved as being "kind of OK for human consumption" (pre-filtered by being accessible on the 'net in the first place) and the other are human tutors, rewarding or punishing it for the results it has produced while learning how to produce those results.
At least, inasmuch as I understand the application while enjoying my mug of coffee this morning...
(I'm not a coder, but am somewhat familiar with various AI schemes and some of the practically philosophical ... issues surrounding the subject. I have a request to fulfill regarding ChatGPT et al and have been puzzling over it for the past few days. :))
Don't worry wasn't referring to you
This is sort of how ChatGPT "works."
First: A prompt is necessary to guide it's work and its results. (And, yes, the apostrophe placement their is purposeful...So was that. <- And this.)
Think like ChatGPT...
Every word is like a sphere in a 3D space. That sphere is like a heat-map, with the closest approximation to center being the "word" in question. Other words that it has been taught that are similar to that word are close to it in 3D space, each with their own sphere of relevance, each close to other words that are similar. A general subject, therefore, represents a big heatmap comprised of spheres of words that have close relationships determined by their probabilities of appearing in text that also has close relationship to them. (Note: It also uses a neural net with dynamically assigned values, but I'm still trying to work that into a simple description 'cause I got groking yet to do.)
ChatGPT "thinks" in word-after-word terms, guided by the prompt, weighted against or with the probability, based on that prompt, of the next word appearing in the sequence. (In its returned, narrative, results.)
Due to how ChatGPT has learned "words" and the phrases that represent rules/syntax, it develops its model, in this example, of spheres of heatmaps in a 3D space.
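The "spheres in a 3D space" picture is essentially what word embeddings are: each word becomes a point in a vector space, and related words sit close together. A minimal sketch with hand-made 3D vectors (the numbers and the three dimensions are invented purely for illustration; real embeddings are learned, with hundreds or thousands of dimensions):

```python
import math

# Hand-picked 3D vectors; the axes loosely read as
# (animal-ness, gesture-ness, flight-ness). Invented for illustration.
vecs = {
    "bird":    (0.9, 0.1, 0.8),
    "feather": (0.8, 0.0, 0.6),
    "insult":  (0.0, 0.9, 0.0),
    "gesture": (0.1, 0.8, 0.1),
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "bird" sits far closer to "feather" than to "insult" in this space.
print(cosine(vecs["bird"], vecs["feather"]))  # high
print(cosine(vecs["bird"], vecs["insult"]))   # low
```

The "heatmap" intuition maps onto exactly this kind of distance measure: nearby vectors are the words most likely to show up in the same contexts.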
ChatGPT is not capable of... "new" in any sense other than guided construction of probabilistically determined results. ("New" is not its intention. That's kind of antithetical in terms of its construction and purpose. In fact, anything "new" in the results may be "new" due to errors... "New" can be useful, as poetry and music and other art directly represent, often produced by error'd human beings. :))
It will have issues with certain things we take for granted in a logical sort of way, but that are very confusing "literally."
"...the bird flipped..."
"... flipped the bird..."
These are two separate things to a human brain. IMO, ChatGPT would have problems with this sort of thing, except for the fact that these are phrases commonly used in certain contexts, which it should have been taught already and trained to use appropriately (probabilistically). (I haven't done any testing. I'm just masticating the idea atm... :))
So, if "...flipped The Bird..." appeared, we might assume some actor made a rude gesture due to the formalized capitalization of "The Bird." ChatGPT should have been "taught" that as well, therefore "The Bird" may have a representation in 3D space of a heatmap that intersects what? Rude gestures, right? Insult, rude, offensive and the like may have spheres of association with other words that are similar in "use" in how sentences are constructed. That "similarity" is determined probabilistically in order of appearance.
"the bird" though, only rarely, probabilistically speaking, shares a word/meaning space with "gesture" or "insult" and much more of a wordspace with "animal" and "feather" and "flight."
IMO - It should be very sensitive, in terms of the different sorts of returns one can get from a prompt, to the actual "context" of the prompt and what a human understands of it. The more human interpretation that is required to identify the context of critical words and phrases in the prompt, the more problems ChatGPT will have with "novel" prompts. (Things that a human can write and describe that are unique contexts - "A duck charged ahead with credit card debt" might qualify? (meh) Something that's more constrained, but circuitous at the same time. "An accredited charged duck charged..."?)
Returning to the OP for Rimworld application and on-topic niceties:
To get something useful from ChatGPT for Rimworld play, and ignoring all the legal and computational hardware necessities, it would have to be trained on what and to do what?
To be of use to the developer in improving the user-experience, it would have to be trained to do that. That's not too far from its intended use in constructing Turing Test-breaking narratives. But, it does imply that the training set be focused on giving the game player a rewarding, gameplay specific, narrative and that Rimworld generate an appropriately weighted prompt for that. (We assume that the player would find this desirable, but not all Rimworld players care about a "narrative" despite the "Tell your own story. ™" title on the box.)
Achieving this would be an interesting thing, btw. Why? Because to do it right, we'd have to comprehend "why" a player enjoys a game and what a player's expected experience is desired to be...
Figure that out and you kind of quantify "wat make gaem fun." Well, at least for an existing game and players that would like this gameplay addition.
@Tomahawk-NYC-
Q: What is important to and rewarding for Rimworld players in terms of a narrative that can be constructed through gameplay?
Answer that and you'll start creating the framework needed to train something like ChatGPT, and to target specific events/data that can be used by Rimworld to create a "prompt" file to guide ChatGPT to produce the desired results. (A simple flat export ain't gonna do it, as ChatGPT won't have learned what it means. (IF it has already been trained on Rimworld data, I could be a little bit off on that.))
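What that "prompt file" might look like, very roughly: take the colony's timeline and flatten it into a guided instruction rather than a raw data dump. The event fields and the prompt template below are entirely invented for illustration; RimWorld's actual save format, and any real integration, would look quite different.

```python
# Hypothetical colony events -- field names invented for illustration.
events = [
    {"day": 12, "kind": "raid",  "detail": "pirate raid repelled, Sam wounded"},
    {"day": 15, "kind": "birth", "detail": "Mira born to Ana and Rex"},
    {"day": 20, "kind": "death", "detail": "Sam died of infection"},
]

def build_prompt(colony_name, events):
    """Turn a timeline into a guided narrative prompt, not a flat export."""
    lines = [f"Day {e['day']}: {e['detail']}" for e in events]
    return (
        f"Write a short, dramatic chronicle of the colony '{colony_name}'. "
        "Weave these events into one coherent story, in order:\n"
        + "\n".join(lines)
    )

print(build_prompt("New Hope", events))
```

The point of the wrapper text is exactly the weighting discussed above: the model needs to be told what kind of narrative is wanted, because the bare event list carries no meaning it has learned.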
It "might" be able to produce an interesting, but not necessarily valuable, interpretation "as it is right now," but only given a human properly constructed the needed prompt. AND, that prompt will likely be biased towards a desired result, which the user may not necessarily want nor find very rewarding.
This goes beyond the normal scope of stuffs. But.. I'm dusting out headspace atm.
Now, a lot of people have pointed out the issues/problems and made many valid points about why it might or might not work. All fair enough. A lot of people are discussing ChatGPT in its current form, but it seems very much "moldable," for lack of a better word, to what you are trying to do. I'm guessing ChatGPT's developers are already working with people on how to better prompt and "guide" the software to their desired ends. If I can get ChatGPT to write a very believable thank-you letter to someone, then throw in some personal extra inputs and get an even better thank-you letter, then I imagine it can work wonders in games like Rimworld, if applied in a directed way. Which I guess would be coded. Maybe that means the ChatGPT sales rep talking with a Rimworld developer about the best way to input prompts from game data as you play along.
One really crucial point here: I think ChatGPT in video games is most useful not as some all-powerful AI changing the world or re-writing Unity engines, but more on the gruntwork level. Spicing up NPC dialogue, history printouts of your colony, that kind of low-level but time-consuming thing. It's almost like the Roomba of software: vacuuming around the house, missing quite a few spots, but still useful overall.
Another question: five years from now, do you think there will or won't be various games using ChatGPT in some way?
But everyone else is apparently gonna love it. Sigh.
Potential problems:
- Complicated to program with a very broad scope of ideas if you want to even randomize the possible backstories.
- I don't even know 'if' ChatGPT allows you to use their program like that and, if they do, would establishing a connection to a single account be good enough?
- AI misinterpreting what you want it to give you, resulting in a lot of time spent figuring out how to word things to generate the most accurate results.
- There is a chance the mod may always feel clunky to the people using it with no way to fix some of the jank due to ChatGPT limitations.
- File size of the random text it generates for your saves could actually be an issue lol.
There's other problems I'm sure. I personally think it could work but the scope of the project might be one of the biggest mods in Rimworld, so if you're a programmer with the motivation and want something challenging I say go for it, I'll check it out.
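On the file-size and API-connection worries in that list: one mitigation a mod could use is caching generated text keyed by a hash of the events it came from, so identical requests never hit the API twice and saves only store text that was actually produced. A rough sketch, with the cache layout and the stub generator invented for illustration:

```python
import hashlib
import json

cache = {}  # in a real mod this would be persisted alongside the save file

def narrative_for(events, generate):
    """Return cached text for this exact event list; generate once otherwise."""
    key = hashlib.sha256(
        json.dumps(events, sort_keys=True).encode()
    ).hexdigest()
    if key not in cache:
        cache[key] = generate(events)  # the one expensive API call
    return cache[key]

# Stub generator standing in for a real API call; counts invocations.
calls = []
def fake_generate(events):
    calls.append(events)
    return f"A tale of {len(events)} events."

evs = [{"day": 1, "detail": "landing"}]
print(narrative_for(evs, fake_generate))  # generated
print(narrative_for(evs, fake_generate))  # served from cache, no second call
```

It doesn't solve the jank or the terms-of-service questions, but it bounds both the API cost and how much generated text ends up in a save.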
I heard that someone put GPT into an Unreal Engine game character and talked to it, and the conversation turned out to be like one with a real person.
https://steamcommunity.com/sharedfiles/filedetails/?id=2960127000&searchtext=rimgpt
I should note this requires API keys, but it's possible.
Mmm, I wonder if I could hook up my own local text-generation software instead, with a custom voice via modification. Or just tie it to art, lmfao.