Try again and let me know if you can now see your colonists' replies.
https://i.imgur.com/EnnIMJe.png
I just need to figure out how to get it to output things properly. I just had it give quite a few lines of dialogue between itself and me, and it keeps repeating the 'stay in character' instructions as the first line of the output:
https://i.imgur.com/0qMmfbp.png
For context, I only gave two prompts there: one at the top, and one below. The parts with no line break between them were all from the AI.
Is there any way you could provide a new section in the config where we can specify our own regex which will be used to grab the output we want?
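To illustrate what I mean, something like this is all I'd need; the pattern and the sample text here are just made up for the example, not anything the mod actually uses:

```python
import re

# Made-up example of the kind of raw output I'm seeing: the model echoes the
# instructions and then writes extra turns for both sides.
RAW_OUTPUT = (
    "Stay in character and reply as the colonist.\n"
    "Edmund: I've been hauling steel all morning, my back is killing me.\n"
    "Brendan: That sounds rough.\n"
)

# A user-supplied pattern from the config could keep only the colonist's
# first reply. This one is just a guess at what such an entry might look like.
REPLY_PATTERN = r"Edmund:\s*(.+)"

match = re.search(REPLY_PATTERN, RAW_OUTPUT)
reply = match.group(1).strip() if match else RAW_OUTPUT.strip()
print(reply)  # -> "I've been hauling steel all morning, my back is killing me."
```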
It seems the mod always ends up with a token limit of 512, though since I don't see one specified with the input at all, it's possible that's just Kobold's default. Kobold's own Lite interface does include a context size and output token count with its requests.
Maybe we could fully customize it at some point? Or at least be able to stick extra data in there?
It's also possible things are getting cut off, since the prompt just ends with '...' in the middle of one of the events, and I don't see the question I gave in the console at all.
I also had the AI refer to Brendan as another character, possibly influenced by the events that mention Brendan by name. Maybe you could cleverly replace 'Brendan' with 'me' or 'I'?
For example, there's an event like this listed:
'Brendan made a comment about sports to Edmund.'
If it could be replaced before sending it to the AI with 'I made a comment about sports to Edmund.', that may improve things.
Another type, 'Brendan and Edmund chatted about deep space mining.', would be better replaced with 'Me and Edmund chatted about deep space mining.'
Or, even better, 'Edmund and I chatted about deep space mining.'
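Roughly this kind of substitution is what I'm picturing; the function and the hard-coded player name here are purely for illustration:

```python
import re

# Assumed player name, taken from the examples above.
PLAYER = "Brendan"

def personalise(event: str, player: str = PLAYER) -> str:
    # "Brendan and Edmund chatted ..." -> "Edmund and I chatted ..."
    event = re.sub(rf"\b{player} and (\w+)\b", r"\1 and I", event)
    # Any other mention of the player name -> "I" (a smarter version could
    # pick "me" when the name is the object of the sentence).
    event = re.sub(rf"\b{player}\b", "I", event)
    return event

print(personalise("Brendan made a comment about sports to Edmund."))
# -> "I made a comment about sports to Edmund."
print(personalise("Brendan and Edmund chatted about deep space mining."))
# -> "Edmund and I chatted about deep space mining."
```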
It seems that since my mod wasn’t explicitly setting a token limit, Kobold was defaulting to around 512 tokens, which is way too short for a colonist to respond properly.
I also noticed that the model was taking the prompt very literally, often trying to write a story instead of having a natural conversation.
I’ve made several improvements to how instructions are sent and how responses are handled, to make things much clearer for Kobold. On top of that, I’ve increased the token limit to 4096. So far, this has been working well on my end.
But if needed, I can also add a new setting that lets players customize the token count to their preference.
Thanks again for your help—and please let me know if the responses are looking better now!
Also, don’t forget to set Kobold as your model provider in the mod’s configuration!
https://imgur.com/a/uYjMI64
This is from the console when giving a prompt through the built-in Kobold WebUI:
https://pastebin.com/ZGt1RTee
I did also make sure that Kobold is the provider set in the mod settings.
I’ve replaced "max_tokens" with "max_length" to see if that works better now.
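For reference, this is roughly the shape of request I'm aiming for against KoboldCpp's /api/v1/generate endpoint, sketched in Python just to show the fields; the prompt text and the numbers are placeholders, not the mod's actual code:

```python
import json
import urllib.request

# Placeholder request body; "max_length" is the output-token field KoboldCpp's
# native API reads, "max_context_length" is the context window made available.
payload = {
    "prompt": "You are Edmund, a colonist. Stay in character and answer in first person.",
    "max_length": 4096,
    "max_context_length": 16384,
    "temperature": 0.7,
}

# Assumes a default local KoboldCpp install on port 5001.
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
    # KoboldCpp returns {"results": [{"text": ...}]}
    print(result["results"][0]["text"])
```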
Sorry for the inconvenience, and thank you for your patience.
But it seems like the prompts are far less detailed now?
The previous one had all sorts of information: telling the AI that they're a colonist, aware of their body, and to stay in character, plus details about recent events, relationships, character traits, even the personality from 1,2,3 Personalities.
Now it seems to just be their name, age, location, activity, some numbers for their health and mood, their inventory, and their top skill.
Before, I was getting pretty good responses when staying within the token limit, but here the AI didn't even try to stay in character; it just imagined a story from the short prompt it was given.
This has happened every time I've tried it.
To avoid hitting the token limit, I had removed some information from the final prompt, but I think I cut out too much. I've now added more context about the colonist back into the prompt.
Could you show me the latest console output you're getting with this update?
If the token usage is still low, I’ll gradually add more detail to the colonist’s prompt.
It’s a bit tricky for me to calibrate this from my side since I’m not too familiar with local models yet, but I’m determined to get this working properly for everyone.
Both the input and output token limits can be customized in Kobold.
I had my input set to 16k and output at 4k. When I manually passed in one of the original style of prompts, I got really good results.
If it's a problem, though, perhaps you could add a setting for how detailed the prompt should be, or checkboxes for which types of information get passed along? That way you could enable or disable the allied settlements and names, the recent events, or whatever else is needed to trim it down if necessary.
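Here's a rough sketch of the checkbox idea, just to show the shape of it; every setting name, colonist field, and value in here is made up:

```python
# Hypothetical toggles a mod settings screen could expose.
PROMPT_SECTIONS = {
    "traits_personality": True,
    "relationships": True,
    "recent_events": True,
    "allied_settlements": False,  # switched off here to save tokens
}

def build_prompt(colonist: dict, settings: dict = PROMPT_SECTIONS) -> str:
    # Core identity always goes in; the rest only if its toggle is on.
    parts = [f"You are {colonist['name']}, a {colonist['age']}-year-old colonist."]
    if settings.get("traits_personality"):
        parts.append("Traits: " + ", ".join(colonist["traits"]))
    if settings.get("relationships"):
        parts.append("Relationships: " + "; ".join(colonist["relationships"]))
    if settings.get("recent_events"):
        parts.append("Recent events: " + " ".join(colonist["events"]))
    if settings.get("allied_settlements"):
        parts.append("Allied settlements: " + ", ".join(colonist["allies"]))
    parts.append("Stay in character and reply in first person.")
    return "\n".join(parts)

example = {
    "name": "Edmund",
    "age": 34,
    "traits": ["Industrious", "Sanguine"],
    "relationships": ["Friend of Brendan"],
    "events": ["Edmund and I chatted about deep space mining."],
    "allies": ["Hilltop Union"],
}
print(build_prompt(example))
```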