Great news for everyone who had issues with LM Studio! 🎉
After installing the model and dedicating quite a few hours to research and testing, I finally got it working with my mod.
🔧 I’ve added a new setting where you can select LM Studio as your local language model. Once selected, EchoColony will adapt automatically, and your colonists will respond properly 💬.
⚠️ Important tip:
I highly recommend setting the token limit to at least 5500, and ideally around 6000 if possible.
This ensures the colonist has enough context to reply coherently, based on everything they know and their surroundings.
👉 Let me know if it’s working for you now — or if there’s anything else I should fix!
And here’s the exact endpoint/model I used, just in case:
📸 https://imgur.com/4J6N0TY
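In case the screenshot doesn't load: the Local Model Endpoint I use is http://localhost:1234/v1/chat/completions, which is LM Studio's default local server. Purely as a rough sketch (not something built into the mod), a few lines like these can confirm the server is running and show the model name LM Studio reports, assuming the default port and its OpenAI-compatible /v1/models route:

```python
# Hypothetical connectivity check for LM Studio's local server (not part of the mod).
# Assumes LM Studio's OpenAI-compatible server is running on the default port 1234.
import json
import urllib.request

MODELS_URL = "http://localhost:1234/v1/models"  # the mod itself points at /v1/chat/completions

with urllib.request.urlopen(MODELS_URL, timeout=10) as resp:
    data = json.load(resp)

# Print the identifiers LM Studio reports for the loaded model(s);
# one of these is what goes into the mod's "Name model" field.
for model in data.get("data", []):
    print(model.get("id"))
```

If that prints a model name, the endpoint shown in the settings screenshot should work as-is.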
Thanks for your support! 🫶
My settings are the same as in the picture you uploaded.
I downloaded Meta-Llama-3.1-8B-Instruct-GGUF and applied it right away.
Is there anything I need to change in the settings??
Thank you for all your work, this is mind blowing!
My colonists' speech style has changed a lot, and most of all, their Korean expression has gotten worse.
Is there a good solution??
If you have a good model or setting method, please let me know.
Not sure, but..
I think you also need to set the "system prompt" of the model.
How should I write it so that it reacts like Gemini?
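Not an official answer, but purely as an illustration of the kind of system prompt people set in LM Studio's model settings (the exact wording is up to you, and keep in mind the mod also builds its own prompt for the colonist, so this only nudges the model's overall behavior):

```
You are a RimWorld colonist speaking to the player.
Always answer in the same language the player uses (Korean if they write in Korean).
Stay in character and keep replies short and conversational, a few sentences at most.
Base what you say on your mood, job, relationships, and surroundings.
```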
While I was calibrating the mod for local models, I ended up removing too much context from the colonist's prompt. That definitely affected the speech style and language quality. My bad!
I've just improved the prompt structure for LM Studio models — the colonist should now be much more aware of their situation, emotions, tasks, and surroundings, which should help restore a more natural conversation style.
If this version works better, we can continue refining it together! I plan to gradually reintroduce even more context while staying under each model’s token limits.
It's a bit tricky for me to get the balance right since I'm not super familiar with LM Studio or other local models yet — but I'm fully committed to making this work great for everyone, including Korean speakers.
It kinda works better than standard Llama, but it maybe needs some tuning.
Sir, thank you so much for this mod — it's sooo awesome!
I just tested it with neuraldaredevil-8b-abliterated in LM Studio, and it works like a charm. I don’t even know what else to add — it’s so good!
https://i.imgur.com/2IDw6b0.png
neuraldaredevil-8b-abliterated works well. Thanks to everyone for their comments on the models; I was installing Kobold but had no idea how it worked.
<---- not a programmer. I couldn't program my way out of a paper bag, nor the bag itself. xD
@Gerik Uylerk
Thank you for adding LM Studio to this mod. I wanted something mostly offline. The cloud works well, but it gets very heavy-handed when you try to roleplay. One day it is fine, the next it is a no-no.
Edit 1: Tried another version of lumimaid-magnum-v4-12. Seemed to work, then crashed.
neuraldaredevil-8b-abliterated seems to still work well. Responses seem wordier than the Google Gemini API. I think you can do more with it, however.
Keep up your great work with this mod though!
The mod dev posted an image with this info, but for ease of access:
For the first text box, labeled Local Model Endpoint, enter: http://localhost:1234/v1/chat/completions
Name model: (use whatever one you want)
Once you find the model you want, put its name in the Name model text box.
On the far left-hand side there are icons: the speech bubble (Chats), Dev mode, My Models, then the search icon. When Chats is highlighted, you can see the settings icon to the left of your loaded model. The first thing you see is Context Length; this is what you want to increase. You will need to load the model and reset this every time you restart LM Studio. In Dev mode you will see whatever model is ready; you can use the copy link to copy the LLM that is already loaded and paste that into the Name model box.
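If you want to sanity-check the endpoint outside of RimWorld first, a quick script like the one below should get a reply back. This is just my own sketch, not anything from the mod; it assumes LM Studio's OpenAI-compatible server is on the default port and that the model name is whatever LM Studio shows for your loaded model (swap in yours).

```python
# Hypothetical manual test of the Local Model Endpoint (not the mod's actual code).
# Assumes LM Studio's server is running on the default port with a model loaded;
# the model name below is just an example, use the one LM Studio shows for your load.
import json
import urllib.request

ENDPOINT = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "neuraldaredevil-8b-abliterated",  # replace with your loaded model's name
    "messages": [
        {"role": "system", "content": "You are a RimWorld colonist. Keep replies short."},
        {"role": "user", "content": "How are you feeling about the colony today?"},
    ],
    "max_tokens": 200,
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    reply = json.load(resp)

# OpenAI-style servers return the text under choices[0].message.content.
print(reply["choices"][0]["message"]["content"])
```

If that times out or errors, check that the model is actually loaded and that the local server is running in LM Studio.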
I use neuraldaredevil-8b-abliterated. The AI can be directed in-game, but it seems your FIRST interactions mold the pawn for the rest of the game. Some pawns end up almost poetic and must be guided toward what you want.
To everyone else: if you have a fairly strong/robust PC setup you can run this. CPU isn't really used much, nor is RAM (~2-3 GB). TBH, for things of this nature I do recommend 32 GB of RAM (2x16) for like 50 bucks. The GPU is heavily used, more so after you have been RPing heavily for days (real-life days, not in-game); you will hear your GPU fans, though this could also be from switching from the Gemini cloud to LM Studio. There are other models that are not as powerful.
@Gerik Uylerk
I usually never go this in-depth with any guide. Thank you SOOOOOO much for including LM Studio; I could not get that working for the life of me. And please feel free to use this miniguide, or whatever you call it, if it helps anyone else out.
You are a wizard programmer, I am not. I tried using KoboldAI but could not get it to work; LM Studio is much easier to set up. I have noticed that when the AI is active, RimWorld's RAM usage almost doubles. I understand why.
As a note on your most recent update: it erased my past conversations with my pawns. I just wanted to let you know and post a warning in case people update mid-game. I know the risk is high, but I wanted to check out the update.
I usually add comments to other mods but really do not add much to discussions; your mod shows real progress, though.
As a suggestion, you might add another Discussions thread with PC specs/general specs for anyone who wishes to use LM Studio or KoboldAI. I know mileage will vary between PCs, etc.
Edit - I got it working! It just takes a few minutes for the colonist to respond. Although my colonist mentions Bella and April. Bella was a dino that died a long time ago and was bonded to Dove, so that's a little confusing xD And lord, they give you a VERY wordy response.
https://gyazo.com/b0fbc780285f8831c9d5d4b47b986282 https://gyazo.com/d53242b5cd94663f8dc8869f73b62c4a
Hopefully, if you chat with a colonist long enough, they'll actually start remembering things. I ended up using neuraldaredevil-8b-abliterated as well, and I guess I'll test it some more to figure out if there's a way to get a faster reply. LM Studio seems good so far; Gemini is likely the best right now in terms of how the colonists react and not being overly wordy.
Edit - an hour later it stopped working again, giving the same error xD
If you mean Gemini, give it some time instead of non-stop chatting, etc. I did the same thing and got the error. Since you are probably using the free version, there is a limit, and if you think about it, it is probably later in the day, so more people are using Gemini as well.