AI Roguelite

kkk Jan 31 @ 2:33am
Does any game company consider integrating DeepSeek R1 into gaming environments?
I wish you guys would seize the chance.
Showing 1-8 of 8 comments
Glorfunk Jan 31 @ 11:06am 
i ran DeepSeek locally on Ollama; tbh it's not really at the level to be used for this type of game yet. Someone needs to train it for this type of thing first
agrimes Jan 31 @ 2:25pm 
There is some hope for the DeepSeek-distilled Qwen, but the issue is that it burns through its token allotment doing its thinking and doesn't produce output. The max_tokens limit must be removed so it can take the time it needs to come up with a good response.
Last edited by agrimes; Jan 31 @ 2:25pm
In the latest beta there is better max_tokens override support for the OpenAI/Oobabooga API in the config files under "model-params" (details in the README), and HTML tags are automatically removed from responses.
Addendum: It just occurred to me that the text surrounded by <think> tags isn't actually removed yet. I'll add that functionality in a later update soon.
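For anyone wanting to filter this out themselves in the meantime, a minimal sketch of stripping the reasoning segment from a response could look like the following. This is not the game's actual implementation (which isn't published here); it just assumes the reasoning is wrapped in literal <think>...</think> tags, as DeepSeek R1-style models emit.

```python
import re

def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> reasoning blocks (tags included)
    from a model response, leaving only the final answer text."""
    # DOTALL so reasoning can span multiple lines; non-greedy so
    # multiple <think> blocks in one response are each removed.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()

reply = "<think>\nThe player attacks the goblin...\n</think>\nThe goblin staggers back!"
print(strip_think_tags(reply))  # → The goblin staggers back!
```

Note this assumes the model always closes the tag; a truncated response that hits the token limit mid-thought may leave an unclosed <think>, which this pattern would not remove.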
agrimes Feb 23 @ 3:42pm 
LM Studio has been updated and now separates out the <think> content into a different segment of the reply packet. The issue of context length remains.
Thanks. The configurable context length should be working in the beta, as detailed in my other comment.
agrimes Feb 24 @ 9:45pm 
You want me to add something to some file in model-params? I'm looking at it, but I need an example or a detailed guide. =\
AI Roguelite: AI-generated RPG  [developer] Feb 24 @ 11:45pm 
Originally posted by agrimes:
You want me to add something to some file in model-params? I'm looking at it but I need an example or detailed guide. =\

Go to model-params, openai-common-params.txt, and try replacing the json with:

{
    "max_tokens": 2000
}

(or whatever the desired number may be)

Be aware that changes to the default folder could get overwritten in updates, so best practice is to create a mod and use a modded folder.
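The steps above amount to: read a plain-JSON override file from model-params and let its keys win over the defaults sent with each request. A rough sketch of that merge behavior, assuming hypothetical function names and default values (the game's actual loading logic isn't shown in this thread; only the file `openai-common-params.txt` and the `max_tokens` key come from the developer's post):

```python
import json
from pathlib import Path

def load_model_params(path: str) -> dict:
    """Read a model-params override file (plain JSON) into a dict."""
    return json.loads(Path(path).read_text())

def build_request(prompt: str, overrides: dict) -> dict:
    """Merge user overrides into default request parameters.
    Keys from the override file win over the defaults."""
    defaults = {"max_tokens": 300, "temperature": 0.8}  # illustrative defaults only
    return {"prompt": prompt, **defaults, **overrides}

# e.g. with openai-common-params.txt containing {"max_tokens": 2000},
# build_request("You enter the cave.", load_model_params("openai-common-params.txt"))
# would send max_tokens=2000 instead of the default.
```

The dict-unpacking order (`**defaults, **overrides`) is what makes the file's values take precedence, matching the described behavior of replacing the JSON to raise max_tokens.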