It would be interesting if you could find a way to use local models properly. Another user wrote here about changing the seed to get different results for the same user responses.
It's small and fast, gives pretty good responses, and, as the cherry on top, it's uncensored (important for those adventures where you don't want the narrator constantly reminding you that you're doing something immoral). But it does have a catch: the context window is small, about 2k. For me specifically that's a limitation, since I like to put a lot of information in the status box (weather, time, day, and year), but for many others who don't have such a cluttered status box, this won't be a problem.
The model is called Dolphin 2.6 Phi-2.
I think the model is perfect for all of us who don't have a NASA PC, you know, for those rainy days or when you're traveling somewhere without internet and you want a story in Dreamio.
Forgive my ignorance, but I'm not entirely clear on what you mean by seeds.
I've noticed, for example, that if you activate the reasoning option in Dreamio, the model will give you varied, different responses even if you tell it the same thing. For example, if you say "Hello, how are you?", it might respond "Hello, I'm fine" one time and "Good morning, I'm great" another.
Although, of course, that same option warns you that it may work better or worse depending on the model you use.
I've also noticed that versions of the same model respond differently, as is the case with Gemma 2 and Gemma 3 (if you're wondering how I noticed this, it's because I always start the adventure by specifying who I am and what I look like).
And obviously, different models speak differently (Captain obvious, reaffirming the obvious).
The instructions you give the narrator also matter a great deal. For example, at one point I wanted to play an immortal protagonist, but the default narrator kept killing me anyway, even though according to the story I shouldn't be able to die under any circumstances. I managed to make my character immortal by modifying the narrator's instructions. If you feel the narrator is repeating himself too much, you can always add a couple of instructions indicating that his answers should be more creative.
And I almost forgot: with a very large context window, the narrator tends to repeat himself and pay less attention to us (although I don't mind and I have it at 4096 xDD; Dreamio recommends 2048, by the way).
Here's what ChatGPT says about "seeds":
And here's what ChatGPT says about "deep reasoning"; it says there is a random factor inside the reasoning process:
I used it with image generation and got different kinds of images. For example, with low numbers I got a "flat", almost 2D image, while with higher numbers I got something closer to 3D; the version I used suggested entering a higher number for higher image quality.
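For anyone still unsure what a seed actually does: it just initializes the random number generator the model samples from. None of this is Dreamio's actual code, just a toy Python sketch of the idea:

```python
import random

REPLIES = ["Hello, I'm fine.", "Good morning, I'm great.", "Hey! Doing well."]

def sample_reply(seed):
    # Toy "model": picks one of several greetings at random.
    # The seed initializes the generator, so the same seed always
    # produces the same pick, while a different seed may not.
    rng = random.Random(seed)
    return rng.choice(REPLIES)

# Same prompt, same seed -> identical reply on every run.
# Same prompt, new seed  -> potentially a different reply.
print(sample_reply(42))
print(sample_reply(42))
print(sample_reply(7))
```

That's why changing the seed gives you different results for the exact same user response: the model's dice simply land differently.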
https://steamcommunity.com/app/2795060/discussions/0/596269063005585099/
The truth is, now that you've explained how temperature works in AIs, it wouldn't be bad to have it, but that depends on Oleg.
If I understood correctly, we don't even have the option if we use local models xD
Oleg please add it
Temperature is set higher for summary generation and status generation only, in order to combat repetition.
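For anyone curious what temperature actually changes under the hood: it divides the model's logits before the softmax, so higher values flatten the probability distribution over next tokens. This is the standard trick, not Dreamio's own code; a minimal Python sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/T before softmax:
    # T < 1 sharpens the distribution (safer, more repetitive picks),
    # T > 1 flattens it (more varied, "creative" picks).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)
# The top token dominates at low temperature and loses ground at high
# temperature, which is why raising it fights repetition.
```

So raising the temperature for summaries makes sense: it keeps the model from picking the same high-probability phrasing every time.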
In another topic someone wrote about using ComfyUI to set temperature for local models.
I knew it was used mainly for image generation.
Even so, I think it's a good idea to add the option and let whoever wants to use it.
You could always have it disabled by default with a warning; surely more than one person will enjoy tweaking the temperature.
Even if it is in a very, very distant update, I would like the option to eventually appear.
Wouldn't adding the typical Dreamio message recommending that you keep the temperature at (x) solve the problem of people modifying it and not knowing how to set it back?
You already have one of those pop-up messages in the context window configuration.
I started testing the models again and noticed something curious. Gemma 3 12B continues to give strange responses, but oh god, Gemma 3 27B gives excellent results, although it sometimes gets stuck in a strange loop, generating tokens forever; I guess that's mostly from running a model of that caliber on my toaster. I also tried Gemma 3 1B and 4B, but as expected they're not a good option: they're fast, but their responses don't make sense and they quickly forget the plot.
For now, for me, things look like this:
1.- Gemma 3 27B
2.- Gemma 2 27B
3.- Gemma 2 9B
4.- Gemma 3 12B
I still don't understand why Gemma 3 12B is worse than Gemma 2 9B :/
By the way, thanks Oleg for the temperature option. It's really fun to increase it and watch the answers become more creative, or even see the AI rebel and found Skynet.