You can either use the developer's AI API with the token system, or set up your own OpenAI account, create your own API key, and use that (as it worked in the original Yandere AI Girlfriend Simulator). It isn't hard to do, but this way you need to deposit some money to make it work.
I found that ~$10 lasts me 2-3 months, whereas the 10K tokens I received from purchasing it on itch.io lasted me about 2.5 hours; that's when I noticed more than half of my tokens were already spent.
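For anyone weighing the two options, the rough burn rates implied by those numbers work out like this (a back-of-the-envelope sketch; the dollar figures and token counts are just what's quoted in this thread, not official pricing):

```python
# Rough cost comparison based on the figures quoted above (thread anecdotes, not official rates).
tokens_purchased = 10_000        # itch.io token bundle mentioned above
hours_lasted = 2.5               # how long it reportedly lasted
tokens_per_hour = tokens_purchased / hours_lasted

own_key_budget = 10.0            # USD deposited into your own OpenAI account
months_lasted = 2.5              # midpoint of the "2-3 months" estimate
dollars_per_month = own_key_budget / months_lasted

print(f"Token system: ~{tokens_per_hour:.0f} tokens/hour")
print(f"Own API key:  ~${dollars_per_month:.2f}/month")
```

So the token bundle burns around 4,000 tokens an hour of play, while the own-key route costs a few dollars a month at the same usage.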
We have updated our current itch.io open-beta playerbase and will share the good news with our Steam community soon. Expect a similar setup when AI2U is in Early Access!
https://alterstaff.itch.io/ai2u/devlog/730914/leave-nothing-unsaid-no-more-tokens-v052-update
I'm still not staying in the apartment though.
I suppose if you guys need money in the future, you could always release DLC Waifus or something cosmetic, like skins. I know I'd pay to turn Eddie into a "legally distinct" Yamashiro or Chocola via a skin. :3
I fully support a local model, and I would use it exclusively despite the drawbacks, but be aware it would come with two major downsides:
1) Looooong response times (depending on your PC specs)
2) Diminished quality.
Some people have complained about the quality of the responses, but I find them pretty good considering how fast the response times are. To run on a local PC, they would have to use a weaker model, so you would face more janky and weird responses.
2) Quality: any 3B/7B/8B SOTA model (Qwen, Llama 3, Hermes) would be good enough in English for a game. ChatGPT and proprietary LLMs for game usage / RP are just plain... bland. Too much censorship in place.
A 3B model generates at around 12 t/s on a typical handheld (ROG Ally, Legion Go).
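At that speed you can estimate the wait per reply; a quick sketch (the 12 t/s is the figure above, and the reply length is my own assumption for a typical chat turn):

```python
# Estimated response latency on a handheld, using the ~12 tokens/sec figure above.
tokens_per_second = 12           # reported 3B-model speed on a ROG Ally / Legion Go
reply_length_tokens = 150        # assumed length of a typical chat reply

wait_seconds = reply_length_tokens / tokens_per_second
print(f"~{wait_seconds:.1f} s per reply")  # 150 / 12 = 12.5 s
```

So even at that speed, a normal-length reply is over ten seconds of waiting unless the game streams tokens as they generate.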
Anyway, considering this "game" is made by an Asian company, some kind of cash grab and data collection is expected.
I've tried those models. I have the Llama 3 8B-parameter, 4-bit quantized model. Yes, it is fast, but it also doesn't know how to handle conversations. Maybe prompts can mitigate that, but I don't know if they can outright solve the issue. These models have a tendency to ramble on about random stuff until they hit their response limit.

Also, the prompts will be built into the game, so they would have to be generic, and generic prompts would return wildly different results depending on which model you were using. Even though the requests would be simple, I still feel like you would end up getting some janky gibberish from time to time, not the relatively smooth responses that we currently get with their built-in server model.

Again, I'm all for a local model, but I just don't think it would provide the same quality. If the developers have a way to solve this, great. I'd be happy to be wrong about this.
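For what it's worth, some of the rambling can be curbed with generation settings rather than prompts alone. A minimal sketch of the idea (the parameter names are typical llama.cpp-style sampler settings, and the truncation helper is my own illustration, not anything from the game):

```python
# Typical local-model generation settings that limit rambling (llama.cpp-style names; assumption).
generation_config = {
    "max_tokens": 200,            # hard cap on reply length
    "temperature": 0.7,           # lower = less random drift
    "repeat_penalty": 1.1,        # discourages looping on the same phrases
    "stop": ["\nUser:", "\n###"], # cut off when the model starts a new turn on its own
}

def truncate_at_stop(text: str, stops: list[str]) -> str:
    """Cut the reply at the first stop sequence, mimicking what a sampler's stop list does."""
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text.rstrip()

reply = "Sure, I'm here!\nUser: wait, the model is talking to itself now"
print(truncate_at_stop(reply, generation_config["stop"]))
```

Stop sequences and a token cap don't make a small model smarter, but they do stop it from impersonating the player or running until the limit.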
So you don't know how to properly configure model generation, nor how things like context work. Great.