AI2U: With You 'Til The End

Will the Full Game Require Tokens?
I played this on stream and had a lot of fun laughing with my chat, then played it a little off stream and managed to escape in a few different ways before the demo ran out.

I found the website over on itch and saw you could already buy it for $9.99, but you have to buy tokens or use an API. I wanted to know if it will work the same way when it finally comes to Steam?

While I really like the game, having to constantly pay money for time isn't something I'm interested in.
Tommy Wiseau Oct 19, 2024 @ 4:15pm 
Yeah, same as above: I'm not interested in paying $9 for a phone game where I have to continuously spend money. It would be a shame, because I enjoyed the demo.
herrix_1 Oct 20, 2024 @ 5:06am 
I'm curious about this as well. It's what stopped me from purchasing it on Itch.
Tommy Wiseau Oct 20, 2024 @ 5:31pm 
Originally posted by herrix_1:
I'm curious about this as well. It's what stopped me from purchasing it on Itch.
^
This game runs on external AI technology and doesn't come with an AI built in.

You can either use the developer's AI API with the token system, or set up an account with OpenAI, create your own API key, and use that (as it worked in the original Yandere AI Girlfriend Simulator). It isn't hard to do, but that way you need to deposit some money with OpenAI to make it work.

I found that ~$10 lasts me over 2-3 months, whereas with the 10K tokens I received from purchasing it on itch.io, I noticed more than half were already spent after only ~2.5 hours.
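
For anyone wondering what creating your own API key actually involves: under the hood it's just a standard chat-completion call billed to your own key. A minimal sketch in Python (the model name, prompts, and token cap here are placeholders I picked, not what AI2U actually sends):

```python
# Minimal bring-your-own-key sketch (pip install openai).
# Model name and prompts are placeholders, not AI2U's actual setup.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # your own key; usage is billed to you

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model your key can access
    messages=[
        {"role": "system", "content": "You are a yandere NPC. Stay in character."},
        {"role": "user", "content": "Can I step outside for a minute?"},
    ],
    max_tokens=150,  # caps how many tokens each reply can cost
)
print(response.choices[0].message.content)
```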
dracaena.bot  [developer] Oct 23, 2024 @ 12:20pm 
We're happy to report that all three of our NPCs committed heinous crimes against tokens last night. The yanderes got rid of every last token for cutting your dates short with them <3

We have already rolled this out to our itch.io open beta playerbase, and we'll share the good news with our Steam community soon. Expect a similar setup when AI2U is in Early Access!

https://alterstaff.itch.io/ai2u/devlog/730914/leave-nothing-unsaid-no-more-tokens-v052-update
herrix_1 Oct 23, 2024 @ 1:58pm 
That's great news! I might just scratch the "itch" after all.

I'm still not staying in the apartment though.
Night Druid Oct 26, 2024 @ 6:21pm 
Oh, that's really nice of you guys to do that. Though I would've been just as happy with a local AI option, like AI Roguelite added ages ago, I'm sure this won't go unappreciated.

I suppose if you guys need money in the future, you could always release DLC Waifus or something cosmetic, like skins. I know I'd pay to turn Eddie into a "legally distinct" Yamashiro or Chocola via a skin. :3 :kiskit:
Feral Gamer Nov 10, 2024 @ 12:11pm 
Originally posted by Night Druid:
Oh, that's really nice of you guys to do that. Though I would've been just as happy with a local AI option, like AI Roguelite added ages ago, I'm sure this won't go unappreciated.

I suppose if you guys need money in the future, you could always release DLC Waifus or something cosmetic, like skins. I know I'd pay to turn Eddie into a "legally distinct" Yamashiro or Chocola via a skin. :3 :kiskit:

I fully support a local model, and I would use it exclusively despite everything, but be aware it would come with two major downsides:

1) Looooong response times (depending on your PC specs)
2) Diminished quality.

Some people have complained about the quality of the responses, but I find them pretty good considering how fast the response times are. A local setup would have to use a weaker model that can actually run on your PC, so you would get more janky and weird responses.
v1ckxy Nov 14, 2024 @ 4:17pm 
Originally posted by Feral Gamer:
Originally posted by Night Druid:
Oh, that's really nice of you guys to do that. Though I would've been just as happy with a local AI option, like AI Roguelite added ages ago, I'm sure this won't go unappreciated.

I suppose if you guys need money in the future, you could always release DLC Waifus or something cosmetic, like skins. I know I'd pay to turn Eddie into a "legally distinct" Yamashiro or Chocola via a skin. :3 :kiskit:

I fully support a local model, and I would use it exclusively despite everything, but be aware it would come with two major downsides:

1) Looooong response times (depending on your PC specs)
2) Diminished quality.

Some people have complained about the quality of the responses, but I find them pretty good considering how fast the response times are. A local setup would have to use a weaker model that can actually run on your PC, so you would get more janky and weird responses.
1) Unless you've got a craptastic GPU, llama.cpp + GGUF inference will be FAST, even on an old 8GB Pascal card.

2) Quality: any 3B/7B/8B SOTA model (Qwen, Llama 3, Hermes) would be good enough in English for a game. ChatGPT and the proprietary LLMs are just plain... bland for game usage / RP. Too much censorship in place.

A 3B model generates at around 12 t/s on a typical handheld (ROG Ally, Legion Go).
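
And if anyone doubts how little code local inference takes these days, here's a rough llama-cpp-python sketch (the GGUF path, prompts, and settings are placeholders I made up, not anything the devs ship):

```python
# Rough local-inference sketch (pip install llama-cpp-python).
# The GGUF path is a placeholder for whatever model you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU if they fit in VRAM
    n_ctx=4096,       # context window size
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Stay in character as the NPC."},
        {"role": "user", "content": "What are you cooking tonight?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```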

Anyway, considering this "game" is made by an Asian company, some kind of cash grab and data collection is to be expected.
Last edited by v1ckxy; Nov 14, 2024 @ 4:21pm
Feral Gamer Nov 16, 2024 @ 12:17pm 
Originally posted by v1ckxy:
Originally posted by Feral Gamer:

I fully support a local model, and I would use it exclusively despite everything, but be aware it would come with two major downsides:

1) Looooong response times (depending on your PC specs)
2) Diminished quality.

Some people have complained about the quality of the responses, but I find them pretty good considering how fast the response times are. A local setup would have to use a weaker model that can actually run on your PC, so you would get more janky and weird responses.
1) Unless you've got a craptastic GPU, llama.cpp + GGUF inference will be FAST, even on an old 8GB Pascal card.

2) Quality: any 3B/7B/8B SOTA model (Qwen, Llama 3, Hermes) would be good enough in English for a game. ChatGPT and the proprietary LLMs are just plain... bland for game usage / RP. Too much censorship in place.

A 3B model generates at around 12 t/s on a typical handheld (ROG Ally, Legion Go).

Anyway, considering this "game" is made by an Asian company, some kind of cash grab and data collection is to be expected.


I've tried those models; I have the Llama 3 8B 4-bit model. Yes, it is fast, but it also doesn't know how to handle conversations. Maybe the prompts can mitigate that, but I don't know if they can outright solve the issue. These models have a tendency to ramble on about random stuff until they hit their response limit.

Also, the prompts will be built into the game, so they would have to be generic, and generic prompts return wildly different results depending on which model you're using. Even though the requests would be simple, I still feel like you would end up getting janky gibberish from time to time, not the relatively smooth responses we currently get from their built-in server model.

Again, I'm all for the local model; I just don't think it would provide the same quality. If the developers have a way to solve this, great. I'd be happy to be wrong about this.
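
For what it's worth, the ramble-to-the-limit behavior can usually be reined in with a hard token cap and stop strings, not just the prompt. A sketch of the idea, with values that are guesses on my part rather than tuned settings:

```python
# Sketch: curbing a rambly local model with a token cap and stop strings.
# All values (and the model path) are illustrative guesses, not tuned settings.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Reply in one or two sentences. Never narrate actions."},
        {"role": "user", "content": "Where did you hide the front-door key?"},
    ],
    max_tokens=96,             # hard cap: it can't ramble to the response limit
    stop=["\nUser:", "\n\n"],  # bail out if it starts writing the next turn
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```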
Last edited by Feral Gamer; Nov 16, 2024 @ 12:18pm
v1ckxy Nov 18, 2024 @ 7:50am 
Yeah, sure.
So you don't know how to properly configure text generation, or how things like the context window work. Great.
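
To be concrete, these are the kinds of knobs I'm talking about (the model path and all values here are illustrative, nothing more):

```python
# Illustrative generation settings only; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-3b-instruct.Q4_K_M.gguf",  # placeholder GGUF
    n_ctx=8192,  # context window: how much conversation history the model keeps
)

out = llm(
    "You are the NPC. Reply briefly, in character.\nPlayer: Hello?\nNPC:",
    max_tokens=80,
    temperature=0.6,      # lower = less random drift off-topic
    repeat_penalty=1.15,  # discourages looping and rambling
    stop=["\nPlayer:"],   # stop before it writes the player's next line
)
print(out["choices"][0]["text"])
```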
Last edited by v1ckxy; Nov 18, 2024 @ 7:54am