I am actually more excited for this game than Atomfall at this point lol
Tbh, in terms of wallpapers, item textures, etc., I am not sure anyone would go for NSFW stuff lol.
With the included model (if it is local), it is likely a very censored model, which is one of the reasons I would want to use my own models or APIs. Not for NSFW, but simply because censored models often have issues, since large parts of the training data are missing or locked off (SD3's "woman lying on grass" is a great example of this). Though it could be censorship at the CLIP text encoder level, which would be better... though it would likely be jailbroken within the first week lol.
I wandered off topic... Yes, even if it's on our hardware, they will put a policy in place to prevent the sharing (on their servers) of NSFW content. I doubt, beyond the obligatory uncensor mod, that people will be using this game for NSFW stuff though, so moot issue?
We could run an LLM on another machine to offload the workflow (or in my case, I would use Featherless to run a more powerful LLM than my machine could handle).
Time will tell I guess.
"The Tools models operate on an on-device basis, minimizing environmental impact by reducing CO2 emissions typically associated with large-scale server operations. This design ensures user privacy while supporting sustainable development. "
From the "AI-based Tools Terms and Disclaimer" at https://playinzoi.com/en/clause/ai_tools_terms
Although I would suppose they would just use AI to go through all the logs and flag potential misuses, involving humans only when something is flagged as potentially breaking the rules.