Hey everyone, I fixed the issue with the VPet plugin connecting to the API! 🎉
1. **Install Ollama WebUI** using your preferred method, or you can try the script shared by the original guide.
2. After that, **generate your API key** as required.
3. Next, **download my fixed OpenAPI plugin** (link provided below).
4. **Extract the plugin folder** and drop it into your **VPet mod folder**. For example, place it here:
`D:\Games\SteamLibrary\steamapps\common\VPet\mod\1999_OpenAi\plugin`
5. Once done, **restart VPet**, and you should be good to go! The plugin will now connect properly to the API.
Let me know if you encounter any issues!
{LINK REMOVED}https://www.mediafire.com/file/3fpqia5tu91m2uc/1999_OpenAi.zip/file
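If it helps, steps 3–4 boil down to unzipping the archive into the mod folder. Here is a minimal sketch of that (the paths are just examples from the post above; point them at your own download location and Steam library):

```python
import zipfile
from pathlib import Path

# Example paths -- adjust to where you downloaded the zip
# and where your own Steam library lives.
zip_path = Path("1999_OpenAi.zip")
mod_dir = Path(r"D:\Games\SteamLibrary\steamapps\common\VPet\mod")

if zip_path.exists():
    with zipfile.ZipFile(zip_path) as zf:
        # Extracts the archive so it ends up as mod\1999_OpenAi\plugin\...
        zf.extractall(mod_dir)
    print("extracted")
else:
    print("download the zip first")
```

Doing it by hand with your file manager works just as well; the only thing that matters is that the extracted folder ends up inside VPet's `mod` directory.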
ENG: Choose the one that's on the list and write in my profile, I will answer the same!
💜+Rep Clutch King 👑
💜+Rep 300 iq 🧠
💜+Rep ak 47 god👻
💜+Rep SECOND S1MPLE😎
💜+Rep relax teammate🤤
💜+Rep Killing Machine 😈
💜+Rep AWP GOD 💢
💜+Rep kind person💯
💜+Rep ONE TAP MACHINE 💢
💜+Rep nice profile 💜
💜+Rep add me pls😇
💜+Rep very nice and non-toxic player😈
💜+Rep AYYYY LMAO
💜+Rep nice flicks👽
💜+Rep king deagle💥
💜+Rep best👹
💜+Rep killer👺
💜+Rep Good player 💜
💜+Rep Amazing Tactics 👌
💜+Rep Top Player 🔝
Help, I don't know how to do this.
Also, mine says this:
Warning: Loaded default instruction-following template for model.
Warning: This model's maximum context length is 2048 tokens. However, your messages resulted in over 1346 tokens and max_tokens is 2048.
Output generated in 46.23 seconds (1.90 tokens/s, 88 tokens, context 1346, seed 981057503)
I'm also not sure what a token is, since this is my first time tweaking this AI.
I tried the first model I could find, "TheBloke/Llama-2-7b-Chat-GPTQ",
and I tried changing the model loader to ExLlama, but I didn't notice any difference.
Once again, sorry for asking stupid questions.
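About that context-length warning: a "token" is roughly a word piece the model reads and writes, and this model can only hold 2048 of them at once, prompt and reply combined. The arithmetic behind the message, using the numbers from the log:

```python
# Numbers taken from the warning and the output line above.
context_length = 2048   # model's hard limit (prompt + generated reply together)
prompt_tokens = 1346    # what the chat history used
max_tokens = 2048       # what the UI asked the model to generate

# The model can only generate into whatever room is left:
room_left = context_length - prompt_tokens
print(room_left)  # tokens actually available for the reply
```

Lowering max_tokens (or trimming the chat history) makes the warning go away; switching loaders won't change the model's context length, which is why ExLlama made no difference here.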
2023-08-18 17:44:38 INFO:Loading the extension "gallery"...
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`