VPet
How to use your Local AI Model (through ooba's webui)
By: yukinanka
tl;dr Enable the openai extension and use localhost:5001 instead of 0.0.0.0:5001
   
1. Install oobabooga text-generation-webui
https://github.com/oobabooga/text-generation-webui/releases/tag/installers

"Just download the zip, extract it, and double click on "start_windows.bat". The web UI and all its dependencies will be installed in the same folder."
2. Enable openai extension
  • In your oobabooga\oobabooga-windows installation directory, launch cmd_windows.bat (or micromamba-cmd.bat, if you used an older version of the webui installer).

  • Go to the extension's directory:
    cd .\text-generation-webui\extensions\openai
  • Install the requirements:
    pip3 install -r requirements.txt
  • Launch the webui. On the Session tab, enable the openai extension, then click "Apply and restart".
    If everything goes okay, you'll see this line on your console:
    OpenAI compatible API ready at: OPENAI_API_BASE=http://0.0.0.0:5001/v1
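
You can sanity-check the endpoint before involving VPet at all. Below is a minimal smoke test in Python, assuming the extension follows the standard OpenAI chat-completions schema it advertises; the model name is a placeholder, since the webui answers with whichever model is currently loaded.

    import requests

    # Minimal smoke test for the local OpenAI-compatible endpoint.
    # Assumes the openai extension is listening on port 5001 as shown above.
    url = "http://localhost:5001/v1/chat/completions"

    payload = {
        # Placeholder name: ooba's extension replies with whatever model
        # is currently loaded in the webui, so this value is not critical.
        "model": "local-model",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    }

    resp = requests.post(url, json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])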
3. Connect to your Local API
Now it's time to let our Chibi know how to access our local API.
  • Right click on your character, select System->Settings

  • Under System->Chat Settings, select "Use API requested from ChatGPT"

  • Open the ChatGPT API Settings. Inside the setting panel,
    Set API URL to: http://localhost:5001/v1/chat/completions
    *If you instead set it to http://0.0.0.0:5001/v1/chat/completions you will get a SocketException error. I don't know why, but using localhost in place of 0.0.0.0 makes .NET-kun very happy. (Most likely because 0.0.0.0 is a server's "listen on all interfaces" bind address, not a destination a client can connect to.)

    API Key: Can be anything; ooba's webui does not check this key.

Now you can load your favorite local LLM and talk with your Chibi about anything!
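
If you want to script against the same endpoint outside of VPet, the two settings above (a localhost base URL plus a throwaway key) also work with the official openai Python client. A minimal sketch, assuming openai>=1.0 is installed; the key string is arbitrary because the extension ignores it:

    from openai import OpenAI

    # Point the official client at the local extension instead of api.openai.com.
    # The key is a throwaway value; ooba's webui does not validate it.
    client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-checked")

    completion = client.chat.completions.create(
        model="local-model",  # placeholder; the loaded model answers regardless
        messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    )
    print(completion.choices[0].message.content)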

Comments: 22
Aboda7m 20 December 2024 at 4:50
**Reply:**

Hey everyone, I fixed the issue with the VPet plugin connecting to the API! 🎉

1. **Install Ollama WebUI** using your preferred method, or you can try the script shared by the original guide.
2. After that, **generate your API key** as required.
3. Next, **download my fixed OpenAI plugin** (link provided below).
4. **Extract the plugin folder** and drop it into your **VPet mod folder**. For example, place it here:

`D:\Games\SteamLibrary\steamapps\common\VPet\mod\1999_OpenAi\plugin`

5. Once done, **restart VPet**, and you should be good to go! The plugin will now connect properly to the API.

Let me know if you encounter any issues!
{LINK REMOVED}
TestName 11 December 2024 at 5:05
It doesn't work at all. Trying the method you suggested almost broke oobabooga itself; after fixing that through an update, I checked the web UI and elsewhere, but nothing had changed. Where is the 'character' you speak of?
Aimatt 14 October 2023 at 6:25
hi, how do you select a character once you're connected to the API?
Shadowdemonx9 18 September 2023 at 14:27
Definitely took some fiddling, but I got it working with the current version from ooba. Thanks for getting me into checking this stuff out, honestly.
Tewi Inaba 2 September 2023 at 16:50
I can't get this to work; I keep getting an "API call failed" error.
rat bastard 23 August 2023 at 15:17
How do you open the webui...?
Lilith 18 August 2023 at 11:42
I can't get the AI to work; I've followed the steps, but the pet tells me "system error or path error" and things like that.
Help, I don't know how to do this.
Bappo 18 August 2023 at 10:49
Sorry for asking more questions, but how do you know whether the GPU has enough VRAM?

Also, mine says this:
Warning: Loaded default instruction-following template for model.
Warning: This model's maximum context length is 2048 tokens. However, your messages resulted in over 1346 tokens and max_tokens is 2048.
Output generated in 46.23 seconds (1.90 tokens/s, 88 tokens, context 1346, seed 981057503)

I'm also not sure what "token" means, since this is my first time tweaking around with this AI.
I tried it using the first model I could find, "TheBloke/Llama-2-7b-Chat-GPTQ",

and I tried changing the model loader to ExLlama, but I didn't notice any difference.

Once again, sorry for asking stupid questions.
Hitler 18 August 2023 at 2:54
bin C:\Users\*\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
2023-08-18 17:44:38 INFO:Loading the extension "gallery"...
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`