inZOI
Last edited by Father Lamb; 4 Apr @ 14:02
Showing 16-25 of 25 comments
Originally posted by Farlight:
Originally posted by Father Lamb:


It does work on it. Buddy, I've got both PCs in the same room.

1. Ryzen 7950X3D, 7900 XTX
2. Ryzen 7900X, RTX 4090

Both tested, both working the same and fully functional.

NOTHING IS MISSING. It acts as it does on the 4090 build.

But yeah, I'm making it up just to... do so. Got it.

The thing is, if you'd read, I said it "WILL" bug out on both eventually, or not at all. It's hit or miss, hence the whole EXPERIMENTAL feature itself.

Now, I won't be replying again.

And I can't play it again to prove so; I did refund it. So that's that.

The game is not currently worth its price at all. I said what I said, tested what I did, and know what I saw in real time on both systems.
Sorry bro but it doesn't actually work.

AMD needs to step up and do something equivalent to NVIDIA ACE for UE5 if they don't want to be left behind even further.


THE FACT IS IT WORKED IDENTICALLY TO THE 4090 IN MY OWN PC. YEAH, IT WORKED...

How are you people so stuck on saying it doesn't?


IT DID.

The prompts, the actions, the AI: it all FUNCTIONED on both PCs IDENTICALLY.






HOW?


THEY BOTH USE DIRECTML/ONNX-BASED AI.

BOTH OF WHICH AMD SUPPORTS.
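For what it's worth, the DirectML/ONNX claim is testable in principle: ONNX Runtime exposes hardware backends as "execution providers", and the DirectML provider runs on any DirectX 12 GPU (AMD included), while the CUDA provider is NVIDIA-only. The sketch below only illustrates that provider split; the vendor-to-provider mapping is an assumption for illustration, not inZOI's actual selection logic.

```python
# Illustrative sketch: which ONNX Runtime execution providers a GPU
# vendor could plausibly use. Provider names are ONNX Runtime's real
# identifiers; the mapping itself is an assumption, not game code.

CUDA_EP = "CUDAExecutionProvider"  # NVIDIA-only (CUDA/cuDNN)
DML_EP = "DmlExecutionProvider"    # DirectML: any DirectX 12 GPU
CPU_EP = "CPUExecutionProvider"    # universal fallback

def usable_providers(vendor: str) -> list[str]:
    """Return the execution providers a GPU vendor could use,
    most preferred first."""
    vendor = vendor.lower()
    if vendor == "nvidia":
        # NVIDIA cards can use the CUDA EP or fall back to DirectML.
        return [CUDA_EP, DML_EP, CPU_EP]
    if vendor in ("amd", "intel"):
        # No CUDA here: DirectML (DX12) is the GPU path on Windows.
        return [DML_EP, CPU_EP]
    return [CPU_EP]
```

So if the feature really were DirectML/ONNX-based it could run on a 7900 XTX; if it only ships a CUDA backend, an AMD card gets the CPU fallback or nothing.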




I don't care what inZOI says, ACE is NOT limited to NVIDIA ONLY.


Half of you sound so stupid. I'm done here.

EDIT:

To sum up my claim: as I said, on the 7900 XTX it would WORK FULLY and THE EXACT SAME as on my 4090 build, no difference. BUT... it would randomly quit working, far more than the 4090 build would.

Meaning: YES, IT DID AND DOES WORK, just not FULLY ALL THE TIME.

---

Hopefully half of you understand now.

So yes, it works.





Nvidia = 95% of the time
AMD = 75% of the time



Best way I can put it.

And even so, this feature OFFERS NOTHING for this game.

It's a soulless, empty POS.
Last edited by Father Lamb; 4 Apr @ 13:10
Originally posted by First5trike:
If anyone wants to read the specs or access the SDK from Nvidia, here ya go.

https://developer.nvidia.com/blog/nvidia-rtx-neural-rendering-introduces-next-era-of-ai-powered-graphics-innovation/

If you want to re-read what I said above before editing it, there you go.

♥♥♥♥♥♥♥♥♥♥♥♥♥♥ WORKED.
Originally posted by Father Lamb:
…

I am writing an adventure game front end for LLM models.

You can stomp up and down telling me it works all you want. You might as well be telling me you can fly. Do I believe you enabled the function? Yes. Do I believe you got it working properly on an AMD card by just enabling the function in the config file? No.

I have an RX 6800 in my desktop rig and two Nvidia Tesla P100s in my development rig, so I'm very familiar with the technology.

You can't run an Nvidia tensor interpreter on an AMD rig. That is why you need to run Koboldcpp-ROCM instead of Koboldcpp for LLM models, or have to install ROCm in LM Studio instead of just running the CUDA llama runtime.

It's the same reason you need to install DirectML in Stable Diffusion to get it to run on an AMD card.
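The "wrong backend build simply won't load" point can be demonstrated with a tiny loader check: a native library only loads if the OS can resolve it and every runtime it depends on (the CUDA runtime, ROCm, and so on). This is a generic sketch, not tied to any particular game or tool; which library names you would probe depends on what the build in question links against.

```python
import ctypes

def can_load(library: str) -> bool:
    """Return True only if the OS loader can resolve the named native
    library and all of its dependencies (e.g. the CUDA runtime)."""
    try:
        ctypes.CDLL(library)
        return True
    except OSError:
        return False

# On a box without NVIDIA's driver stack, probing the CUDA runtime
# (nvcuda.dll on Windows, libcuda.so.1 on Linux) returns False --
# which is why a CUDA-only build can't initialize on an AMD rig and
# a ROCm/DirectML build is needed instead.
```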

I have no idea why you need to try to convince everyone when you know the other folks with AMD cards will go into the config files, enable the line, and realize after 5 minutes in-game it's not working. Like, what is the point in this?
Last edited by First5trike; 4 Apr @ 13:17
Originally posted by First5trike:
…


So I was imagining things, huh.

If I didn't refund it, I'd GLADLY show you, to shut you and everyone else up.

I said it WORKED. As to how well? NOT WELL.

It would WORK and then stop, given the "no thoughts" issue people talk about.

BUT WHEN IT DID WORK, it would WORK FINE.

2 different systems, 2 different GPUs.

I have no reason to make this up. Good god...

So ask yourself, what is the point of this then?

If I'm making it up, guess I'm on some crack rocks then.
Originally posted by First5trike:
…

And don't come in here like half the people do with the whole "I'm a dev" talk or "IT" talk.

So leave that ♥♥♥♥ behind, bud.

I know how to operate a damn PC, and I know what works in front of my face as it should, as the trailers showed and as my own experience showed, with the 4090 and the XTX doing the SAME THING entirely.

I said, again, it would work, did work, does work, BUT NOT FULLY all the time, if YOU get it to work.

I didn't say all I did was just "EDIT" one simple line of code lmfao
Originally posted by First5trike:
…


Oh, and as for your Koboldcpp remark, are you saying that too doesn't work on AMD rigs? LMFAO

Mind you, I've used it for Skyrim, yes, CPP, not the ROCM equivalent.

Oh buddy, what have you gotten yourself into.

I'm ready if you are.
Originally posted by First5trike:
…


Actually, nah. Not going to reply any more to this.

♥♥♥♥♥ hilarious at this point.

Enjoy the ♥♥♥♥ features, dull gameplay loop & horrid UE5 flips.

I'm out.

I know what DID work and what DIDN'T, AND TO WHAT EXTENT IT DID.

Did I say "WORKING AMAZINGLY WITHOUT ISSUE 100% OF THE TIME"?

No.

Did it slightly? YES.

Meaning "DOES IT"

"Pretty much"

Now go on, lil nooblet, do your thing.


-------------------------



[KRAFTON] ASH (inZOI) Jan 20, 2025, 11:42 GMT+9

Dear FerryFit,

Thank you for reaching out with your thoughtful questions about the Smart Zoi feature. We understand the excitement surrounding this feature and appreciate the opportunity to clarify some important points.

First, regarding the GPU compatibility for Smart Zoi, we can confirm that the feature will indeed be supported on a wide range of GPUs, provided they meet certain performance criteria. While it's true that we've partnered with Nvidia for this feature, Smart Zoi will not be exclusive to Nvidia's latest GPUs. Instead, it will be compatible with a variety of GPUs, including those listed in our minimum system requirements. To address your specific question, Smart Zoi will run on GPUs listed in our minimum requirements, which include the NVIDIA RTX 2060 (8GB VRAM) and the AMD Radeon RX 5600 XT. This means that players with these GPUs, and others meeting the minimum specifications, will be able to use Smart Zoi without issue.

Additionally, Smart Zoi will be fully compatible with higher-end GPUs like the NVIDIA RTX 30 and 40 series, as well as those from the AMD Radeon RX 6000 series. This ensures that players using mid-range and high-end graphics cards will also benefit from the feature’s capabilities. While the feature may perform best on newer or more powerful GPUs, it is not exclusive to the latest hardware, such as the RTX 50 series. We understand the importance of making sure that players who meet the minimum system requirements can access Smart Zoi, and we want to reassure you that this will be the case. We aim to provide an exciting experience for as many players as possible, and we are confident that Smart Zoi will be accessible to a broad audience.

We also appreciate you sharing the video from the influencer discussing this topic. It's always valuable to hear the community's thoughts and concerns, and we hope this response helps clarify things for both the influencer and their viewers. Thank you again for your inquiry. If you have any further questions, please don't hesitate to reach out.

Best regards,
inZOI Support
Last edited by Father Lamb; 4 Apr @ 14:27
Average American: "it works, trust my words."

It's CUDA only; I checked the DLL. I even tried RE'ing all the calls to DirectML, ZLUDA, hipBLAS, ROCm, and OpenCL. I worked on it for several days and eventually stopped. There are Vulkan functions in it, but the UE5 project doesn't have them enabled.

Check the DLL yourself: nvigi.plugin.gptkrafton.ggml.cuda.dll in inZOI\BlueClient\Plugins\NVIGIPlugin\Binaries\Win64, one of them at least.

If you enable Smart Zoi from the config folder, it enables the pop-up in game but can't generate anything; eventually you will see basic stuff in the popup like "I didn't write my diary today". Useless basic stuff, probably placeholders for tests.
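One quick way to reproduce the DLL check described above is to scan the plugin's binaries folder and read the backend tag out of each ggml-style dotted DLL name. The folder path and the `.cuda.` tag come from the post; the script itself is only an illustrative sketch, and the tag list is an assumption about how other backend builds would be named.

```python
from pathlib import Path

# Backend tags one might find embedded in ggml-style plugin DLL names.
# This list is an assumption for illustration.
BACKEND_TAGS = ("cuda", "vulkan", "directml", "rocm", "cpu")

def shipped_backends(plugin_dir: str) -> set[str]:
    """Infer which inference backends a plugin folder ships, from the
    backend tag embedded in each DLL's dotted name
    (e.g. nvigi.plugin.gptkrafton.ggml.cuda.dll -> 'cuda')."""
    found = set()
    for dll in Path(plugin_dir).glob("*.dll"):
        parts = dll.name.lower().split(".")
        found.update(tag for tag in BACKEND_TAGS if tag in parts)
    return found
```

Run against the NVIGIPlugin Binaries\Win64 folder: a result containing only `cuda`, with no `directml`, `vulkan`, or `rocm` entries, would corroborate the CUDA-only claim.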
Now that was a real jab at him.