I do understand the chances for X4 are near zero, but what about the future?
I'm not sold on the idea of AI on your personal computer, but what if in 10 years every PC has the equivalent of another 4090 embedded in the CPU as AI cores?
Can the AI cores be used for useful stuff other than stealing more of your personal data?
I understand that CUDA and AI cores are tuned for particular calculations, and may not be as effective at, or capable of, other calculation types.
I also understand that you can't just split a linear task.
Meanwhile, cloud computing already exists.
Go check X4 on GeForce Now.
Yep, cloud gaming is a thing already, and for games like this one it's actually not a bad idea. Cloud computing is also a thing, but it's nowhere near the same as wirelessly streaming large volumes of real processing data across platforms (that statement is so ridiculous on so many levels, I love it).
I've made repeated reference to sequential or linear programming tasks versus parallel workloads.
Gaming, by its very nature, occurs in order. Therefore there are diminishing returns on "spreading workload over cores": as a rule, most games already multithread some basic things like audio, networking, physics, AI and rendering. The more threads you spin off, the more threads have to be resynced into the main loop later on.
In terms of what this achieves in practice - it works pretty well.
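A minimal sketch of that fan-out/resync pattern, in Python for brevity (the subsystem names are illustrative, not X4's actual architecture): each frame hands work to worker threads, but nothing can render until the slowest worker rejoins the main loop, which is exactly where the diminishing returns show up.
```python
from concurrent.futures import ThreadPoolExecutor

def update_audio(dt):   return "audio done"
def update_physics(dt): return "physics done"
def update_ai(dt):      return "ai done"

SUBSYSTEMS = [update_audio, update_physics, update_ai]

def frame(pool, dt):
    # Fan out: each subsystem runs on its own worker thread.
    futures = [pool.submit(fn, dt) for fn in SUBSYSTEMS]
    # Resync: the frame can't render until the slowest worker finishes,
    # so frame time is bounded by the longest subsystem, plus the
    # scheduling overhead of the fan-out itself.
    results = [f.result() for f in futures]
    return results  # render() would consume these

with ThreadPoolExecutor(max_workers=3) as pool:
    for _ in range(3):  # three simulated frames
        frame(pool, dt=1 / 60)
```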
You already have a 100-core "PC" - your GPU. GPUs are excellent parallel processors. However, it is worth noting that the GPU is pretty far from the CPU: transfer speeds between the two are limited by the PCI bus. Correspondingly, any attempt to use an iPad as a coprocessor will be bottlenecked by both distance and the speed of the interface. You can't use the peripheral like that.
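To put rough numbers on the interconnect point (the figures below are assumed, illustrative values; real bandwidth varies by PCIe generation and lane count):
```python
# Back-of-the-envelope: how much of a 60 fps frame budget a bulk
# transfer eats, first over PCIe, then over Wi-Fi to a tablet.
frame_budget_s = 1 / 60                    # ~16.7 ms per frame at 60 fps
pcie4_x16_gbs  = 32.0                      # ~32 GB/s theoretical, PCIe 4.0 x16
payload_gb     = 0.5                       # 500 MB of state shipped each frame

transfer_s = payload_gb / pcie4_x16_gbs    # ~15.6 ms: nearly the whole frame
print(f"transfer eats {transfer_s / frame_budget_s:.0%} of the frame budget")

# Over Wi-Fi the link is orders of magnitude slower again, which is
# why "use the iPad as a coprocessor" dies at the interface.
wifi_gbs = 0.125                           # ~1 Gbit/s = 0.125 GB/s, optimistic
print(f"same payload over Wi-Fi: {payload_gb / wifi_gbs:.1f} s per frame")
```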
It is also worth mentioning at this point that a lot of what we call 'job handling' or 'task handling' happens under the hood in Windows - the game might multithread several tasks, but Windows is the one deciding that the fastest mode is running it all on one or two cores on the die. Which is about right.
Now, for offloading server-side workloads - well, you would be better off with dedicated server hardware. But let's just assume a modern i7 instead (which is essentially what an iPad Pro is anyway) - could we find a way to use this as a server?
Theoretically yes.
Technically speaking, the game is presently running as a 'server', but we don't have client objects to connect to it remotely from another machine. Ventures might have something that permits this, but I haven't seen much of use when reading the code.
Ideally, in say a very simple UE server-client model, we would have our server application just running on our dedicated box. We would want this to handle a lot of the different stuff like connecting players and creating lobbies, then triggering the game.
The client would simply be responsible for rendering what's in range and what's replicated to the client. We would absolutely see some improvements, since we are offloading AI (and another subsystem I forget just now) to a server. The caveat is that the bonus from that will shrink somewhat as the player uncovers more of the map.
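A minimal sketch of that server/client split (all names are illustrative, not any real engine's API): the server owns the full simulation, and each client only receives, and renders, what is in range of its player.
```python
import math

WORLD = {  # server-side authoritative state: entity -> position
    "ship_a": (0, 0), "ship_b": (50, 10), "station_x": (900, 900),
}

def replicate_for(client_pos, radius=100.0):
    """Server-side relevance filter: only in-range entities are
    serialized and sent to this client, keeping client load
    proportional to what is visible, not to the whole map."""
    return {
        name: pos for name, pos in WORLD.items()
        if math.dist(client_pos, pos) <= radius
    }

# The client's job reduces to rendering whatever arrives:
snapshot = replicate_for(client_pos=(10, 0))
print(snapshot)  # {'ship_a': (0, 0), 'ship_b': (50, 10)}
```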
Addressing map lag in a highly congested environment is, however, a UX issue. I can't speak much to UI/UX performance design - I've never had to performance-manage my games (not least because they are vehicles for massive landmasses and nothing else). I have, however, seen it before, when players were modding in their own UI in Dual Universe (you can program stuff in DU on in-game items using Lua, including your entire FCS and HUD).
The best HUD (in its first iteration) ran terribly, at 30fps the whole time, because it could only update so fast from the tick function. This is probably what is happening with the map - too many positions and entities to smoothly update to the UI at 60fps. I'm not really a UX programmer (I code and do terrain stuff); you'd need to talk to one of those to drill down specifically into this.
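A sketch of the usual fix for that tick-rate problem (the numbers are assumed): the simulation may produce updates every tick, but the map widget only needs a few refreshes per second, so you decouple the two rates.
```python
MAP_REFRESH_HZ = 10            # redraw the map at 10 Hz, not every tick
last_map_draw = 0.0

def simulate(entities):   pass                 # placeholder sim step
def redraw_map(entities): print("map redrawn") # placeholder UI update

def on_tick(entities, now):
    global last_map_draw
    simulate(entities)                         # runs every tick regardless
    if now - last_map_draw >= 1 / MAP_REFRESH_HZ:
        redraw_map(entities)                   # expensive UI work, throttled
        last_map_draw = now

for tick in range(180):                        # ~3 seconds of 60 Hz ticks
    on_tick(entities=[], now=tick / 60)
```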
It would depend on the nature of how multiplayer works and what kinds of controls are available over the session protocol. For example, if the venture model were to, say, specify "Connect to venture server host: 68.69.10.123", and we were able to point the host machine at 127.0.0.1 on creation, then go over to our client and change our venture host IP address to 192.168.0.10 (the address of the host on our local network), then yes, this should be possible.
Alternatively, you could export all the models into Unreal, rewrite most of the scripts in Blueprint, and simply re-engineer the game in UE5. Then you can add all sorts of extensions (planetside/seamless surface-to-atmosphere streaming, mass entity/crowd AI, virtual economy plugins) and probably get a more scalable solution that way.
This is of course all just thoughts off the top of my head, and no doubt errors will have crept in - but as a back-of-the-envelope idea it's okay.
The iPad will not function well as a server though, due to its form factor and no doubt awful thermals. If you want to buy it simply for the toy aspect, then do that - but in terms of actual computing it's a waste of silicon. There's absolutely no point putting that much hardware into a tablet - it's a display of conspicuous consumption. Why not just pour a barrel of oil into your local nature preserve instead?
This is why some systems show low CPU usage apart from a couple of very busy cores.
The biggest release of CPU cycles will, IMHO, come when the operating system, firmware and GPU can move most of the data independently of the CPU.
As long as the CPU is needed to control I/O - caching user input; processing the input to the OS and then the game; caching the result; moving the output from that cache to various elements of the system, i.e. RAM, RAM caches, storage media, comms, display, game saves and many other intermediate I/O elements - we will get bottlenecks.
Even splitting large chunks of data management off onto other cores can potentially add to the bottlenecks, as again CPU cycles are needed to manage all that moving data.
It's a bit like pouring syrup or ketchup into one or two bottles. With one bottle, the link between the hardware and the CPU needs only a simple control [on/off]. When you add more bottles, you need more CPU cycles to control the link to each bottle. Additionally, the process of changing from one bottle to another uses up CPU cycles to close one bottle and make the switch to another [and maybe choose one of several bottles].
Fast hardware and glue logic can make all these linear processes appear almost parallel. But as long as the process is mostly linear, there will still be lags between input and output to screen or storage.
IMHO a parallel solution WILL come, but we are not there yet, and CPU-intensive games will remain choked until the hardware comes down in cost. After all, how many of us can afford multi-CPU PCs with many cores and an OS that shares more I/O in a parallel format, i.e. supercomputers?
If you look at how differently GPUs perform depending on their I/O throughput [bandwidth], you will see that a 128-bit bus can outperform a 64-bit bus that has faster and more VRAM.
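That bus-width point in numbers - effective memory bandwidth is roughly bus width times the memory data rate, divided by 8 bits per byte (the clock figures below are assumed, illustrative values):
```python
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """GB/s = (bus width in bits / 8) * data rate per pin in Gbit/s."""
    return bus_bits / 8 * data_rate_gbps

wide_slow   = bandwidth_gbs(128, 14)   # 128-bit bus, 14 Gbps GDDR6
narrow_fast = bandwidth_gbs(64, 18)    # 64-bit bus, faster 18 Gbps VRAM

print(wide_slow, narrow_fast)          # 224.0 GB/s vs 144.0 GB/s
# The wider, slower-clocked bus still wins by a comfortable margin.
```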
EDIT: Hopefully we will get the message across, Anaryl. Your post was sitting there when I posted this. Good post btw
You too! That happened with most of the thread.
I think ultimately the premise of offloading to a dedicated server is a valid one. But it's a question of implementation. Does X4's implementation support it? It looks like textbook OOP so I would assume that yes, it's potentially there.
Technologically there have been some impressive gains in the past few years, and I think we'll see dedicated 'AI chips' in the next few years. I don't think we'll see much gain for things like gaming AI, though. Where the gaming industry is going is really sinister; and very few studios have the means to attempt ambitious projects - made all the harder by the failure of all the notable scams lately.
There's already a mod for REST HTTP API connections - so the iPad could theoretically be used as a big MFD. Not sure how good Apple stuff is for that kind of thing due to the walled garden - personally I'd recommend Android over Apple anyway.
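A hedged sketch of that "big MFD" idea: a tablet-side script polling a REST endpoint exposed by such a mod and rendering the result. The URL and JSON field names here are assumptions for illustration; the actual mod's endpoints would need to be checked against its documentation.
```python
import time
import requests  # pip install requests

API = "http://192.168.0.10:3000/playership"  # host PC on the LAN (assumed URL)

def poll_loop(hz=2):
    while True:
        try:
            data = requests.get(API, timeout=1).json()
            # A real MFD app would draw gauges here; we just print.
            # 'hull' and 'shield' are assumed field names, not the mod's schema.
            print(f"hull: {data.get('hull')}  shield: {data.get('shield')}")
        except requests.RequestException as exc:
            print(f"no data: {exc}")
        time.sleep(1 / hz)

if __name__ == "__main__":
    poll_loop()
```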
At the end of the day, this can't be that hard. For me the major blockers are always the art pipeline - I'm still stuck on getting the Blender tools to work. If I could get some ships into UE, I could at least prototype something - but another 'fun' part of coding is spending hours just getting various dependencies to work.
Employing at least a couple of very large data buses in parallel MAY add throughput headroom that keeps development growing until a real solution is cheap enough. In other words: add at least one additional CPU to handle specific I/O - say, taking GPU output and handling the I/O for multi-screen, multi-resolution situations - and only let the main CPU see a processed result, with no extra cycles spent on the main cores. In other words, take any additional 'bottles' and do all the switching control elsewhere, apart from the resulting I/O.
As 16K resolution approaches, I'm sure the next generations of GPUs will soon be capable of the data processing for, say, 3x 16K screens. We should see the needed code creeping into beta firmware soonish, if not already.
Gamers might get the benefit of early firmware not because they present a good market, but simply because future development of hardware for military or governmental usage requires more options on parallel vs linear.
We gamers represent an early-adoption channel that brings a little money back to development and creates a multifaceted test bed. We are an incentive to smaller operators such as software houses to produce games and software that continue to stretch the hardware. Just look at how massively drone technology has grown. Lasers already offer fantastic point-defence options.
Example technologies that we have seen and might actually want can be seen in some war games even now:
e.g. 3D visualisation, both in goggles and in holographic form
Mind or AI control of munitions
Remote control of troop movement
I'm sure some of you can think of other possibilities.
Processing speed is getting cheaper - hence the massive power of our GPUs.
Perhaps the reason we don't get an external map offered is the CPU-cycle cost of splitting that data and keeping it updated. However, maybe it would be feasible to have a second or third GPU do the mapping, with the main display showing a subset of THAT - keeping CPU cycles free on the main cores by letting the GPU do all the processing, instead of the main cores handling all those primary calculations and farming chunks of them out to the GPU. Maybe a second CPU does everything it can and only updates the main cores once the I/O is finalised.
By shifting so much of the data control to the GPU, and maybe a secondary CPU, we might see reductions in the I/O load on the main cores and perhaps the OS, thereby reducing current bottlenecks.
Perhaps a future OS would separate the display output from all the other I/O such as sound, input, fan control, etc.
That said, I have two GPUs (Intel and NVidia), and I think the Intel could handle the map alone, so your idea of kicking the map over to that GPU might work too.
Either way, I'd love to have my map view always open on a second screen, and maybe even have things like the info screen / encyclopedia on a third screen (though that might be pushing my luck). Since the map exists simultaneously with the 3D world view, it should in theory be easy to do.