X4: Foundations

offloading the computation work.
Looking at the new iPad Pro...

What would it take for X to produce an app to offload some of the computing burden to an iPad Pro / whatever on the local network?
I understand that for Egosoft to do it would require a massive change to their engine, but wondering if there is a similar app for offloading other applications.
Need to have a look at the speed. I think Apple have been a little devious in downplaying the fact that their 10-core CPU Pro is actually only 3 performance and 6 efficiency cores unless you go for the 1 TB / 2 TB higher-end models.
If you found a 100-core Intel PC, would the game scale well and use the cores?

I guess this would only help with other-sector calculations, and not be of much assistance in mass combat in busy sectors.
I could go on a lengthy rant here about how that would never even work from a technical perspective, but let me just say this: not happening.
Would that be because it all works on the same data, time delay, data security, or the difficulty of coding?
I do understand the chances for X4 are near zero, but in the future?
I am not sold on the AI on your personal computer, but what if in 10 years every PC has the equivalent of another 4090 card embedded in the CPU as AI cores?

Can the AI cores be used for useful stuff other than stealing more of your personal data?
I understand that CUDA and AI cores are tuned for particular calculations, and may not be as effective at / capable of other calculation types.

I also understand that you can't just split a linear task.
That, among other things? The latency alone would make this pretty much impossible. You're basically asking for an app for a technology that doesn't exist, nor is it even in the realm of possibility in the near future, in my opinion.
Linking hardware on the same platform was a failure (SLI/CrossFire), and you want this cross-platform.
Meanwhile, cloud computing already exists.

Go check X4 on GeforceNow
aY227 wrote:
Linking hardware on the same platform was a failure (SLI/CrossFire), and you want this cross-platform.
Meanwhile, cloud computing already exists.

Go check X4 on GeforceNow

Yep, cloud gaming is a thing already, and for games like this one it's actually not a bad idea. Cloud computing is also a thing, but it's nowhere near the same thing as wireless streaming of large volumes of real processing data across platforms (that statement is so ridiculous on so many levels I love it).
Simple answer is: No.

I've made repeated reference to sequential or linear programming tasks versus parallel workloads.

Gaming, by its very nature, occurs in order. Therefore there are diminishing returns on "spreading workload over cores": as a rule, most games already multithread some basic things like audio, network, physics, AI and rendering. The more threads you spin off, the more threads have to be resynced into the main loop later on.
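To make that resync cost concrete, here's a minimal sketch (Python, with invented subsystem names - just to illustrate the fan-out/join pattern, not how any real engine does it) of a frame loop that spins work out to threads and then has to join them all back before the frame can complete:

```python
import threading
import time

def subsystem(name, results, lock):
    # Stand-in for audio/physics/AI work done off the main thread.
    time.sleep(0.002)  # pretend the work takes 2 ms
    with lock:
        results[name] = f"{name} done"

def frame():
    results, lock = {}, threading.Lock()
    workers = [threading.Thread(target=subsystem, args=(n, results, lock))
               for n in ("audio", "physics", "ai")]
    for w in workers:
        w.start()
    # The main loop cannot advance until every worker is resynced:
    for w in workers:
        w.join()
    return results  # only now can the frame use the combined state

if __name__ == "__main__":
    start = time.perf_counter()
    frame()
    print(f"frame took {(time.perf_counter() - start) * 1000:.1f} ms")
```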

In terms of what this achieves in practice right now - it works pretty well.

You already have a 100-core "PC": your GPU. GPUs are excellent parallel processors. However, it is worth noting that the GPU is pretty far from the CPU: transfer speeds between the CPU and GPU are limited by the PCI bus. Correspondingly, any attempt to use an iPad will be bottlenecked by both distance and the speed of the interface. You can't use a peripheral like that.
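Some back-of-the-envelope arithmetic on why the interface matters (all figures below are rough assumptions, not measurements):

```python
# Shipping 100 MB of state to the local GPU over PCIe vs to an iPad over Wi-Fi.
payload_mb = 100

pcie_gb_s = 16.0    # ~PCIe 4.0 x16 ballpark
wifi_gb_s = 0.125   # ~1 Gbit/s Wi-Fi, optimistic
wifi_rtt_ms = 2.0   # LAN round-trip latency, optimistic

pcie_ms = payload_mb / 1024 / pcie_gb_s * 1000
wifi_ms = payload_mb / 1024 / wifi_gb_s * 1000 + wifi_rtt_ms

print(f"PCIe transfer:  ~{pcie_ms:.1f} ms")   # ~6 ms
print(f"Wi-Fi transfer: ~{wifi_ms:.1f} ms")   # ~780 ms - dozens of frames
```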

It is also worth mentioning at this point that a lot of what we call 'job handling' or 'task handling' occurs under the hood in Windows: the game might multithread several tasks, but Windows is the one deciding that the fastest mode is to run it all on one or two cores on the die. Which is about right.

Now for offloading server-side workloads: you would be better off with dedicated server hardware, but let's just assume a modern i7 instead (which is essentially what an iPad Pro is anyway). Could we find a way to use this as a server object?

Theoretically yes.

Technically speaking, the game is presently running as a 'server', but we don't have client objects to connect to it remotely from another machine. Ventures might have something that permits this, but I haven't seen much of use when reading the code.

Ideally, in, say, a very simple UE server-client model, we would have our server application just running on our dedicated box. We would want this to handle a lot of the different stuff like connecting players and creating lobbies, then triggering the game.

The client would simply be responsible for rendering what's in range and what's replicated to the client. We would absolutely see some improvements, since we are offloading AI (and another one I forgot just now) to a server. The caveat here is that our bonus from that will go down somewhat as the map gets uncovered by the player.
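A toy, single-process sketch of that split (hypothetical names; nothing here is X4 or UE code): the server simulates everything, while the client only ever sees the entities replicated to it in range:

```python
import math
import random

REPLICATION_RANGE = 50.0

class Server:
    def __init__(self, n_entities):
        self.entities = {i: [random.uniform(-100, 100),
                             random.uniform(-100, 100)]
                         for i in range(n_entities)}

    def tick(self):
        # Server-side simulation (AI, economy, movement) for ALL entities.
        for pos in self.entities.values():
            pos[0] += random.uniform(-1, 1)
            pos[1] += random.uniform(-1, 1)

    def replicate(self, client_pos):
        # Only entities in relevance range get sent to the client.
        return {eid: tuple(pos) for eid, pos in self.entities.items()
                if math.dist(pos, client_pos) <= REPLICATION_RANGE}

class Client:
    def __init__(self, pos):
        self.pos = pos

    def render(self, snapshot):
        print(f"client renders {len(snapshot)} of the server's entities")

server, client = Server(1000), Client((0.0, 0.0))
server.tick()
client.render(server.replicate(client.pos))
```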

Addressing map lag in a highly congested environment is, however, a UX issue. I can't speak much to UI/UX performance design - I've never had to performance-manage my games (not least because they are vehicles for massive landmasses and nothing else). I have, however, seen it before, when players were modding in their own UI in Dual Universe (you can program stuff in DU on in-game items using .lua, including your entire FCS and HUD).

The best HUD (in its first iteration) ran terribly, at 30fps the whole time, because it could only update so fast from the tick function. This is probably what is happening with the map: too many positions and entities to smoothly update to the UI at 60fps. I'm not really a UX programmer (I code and do terrain stuff); you'd need to talk to one of those to drill down specifically into this.
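A tiny illustration of that tick-limit effect (numbers are made up): if the UI layer only refreshes every other simulation tick, everything drawn through it is capped at half the frame rate no matter how fast the GPU is:

```python
SIM_HZ = 60
UI_DIVIDER = 2  # UI updates once per N sim ticks -> effectively 30 fps

for tick in range(6):
    sim_time = tick / SIM_HZ
    print(f"sim tick {tick} at t={sim_time:.3f}s", end="")
    if tick % UI_DIVIDER == 0:
        print("  -> UI/map redraw")
    else:
        print("  (map shows stale positions)")
```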


It would depend on the nature of how multiplayer works and what kinds of controls are available over the session protocol. For example, if the venture model were to, say, specify "Connect to venture server host: 68.69.10.123", and we were able to point the host machine at 127.0.0.1 on creation, then go over to our client and change our venture host IP address to 192.168.0.10 (the address of the host on the local network), then yes, this should be possible.
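Purely as an illustration of that redirect idea - X4/Ventures has no documented setting like this, and every name below is invented for the sketch:

```python
# Hypothetical configs: the host loops back to itself, the client
# points at the host's LAN address instead of the official server.
HOST_CONFIG = {"venture_host": "127.0.0.1"}       # machine acting as server
CLIENT_CONFIG = {"venture_host": "192.168.0.10"}  # LAN address of that host

def connect(cfg):
    print(f"connecting to venture server at {cfg['venture_host']} ...")

connect(HOST_CONFIG)    # host connects to itself
connect(CLIENT_CONFIG)  # client connects to the host over the LAN
```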

Alternatively you could export all the models into Unreal, rewrite most of the scripts into Blueprint, and simply re-engineer the game in UE5. Then you can add all sorts of extensions (planetside/seamless surface-to-atmosphere streaming, mass entity/crowd AI, virtual economy plugins) and probably get a more scalable solution that way.

This is of course all just thoughts off the top of my head, and no doubt errors will have crept in - but as a back-of-the-envelope idea it's okay.

The iPad will not function well as a server though, due to its form factor and no doubt awful thermals. If you want to buy it simply for the toy aspect then do that - but in terms of actual computing it's a waste of silicon. There's absolutely no point putting that much hardware into a tablet - it's a display of conspicuous consumption. Why not just pour a barrel of oil into your local nature preserve instead?
Cloud gaming or a fast offload option cannot improve the bottlenecking in system I/O by very much.

This is why some systems show low CPU usage apart from a couple of very busy cores.

The biggest release of CPU cycles will, IMHO, come when the operating system, firmware and GPU can move most of the data independently of the CPU.
As long as the CPU is needed to control I/O - caching user input; processing the input to the OS and then the game; caching the result of same; processing the output from that cache to the various elements of the system, i.e. RAM, RAM caches, storage media, comms, display, game saves and many other intermediate I/O elements - we will get bottlenecks.

Even splitting large chunks of data management off onto other cores can potentially add to the bottlenecks, as again CPU cycles are needed to manage all that moving data.

It's a bit like pouring syrup or ketchup into one or two bottles. With one bottle, only the linkage between the hardware and the CPU needs a simple control [on/off]. When you add more bottles, you need more CPU cycles to control the link to each bottle. Additionally, the process of changing from one bottle to another uses up CPU cycles to close one bottle and make the switch to another [and maybe choose one of several bottles].

Fast hardware and glue logic can make all these linear processes appear almost parallel. But as long as the process is mostly linear, there will still be lags between input and output to screen or storage.

IMHO a parallel solution WILL come, but we are not there yet, and CPU-intensive games will remain choked until the hardware comes down in cost. After all, how many of us can afford multi-CPU PCs with many cores and an OS that shares more I/O in a parallel format, i.e. supercomputers?

If you look at how differently GPUs perform depending on their I/O throughput [bandwidth], you will see that a 128-bit bus can outperform a 64-bit bus that has faster and more plentiful VRAM.
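The rough arithmetic behind that (VRAM speeds below are illustrative GDDR6-era figures, not any real card): effective bandwidth is bus width times effective memory clock, so a wide bus with slower VRAM can still win:

```python
def bandwidth_gb_s(bus_bits: int, mem_gbps: float) -> float:
    # bus width in bytes * effective memory clock (Gbps) = GB/s
    return bus_bits / 8 * mem_gbps

wide_slow = bandwidth_gb_s(128, 14.0)   # 128-bit bus, 14 Gbps VRAM -> 224 GB/s
narrow_fast = bandwidth_gb_s(64, 18.0)  # 64-bit bus, 18 Gbps VRAM  -> 144 GB/s
print(f"128-bit @ 14 Gbps: {wide_slow:.0f} GB/s")
print(f" 64-bit @ 18 Gbps: {narrow_fast:.0f} GB/s")
```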

EDIT: Hopefully we will get the message across, Anaryl. Your post was sat there when I posted this. Good post btw.
Last edited by mmmcheesywaffles; 9 May 2024 at 3:41am
mmmcheesywaffles wrote:
Cloud gaming or a fast offload option cannot improve the bottlenecking in system I/O by very much.



EDIT: Hopefully we will get the message across, Anaryl. Your post was sat there when I posted this. Good post btw.

You too! That happened with most of the thread.

I think ultimately the premise of offloading to a dedicated server is a valid one. But it's a question of implementation. Does X4's implementation support it? It looks like textbook OOP so I would assume that yes, it's potentially there.

Technologically there have been some impressive gains in the past few years, and I think we'll see dedicated 'AI chips' in the next few years. I don't think we'll see much in the way of gains for things like gaming AI, though. The direction the gaming industry is going is really sinister, and very few studios have the means to attempt ambitious projects - made all the harder by the failure of all the notable scams lately.

There's already a mod for REST HTTP API connections - so the iPad could theoretically be used as a big MFD. Not sure how good Apple stuff is for that kind of thing due to the walled garden - personally I would recommend Android over Apple anyway.
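As a sketch of the MFD idea (the endpoint path, port and field names here are assumptions - check the actual REST mod's documentation), a tablet-side script could just poll the game and render a text readout:

```python
import json
import time
import urllib.request

GAME_HOST = "http://192.168.0.10:3000"  # PC running X4 + the REST mod (assumed)

def poll(path):
    # Fetch and decode one JSON snapshot from the game-side API.
    with urllib.request.urlopen(GAME_HOST + path, timeout=2) as resp:
        return json.loads(resp.read())

while True:
    try:
        player = poll("/player")  # hypothetical endpoint
        print(f"hull: {player.get('hull')}  shield: {player.get('shield')}")
    except OSError as err:
        print(f"no connection to the game: {err}")
    time.sleep(1)  # 1 Hz is plenty for an MFD readout
```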

At the end of the day, this can't be that hard. For me the major blockers are always the art pipeline; I'm still stuck on getting the Blender tools to work. If I could get some ships into UE, I could at least prototype something - but another 'fun' part of coding is spending hours just getting various dependencies to work.
The rewrite that would be required would be pretty much starting from scratch. It's not happening. The entire universe is based on scripts running on triggers and loops that examine the world state, respond to it, and update it. You would have to copy the world state to every node you add and somehow keep them all in sync with each other.
I sure would like to "offload" the map screen to a second monitor! Doesn't have to be an iPad, though I believe there is an app that turns the iPad into a second monitor, so why not? This game begs, it DEMANDS dual (or even multi) monitor support. That would be amazing!
Personally I suspect that NVidia may combine a chunk of their GPU development with developments on the OS and motherboard side. That is probably hidden from mere mortals such as us, for now.

Employing at least a couple of very large data buses in parallel MAY add throughput headroom that keeps development growing until a real solution is cheap enough. In other words: adding in at least one additional CPU to handle specific I/O - perhaps taking GPU output and handling the I/O for multi-screen, multi-res situations - and only letting the main CPU see a processed result, without adding CPU cycles to that main I/O core. In other words, taking any additional 'bottles' and doing all the cycling control apart from the resulting I/O.

As 16K resolution approaches, I'm sure the next generations of GPUs will soon be capable of the data processing for, say, 3x 16K screens. We should see the needed code creeping into beta firmware soonish, if not already.

Gamers might get the benefit of early firmware not because they present a good market, but simply because future development of hardware for military or governmental usage requires more options on parallel vs linear.

We gamers represent an early-adoption channel that brings a little money back to development and creates a multifaceted test bed. We are an incentive for smaller operators such as software houses to produce games and software that continue to stretch the hardware. Just look at how massively drone technology has grown. Lasers now already offer fantastic point-defence options.

Example technologies that we have seen and might actually want can be seen in some war games even now, e.g.:

3D visualisation, both in goggles and in holographic form
Mind or AI control of munitions
Remote control of troop movement

I'm sure some of you can think of other possibilities.

Processing speed is getting cheaper, hence the massive power of our GPUs.
SinisterSlay wrote:
The rewrite that would be required would be pretty much starting from scratch. It's not happening. The entire universe is based on scripts running on triggers and loops that examine the world state, respond to it, and update it. You would have to copy the world state to every node you add and somehow keep them all in sync with each other.
Hmmm, I'm not so sure that some element of future code is not already implemented in the latest Ego engine. They tend to be well ahead of the hardware, and were it not for the enforced hiatus caused by Covid, we might already have seen much higher throughput implemented in our hardware. Consider that the game can only offer speeds the hardware supports.

Captain Canard wrote:
I sure would like to "offload" the map screen to a second monitor! Doesn't have to be an iPad, though I believe there is an app that turns the iPad into a second monitor, so why not? This game begs, it DEMANDS dual (or even multi) monitor support. That would be amazing!
Perhaps the reason we don't get an external map offered is the CPU-cycle cost of splitting that data and keeping it updated. However, maybe it will be feasible to have a second or third GPU do the mapping, and maybe our main display will show a subset of THAT - keeping CPU cycles free on the main cores, yet using the GPU to do all the processing, without the slowdown of main-core CPU cycles handling all those primary calculations and farming out chunks of them to the GPU. Maybe the second CPU does everything it can and only updates the main cores once the I/O is finalised.

By shifting so much of the data control to the GPU, and maybe a secondary CPU, we might see reductions in the I/O needed by the main cores and perhaps the OS, thereby reducing current bottlenecks.

Perhaps a future OS would separate the display output from all the other I/O, such as sound, inputs, fan control, etc.
mmmcheesywaffles wrote:
Perhaps the reason we don't get an external map offered is the CPU-cycle cost of splitting that data and keeping it updated. However, maybe it will be feasible to have a second or third GPU do the mapping, and maybe our main display will show a subset of THAT - keeping CPU cycles free on the main cores, yet using the GPU to do all the processing, without the slowdown of main-core CPU cycles handling all those primary calculations and farming out chunks of them to the GPU. Maybe the second CPU does everything it can and only updates the main cores once the I/O is finalised.
It amazes me that X4 purposefully draws the map view on top of the regular 3D view, rather than replace it. I didn't even notice at first, because the opacity of the map is so high, but when looking at the map, you can see the "real world" continue to go on behind it, so it's not like the GPU gets a break when the map is open. Since the GPU is drawing all this stuff anyway, why not move some of it to a second screen? I'm assuming the map is its own "layer" in GPU RAM, so I don't think it would be demanding to move that to a separate screen. In fact, it might relieve some of the burden since X4 would no longer need to alpha-blend the map with the 3D world.

That said, I have two GPUs (Intel and NVidia), and I think the Intel could handle the map alone, so your idea of kicking the map over to that GPU might work too.

Either way, I'd love to have my map view always open on a second screen, and maybe even have things like the info screen / encyclopedia on a third screen (though that might be pushing my luck). Since the map exists simultaneously to the 3D world view, it should be easy to do in theory.
PS: My theory on why the 3D world is drawn even when the map is open is that X4 relies on the 3D engine (which relies on the GPU) for IS (in-sector) calculations. Closing the 3D world to show only the map would result in everything being OOS (out of sector) for that duration. This is my theory, anyway.
Captain Canard wrote:
mmmcheesywaffles wrote:
Perhaps the reason we don't get an external map offered is the CPU-cycle cost of splitting that data and keeping it updated. However, maybe it will be feasible to have a second or third GPU do the mapping, and maybe our main display will show a subset of THAT - keeping CPU cycles free on the main cores, yet using the GPU to do all the processing, without the slowdown of main-core CPU cycles handling all those primary calculations and farming out chunks of them to the GPU. Maybe the second CPU does everything it can and only updates the main cores once the I/O is finalised.
It amazes me that X4 purposefully draws the map view on top of the regular 3D view, rather than replace it. I didn't even notice at first, because the opacity of the map is so high, but when looking at the map, you can see the "real world" continue to go on behind it, so it's not like the GPU gets a break when the map is open. Since the GPU is drawing all this stuff anyway, why not move some of it to a second screen? I'm assuming the map is its own "layer" in GPU RAM, so I don't think it would be demanding to move that to a separate screen. In fact, it might relieve some of the burden since X4 would no longer need to alpha-blend the map with the 3D world.

That said, I have two GPUs (Intel and NVidia), and I think the Intel could handle the map alone, so your idea of kicking the map over to that GPU might work too.

Either way, I'd love to have my map view always open on a second screen, and maybe even have things like the info screen / encyclopedia on a third screen (though that might be pushing my luck). Since the map exists simultaneously to the 3D world view, it should be easy to do in theory.
Sadly, unless the OS implements the shared GPU, it is not going to help, nor will it reduce the needed I/O CPU cycles. With current technology I do think it would add to the bottlenecking. However, from what you say, maybe we are on the right track: Egosoft may already be ahead of the curve in farming out more of the CPU-cycle load to other hardware once the OS offers it.

Posted: 9 May 2024 at 2:01am
Posts: 28