If Steam is using more than 4 GB of RAM, something is wrong. We're not playing "My First SQL Server".
So it is a valid request.
You only need enough memory for the stuff that's in flight, which is bounded above by disk bandwidth multiplied by the time it takes to get from one side of the production line to the other. The fastest SSD I can find at the moment pushes 500 MB/s, so you'd only blow through 2 GB if data spent 4 seconds winding its way through the Steam client; and 4 seconds is an eternity, something like 8 billion clock cycles.
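The bound above is just a bandwidth-delay product. A minimal sketch, using the post's own figures (500 MB/s, 4 seconds):

```python
# Back-of-the-envelope: memory needed for in-flight data is bounded by
# throughput multiplied by pipeline latency (a bandwidth-delay product).
def in_flight_bytes(throughput_bytes_per_s: float, latency_s: float) -> float:
    """Upper bound on data buffered while it moves through the pipeline."""
    return throughput_bytes_per_s * latency_s

ssd_bandwidth = 500e6  # 500 MB/s, the SSD figure from the post
print(in_flight_bytes(ssd_bandwidth, 4.0))  # 2e9 bytes, i.e. ~2 GB
```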
Anyway, in practice it'll usually be bounded by your download rate, which is a heck of a lot lower.
Now sure, you need 64-bit numbers to keep track of the whole patch. But you don't need a 64-bit process to have 64-bit numbers.
Slow != impossible.
Let's say we use 16 parallel threads to decompress an update archive. Each thread works in memory, using 128MB buffers. Tell me how the whole process will fit or how the HDD speed will be the limiting factor.
Anyways, for decompression your limiting factor is download rate, because it's downloaded data that comes in compressed. If you're downloading at 10 MB/s, you only need to decompress at 10 MB/s.
16 threads each with a 128 MB buffer doesn't make any sense. With buffers that large, most of those threads will just be sitting on their hands most of the time while their buffers fill. It's pointless.
What you ideally want is something like a producer-consumer queue, where one set of threads is responsible for stuffing downloads into relatively small buffers, then punts those over to a set of threads that decompress the data.
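A minimal sketch of that shape, with some loud assumptions: a single loop stands in for the download threads, `zlib` stands in for whatever codec Steam actually uses, and the chunk sizes are invented. The key property is the bounded queue: if decompressors fall behind, the producer blocks instead of ballooning RAM.

```python
import queue
import threading
import zlib

QUEUE_DEPTH = 8  # in-flight memory is capped at roughly depth x chunk size

def decompress_pipeline(compressed_chunks, n_workers=4):
    """Producer-consumer sketch: small bounded queue feeding decompressors."""
    work: queue.Queue = queue.Queue(maxsize=QUEUE_DEPTH)
    results = []
    lock = threading.Lock()

    def consumer():
        while True:
            chunk = work.get()
            if chunk is None:          # poison pill: shut this worker down
                break
            data = zlib.decompress(chunk)
            with lock:
                results.append(data)

    workers = [threading.Thread(target=consumer) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for chunk in compressed_chunks:    # producer: put() blocks when the
        work.put(chunk)                # queue is full, so memory can't grow
    for _ in workers:
        work.put(None)                 # one pill per worker
    for w in workers:
        w.join()
    return results                     # order not guaranteed across workers
```

A real client would also need to reassemble chunks in order and write them out, but the memory story is the same: a handful of small buffers in flight, not 16 x 128 MB.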