If you ask "what's the biggest int in the range that's all 1s in binary," the answer is always a power of 2 minus 1 (a power of 2 itself is a 1 followed by zeros).
So, we have:
256 = 1,0000,0000 (or 0x100)
255 = 0,1111,1111 (or 0xFF)
254 = 0,1111,1110 (or 0xFE)
128 = 0,1000,0000 (or 0x80)
127 = 0,0111,1111 (or 0x7F)
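The values above are easy to check programmatically; a quick sketch:

```python
# Each "all 1s" value quoted above is one less than a power of two.
for n in (256, 255, 254, 128, 127):
    print(f"{n:3d} = {n:09b} (0x{n:X})")
```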
So what's probably happening here: to split the players evenly into two teams, you can't use 1 less than a power of two (an odd number); you have to subtract 1 more to make it even. And so we get (255 - 1) / 2 = 127 per team.
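The arithmetic above in a few lines:

```python
# The largest unsigned 8-bit value is odd, so an even two-team split
# has to give up one slot before halving.
max_u8 = 2**8 - 1             # 255
per_team = (max_u8 - 1) // 2  # drop one slot, then split evenly
print(per_team)               # 127
```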
I'm sure they'd love to have 255 total players, but who would get the extra player?
I suppose they COULD go to 16-bit integers... but at some point you just start running into issues, since that ends up doubling the data rate needed. I'm sure 8-bit was a limitation imposed to keep the data packets small, and with that many players it kinda makes sense.
I don't think doubling the data rate for a "player count" field is that big a deal. I mean, if it only meant allowing 128v128, you'd only send that data once or twice in a game.
Probably it's an off-by-one error and they're not really prioritizing fixing it.
It isn't just once or twice a game, though. Any and every action in the game has to say which player it came from (and what the action is). They might be (and probably are) using 8-bit fields for everything at the network level, essentially removing a lot of leading 0s that would otherwise have to be transmitted in every packet to and from players.
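A minimal sketch of that idea, with an entirely hypothetical field layout (not the game's actual protocol): pack every field into the smallest type that fits, and the player id costs one byte per action packet.

```python
import struct

# Hypothetical per-action packet: 1-byte player id, 1-byte action code,
# two 2-byte coordinates, all in network byte order ("!").
def pack_action(player_id: int, action: int, x: int, y: int) -> bytes:
    return struct.pack("!BBHH", player_id, action, x, y)

pkt = pack_action(127, 3, 1024, 768)
print(len(pkt))  # 6 bytes; widening the id to 16 bits ("!HBHH") makes it 7
```

Since a packet like this would be sent for every action by every player, that one extra byte multiplies out quickly.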
Changing to 16-bit (or higher) could actually introduce more latency.
Would be neat to have the developer chime in here, though... But I'm betting things are almost entirely encoded in 8-bit fields on the network side.
That doesn't really make sense; it's still 256 values. If anything, I would have read it as a networking joke, where 0 is reserved as the network address and 255 as the broadcast address.
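For what it's worth, the joke interpretation does check out arithmetically, if the ids followed IPv4-style conventions:

```python
# Hypothetically reserving id 0 (network address) and id 255 (broadcast),
# IPv4-style, leaves 254 usable ids, which split into two teams of 127.
reserved = {0, 255}
usable = [i for i in range(256) if i not in reserved]
print(len(usable), len(usable) // 2)  # 254 127
```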
An array with user[0] = GoldRobot and user[1..255] = null has one player; an array with user[0..255] = null has zero players. The array index is only 8-bit, but that's all it is: an index. The index itself doesn't represent the number of players, so it doesn't need "0" to mean anything different from "empty".

If the "number of players" were passed around every other clock cycle, then yes, you'd need to optimize it, since you can't represent both 0 and 256 in the same 8-bit number and would need at least one more bit. But that shouldn't be happening anywhere in the game, the server, or the network communication. CPUs process 3-5 billion cycles per second; pulling up the number of players is irrelevant when it takes something like 20 cycles and isn't used in the high-performance core gameplay or core network code. When a shader is running the same code on 8 million pixels 144 times a second, then you need to optimize; the "number of players" is not in that category.
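The index-versus-count distinction above can be sketched in a few lines:

```python
# An 8-bit index addresses the 256 slots, but the player *count* is a
# separate number whose full range is 0..256 inclusive.
slots = [None] * 256
slots[0] = "GoldRobot"
player_count = sum(1 for s in slots if s is not None)
print(player_count)  # 1
```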