-----Original Message-----
From: Kevin Ottalini [EMAIL PROTECTED]
Sent: Wednesday, May 17, 2006 11:01 AM
To: hlds@list.valvesoftware.com
Subject: Re: [hlds] more then 1000fps at HLDS
HLDS (HL1 servers) can easily, and with little burden, run at either ~500 fps
or ~1000 fps. There is no control over the actual maximum FPS the hardware
will reach, since that is a motherboard-chipset issue; the requested rate is
controlled by the "sys_ticrate" CVAR, so the maximum setting is:
sys_ticrate 1000
Win32 servers will also need to run some sort of high-resolution timer
(please see other mail threads about this).
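For reference, here is a minimal sketch of where that setting normally goes
(assuming a typical HLDS install; "cstrike" below is just an example mod):

    // server.cfg (read when a map loads)
    sys_ticrate 1000

    Or set it on the launch line, e.g.:
    hlds.exe -game cstrike +sys_ticrate 1000      (Windows)
    ./hlds_run -game cstrike +sys_ticrate 1000    (Linux)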
We are only talking about HLDS here (HL1 servers). Source (SRCDS) servers
are quite different and (at the moment) appear to run best at their
default settings.
This is not really FPS in the sense of visual FPS, but rather how often the
server will process the available event information (take a "snapshot") and,
if needed, send an update to any clients that are due one. The more updates
the server sends out, the more bandwidth it uses on the uplink.
Clients can receive a maximum of 100 updates per second regardless of the
server's sys_ticrate setting.
A client getting a server update is not the same thing as the video FPS that
the client is actually viewing.
The client graphics FPS is governed by scene and event complexity and capped
by the "fps_max" CVAR; it could indeed be set to fps_max 1000, but anything
above 100 is quite silly. Again, this "viewing FPS" has nothing to do with
the server sys_ticrate setting.
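(For completeness, capping it on the client is just a one-liner, e.g. typed in
the client console or put in autoexec.cfg:)

    fps_max 100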
The client has a CVAR that tells the server how often to send updates:
cl_updaterate. cl_updaterate 100 is the maximum (fastest) setting, which the
server may or may not allow; the server can limit the client maximum via the
sv_maxupdaterate CVAR.
Again, this has nothing to do with the client's VISUAL FPS.
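As a sketch, the two ends of that negotiation look like this (100 matching the
100-updates-per-second ceiling described above):

    // client console or config:
    cl_updaterate 100

    // server.cfg - cap on what clients may request:
    sv_maxupdaterate 100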
OK, so why would a server operator want to run his/her server at sys_ticrate
1000?
In the case of HL1 servers only, running a faster ticrate on the server can
slightly improve the apparent client latency (sometimes called ping, but
ping is a little different). If the server is running sys_ticrate 100 then
there is a 10ms interval between server snapshots that can be sent to
clients. If a client has an 80ms ping distance from the server (real ping
this time), then the maximum latency is 80ms (ping) + 10ms (snapshot interval),
or 90ms (latency).
If the same server is running at sys_ticrate 1000, then the snapshot
interval is only 1ms, so that same player will only see an 81ms latency.
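Put as a rough rule of thumb:

    worst-case added delay (ms) = 1000 / sys_ticrate
    sys_ticrate 100  ->  up to 10 ms on top of ping
    sys_ticrate 1000 ->  up to  1 ms on top of ping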
Is a 9ms savings important during game play? Probably not, although there
are internet players who claim to be able to feel the difference. In a LAN
setting this may be different: an extra 10ms may be 10X the ping on a
LAN (but still, is this important? Probably not).
Running an HLDS server at a higher sys_ticrate should have the overall
effect of keeping what players see on that server more accurate. This
appears to be a real and valuable effect at the cost of much higher CPU
utilization.
The real reason that a server operator might want to run his HLDS server at
sys_ticrate 1000 though is that it gives the server the ability to send
updates to individual clients on a more timely basis. Again, this is not
more updates, just updates that don't have to wait very long for the next
server snapshot to happen.
This has the overall effect on the server of spreading out client updates so
they don't all happen for all clients at the same time. This can slightly
lower the demand on the server uplink and might help the server to run a
little smoother.
Extensive testing on my HLDM server resulted in the conclusion that running
sys_ticrate 1000 actually allowed me to add one additional player slot (out
of 10 total) and the server had a much tighter "feel" to events with a
slight improvement in accuracy.
Of course, running sys_ticrate 1000 also took my average CPU utilization for
a 10-player server from around 3% to around 40% for some maps.
Even my old 800MHz Intel P3 server was able to run sys_ticrate 1000; the
real question is whether you are overloading your server CPU. That is a
function of the number of players, the map you are running, and the
sys_ticrate setting. If your CPU is running at more than 50% with
sys_ticrate 1000, then decrease sys_ticrate to 500.
For testing purposes, use the Server GUI (don't use -console) and look at
the utilization graph.
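(If you do end up running with -console anyway, my understanding is that typing
"stats" in the server console prints a one-line CPU/FPS summary that you can
watch instead of the graph:)

    stats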
qUiCkSiLvEr
But keep in mind that I run my servers from a VPS (virtual private server).
It is rented, so I do not have to worry about bandwidth too much. The fast
downloads for the game materials are redirected to / run from a separate
ftp/http webhost, which keeps the game servers from feeling choppy when
people connect and disconnect.
A booster for HLDS can help in most cases, because boosters usually come
with their own high-res timers.
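(As one illustration only: on Linux the stock hlds_run script takes a
-pingboost switch that plays a similar high-res-timer role; the exact value
is a matter of testing on your hardware, and "cstrike" is again just an
example mod.)

    ./hlds_run -game cstrike +sys_ticrate 1000 -pingboost 2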
FYI: the maximum ticrate a server can produce is determined by the
motherboard chipset being used.
Here is a server rate calculator:
http://www.reece-eu.net/drekrates.php