Being slightly nitpicky, it is Vulkan, not VulCan, and V-sync is not a frame limiter; it synchronizes output to the monitor's vertical refresh frequency. Monitors are designed (until recently) with a specific refresh rate at a given resolution, and this is directly tied to how quickly the monitor can prepare and display a full frame (in other words, the entire screen, for our purposes). A monitor with a vertical refresh rate of 60 Hz can prepare and display a frame 60 times every second.

If you disable V-sync you are ignoring this limitation and feeding frame data to the buffer regardless of where the monitor is in the process of updating and displaying a frame. This lets the GPU produce many more frames every second, though the monitor can still only draw them at the same rate, and you introduce the possibility of "tearing", where a displayed image has a visible shift between the top and bottom parts at a certain point (because the bottom part is the previously generated frame and the top part is the frame after).

Your GPU actually still tends to draw more frames than the monitor can display even when you have V-sync enabled (as you probably should, but it's your PC - do you); it just only ever presents a completed frame buffer to the monitor.
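If you ever look at the Vulkan side of this, the on/off switch is literally a choice of "present mode" when the swapchain is created. Here's a minimal sketch of how a renderer might pick one - the function name and the vsync flag are my own illustration, not anything from this game's code:

```cpp
#include <vector>
#include <vulkan/vulkan.h>

// FIFO is "V-sync on": presentation waits for the vertical blank, so only
// completed frames ever reach the display. IMMEDIATE is "V-sync off":
// frames are handed over as soon as they are ready, which can tear.
// MAILBOX keeps the low latency of IMMEDIATE but still swaps on the
// vertical blank, so it avoids tearing.
VkPresentModeKHR choosePresentMode(const std::vector<VkPresentModeKHR>& available,
                                   bool vsync)
{
    if (vsync)
        return VK_PRESENT_MODE_FIFO_KHR;       // guaranteed by the spec to exist

    for (VkPresentModeKHR mode : available)
        if (mode == VK_PRESENT_MODE_MAILBOX_KHR)
            return mode;                        // uncapped and tear-free
    for (VkPresentModeKHR mode : available)
        if (mode == VK_PRESENT_MODE_IMMEDIATE_KHR)
            return mode;                        // uncapped, may tear

    return VK_PRESENT_MODE_FIFO_KHR;            // fall back to V-sync
}
```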
Oh. I just saw what the developers recommended for this game and looked it up on PCPartPicker, which linked to this: https://www.amazon.co.uk/dp/B07HWMDDMK?tag=pcp0f-21&linkCode=ogi&th=1&psc=1
So, how does one tell that this is "two generations behind"? And why would the developers recommend this if you can't buy it or it is out of date?
How would I tell if I was "experiencing low framerate"? I am guessing that FPS means "frames per second" here, like in cinema and TV. But I thought that the human cognitive system perceived anything beyond 24 fps as smooth motion, and that that's what TV and cinema produce.
As for being a technician, I'm over the moon I no longer have to write batch and config files in DOS just to get sound :D So much easier on the Commodore 64 back when!
By knowing anything about the topic at all? ;) (I realize you don't, and you know you don't, but that really is the answer). For nVidia, it is relatively easy - there were '10 series' cards in the fairly distant past (GTX 1060, GTX 1080, etc), followed by the '20 series' cards, which were also the first "realtime raytracing" capable cards and got the new prefix 'RTX' (RTX 2050, RTX 2060, RTX 2080, etc). Then we have the '30 series' cards (I have one of these, an RTX 3090 Ti) with similar naming, and the current generation is the '40 series', with cards like the RTX 4060, RTX 4080, and so on.
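If it helps to see that rule of thumb written down, here is a toy sketch - the helper is entirely hypothetical, not any official NVIDIA scheme, and real product names have exceptions it ignores:

```cpp
#include <cstdio>
#include <string>

// Pull the "series" (generation) out of a GeForce model name:
// the first two digits of the four-digit model number.
// "GTX 1060" -> 10 series, "RTX 3090 Ti" -> 30 series.
int seriesOf(const std::string& model)
{
    std::size_t i = model.find_first_of("0123456789");
    if (i == std::string::npos) return 0;           // no digits at all
    std::size_t j = model.find_first_not_of("0123456789", i);
    std::string digits = model.substr(i, j - i);    // e.g. "3090"
    if (digits.size() == 4)
        return std::stoi(digits.substr(0, 2));      // "3090" -> 30
    return 0;                                       // older naming: no guess
}

int main()
{
    std::printf("GTX 1060    -> %d series\n", seriesOf("GTX 1060"));
    std::printf("RTX 2080    -> %d series\n", seriesOf("RTX 2080"));
    std::printf("RTX 3090 Ti -> %d series\n", seriesOf("RTX 3090 Ti"));
}
```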
A whole bunch of reasons, why not? They are not telling you to buy that card, they are telling you the minimum performance level and feature set they recommend. In this case, that particular card is what the developer uses (or at least did use) on his own machine. It tells everyone where the bar is for "plays the way the developer intended" - if your card is equivalent or better, you're golden; if not, you may have to make some compromises, but as long as your card is better than the minimum required you'll still get to play the game acceptably.

Your system is neither of those. In fact, you don't even actually have a video card; you have what is known as "integrated graphics". Intel has worked hard, with good reason, to shed that label on their latest implementations, because the performance and features are far better than the old "Intel Integrated Graphics" solutions of the past, but it is still an on-die integrated graphics solution with basically all the limitations that are inherent to that.
Actually, no. There is no magic number. It depends on the person, the images displayed, the lighting, and even whether you are tired, drugged, etc. With that said ... 60 Hz is a good baseline for the majority of the population, in the majority of situations. Nothing special about it being sixty; it's just that that was technically feasible and practical, for reasons that get pointlessly complex and are not all that relevant. Just understand that it could as easily have been 56 or 64.
30 Hz is acceptable for many people in a lot of situations, but very noticeable to many others (myself included) in almost all situations. When I was younger I used displays capable of at least 72 Hz, ideally 85 Hz or more, at the resolutions I was using. The takeaway is "it depends, to a large degree, on the person".
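To put numbers on it, a refresh rate is just a per-frame time budget: how long the GPU has to finish each frame before the monitor needs the next one. A trivial sketch:

```cpp
#include <cstdio>

int main()
{
    // Common refresh rates and the frame time each one allows.
    const double rates[] = { 24.0, 30.0, 60.0, 72.0, 85.0, 144.0 };
    for (double hz : rates)
        std::printf("%6.1f Hz -> %6.2f ms per frame\n", hz, 1000.0 / hz);
    // 60 Hz leaves ~16.7 ms per frame; 144 Hz squeezes that to ~6.9 ms.
}
```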
As for cinema, there is nothing magical about 24 fps either, but film and real-time rendering are not even slightly comparable. Traditional cinematography "gets away" with so few frames per second because of several technical details, the most obvious of which is that all the images are blurred where movement occurs. If you look at film stills this will stand out to you immediately, I guarantee it.
Well, TBF, the "store page" for this game does say
which, after I found the demo wouldn't play and saw the discussions here, I took to be a recommendation of that particular card.
So, how does one go about determining what the key performance attributes of that card are, and then looking at currently available cards to see which match them?
Frames per second has already been mentioned as important, so I assume if I find a tech spec of that card it will somewhere give me a number for that (and I assume that'll be something over 60), but is that all there is to it? I'm guessing there are a load of other performance attributes that need to be compared too?
The numbers *sort* of have some logic ... but as with all branding, only to the extent that it suits the manufacturer and they break it when it suits them.
Broadly speaking though, for nVidia the xx50 part is entry level (long ago there used to be cards at the 40, 30, 20 and even 10 mark ... but we digress), and not every generation has even had a desktop model at this point, so you will probably only see it on laptops.

The xx60 part is 'mid-range'; this should be the default choice for most people, especially on a budget, to get good performance without spending more than necessary. The specifics vary with the actual model and the marketing, though.

The xx80 part is 'high end'; the best performance you can get before you start paying very high prices, but most certainly not a budget option. Depending on the specifics this can actually be the "best bang for bucks" model in the range, even though the xx60 part is really supposed to be that, historically. Again, see what I said about "when it suits them".

The xx90 part is the 'enthusiast' tier. This is for people like me who make enough use of their PC, with demanding graphical apps (okay, games, let's be honest here), and have the budget to get the best card money can buy that doesn't actually need a bank loan (or a bank job).

To complicate it some more, there are 'Super' and 'Ti' variants. Super is usually a "refresh" where they take the original and improve it across the board in terms of gaming performance, releasing at a competitive price (ideally). Ti is sometimes released simultaneously with the base part, but more often shortly after, and is usually only applied to the higher-end cards. It basically signifies "the best we can do with this card without changing the hardware to the next tier". In practice, the Ti variant is often so much better that you would be unwise to buy the base model; other times it's a marginal improvement for a much higher price. Again, "depends what the manufacturer wants to do" applies here.

The basic rule, obviously, is 'bigger number is better'. RTX means the card is 'realtime raytracing' capable, which all the latest cards are, but I mention it because if you pick up something which is a GTX you should understand that you will not be able to use 'RT'. Whether that matters to you or not is something you'd need to research a little (it did to me, which is why I bought a 20xx card even though I had a very capable earlier card).
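Written out as a lookup, the ladder above looks something like this - purely illustrative, since branding breaks the pattern whenever it suits the manufacturer:

```cpp
#include <cstdio>

// Rough tier guide: the last two digits of the model number.
// Treat this as a reading aid, not a rule NVIDIA is bound by.
const char* tierOf(int model)               // e.g. 4060 -> "mid-range"
{
    switch (model % 100) {
        case 50: return "entry level";
        case 60: return "mid-range";
        case 70: return "upper mid-range";
        case 80: return "high end";
        case 90: return "enthusiast";
        default: return "off the usual ladder";
    }
}

int main()
{
    for (int m : { 4050, 4060, 4070, 4080, 4090 })
        std::printf("%d -> %s\n", m, tierOf(m));
}
```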
'Recommended Spec' just means you need to meet or exceed that specification with your current machine to get the best experience. These specs do become dated, because unless the demands of the application change (as does happen) they should not be altered from what was originally posted. This also means you can get a good idea of when the game was released based on the targets, and specifically that you can answer questions like "will it require a card that has <this feature> or not?"
You will not find fps listed anywhere, and to a large extent it actually doesn't matter. If your card is adequate for running that piece of software it will be able to output frames quickly enough to match the monitor refresh rate. If you are coming in at less than that, your card is not adequate with those settings and resolution. That said, if you are usually getting 40 - 60 fps, you'll likely be fine with a game like this and indeed many others. If you were trying to play competitive shooting games that would likely be unacceptable.
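As for actually seeing your framerate, every fps overlay (Steam's built-in counter, for instance) boils down to the same idea: time each pass through the render loop and report frames per second about once a second. A toy sketch, with a sleep standing in for the real rendering work:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    auto windowStart = clock::now();
    int frames = 0;

    for (int i = 0; i < 300; ++i) {
        // Stand-in for rendering a frame: ~16 ms of "work", roughly 60 fps.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
        ++frames;

        std::chrono::duration<double> elapsed = clock::now() - windowStart;
        if (elapsed.count() >= 1.0) {           // report once per second
            std::printf("%.1f fps\n", frames / elapsed.count());
            frames = 0;
            windowStart = clock::now();
        }
    }
}
```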
How do I translate that card "name" into a specification? When I look at web pages for an Nvidia 2080 Ti I see a gazillion different attributes - how do I know which of those are important? As you say, the "frames per second" capability of the card doesn't seem to get mentioned, which I find odd given how important it seems to be to people elsewhere in this thread. I do see stuff about CUDA cores, GB of GDDR6 memory, operating frequency, TMUs, ROPs, bus width, nm process, die area, millions of transistors, ray tracing cores, number of shaders, bus interface, streaming multiprocessors, etc.
Or is it as simple as looking at the generation/model number in the card name that you described? Does every 30xx card "exceed" every 20xx card, so I could buy a 3050 card knowing it will outperform the 2080 Ti?
How do you get the specs? Wikipedia, basically. How do you get rankings? There are many possibilities here, but pick an online source and look at their "graphics card rankings". "Tom's Hardware" is an excellent place to start and to educate yourself somewhat on the topic.
In theory, ideally, aye. In practice, not necessarily, no. However, every 30 series card has hardware advantages that a 20 series card does not. Whether those matter to you or not is something you have to decide, but this is where getting knowledge matters. For your specific situation, my actual advice is "buy an RTX 4060 if you can afford it". I realize not everyone has the disposable income for purchases like that, but it is genuinely the best choice for current gaming at a reasonable level, with the expectation that you won't replace it for perhaps 5 years, maybe even 10, depending on what you are playing at that time. That card can currently be purchased for about $300 in the US, maybe less with the right deal. I think you're in the UK, so you may end up paying the same or not much less in pounds (it sucks, but they have got away with that for years). Regardless, that is a more capable card overall, by some margin, than the 2080 you have been discussing. It's perfectly capable of running this tech demo and the upcoming game that it will become.
https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html
As you can see when you look at that image, the AMD RX 7600 is also a solid choice and is available for less, generally. Nothing wrong with AMD but they have historically been a little less compatible at times, and sometimes have had driver issues. I doubt you would have any issues but a majority of gamers still use nVidia for a combination of reasons, including the above. One of either the RX 7600 or RTX 4060 is where you should be looking, though.
The comparison between say, 20xx and 30xx series cards would be better at the individual "xx" level. In general, you can infer that a 3060 is better than a 2060. But, depending on the game, a 2070 may match the performance of a 3060.
A site like https://gpu.userbenchmark.com/Compare/Nvidia-RTX-3060-vs-Nvidia-RTX-2070/4105vs4029 can sometimes be a useful comparison tool. In this particular example, the overall score gives the edge slightly to the 2070 over the 3060. But you can also use it to compare between brands, so you can check an AMD card against an Nvidia card.
The newer generation cards usually have newer, more capable feature sets for software developers to take advantage of. The 20xx cards have DLSS (Deep Learning Super Sampling), while the 30xx cards have DLSS 2 and the 40xx series have DLSS 3 (the 2nd and 3rd generations of DLSS). And while all three series are capable of RT, each successive generation is more efficient at it.
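That feature ladder is simple enough to write down as a lookup - again just illustrative, mirroring the rough pairing described above rather than an exhaustive support matrix:

```cpp
#include <cstdio>

// Which DLSS generation arrived with which GeForce series,
// per the rough ladder described above.
const char* dlssFor(int series)
{
    switch (series) {
        case 20: return "DLSS (1st generation)";
        case 30: return "DLSS 2";
        case 40: return "DLSS 3 (adds frame generation)";
        default: return "no DLSS support";
    }
}

int main()
{
    for (int s : { 10, 20, 30, 40 })
        std::printf("%d series -> %s\n", s, dlssFor(s));
}
```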
Not picking on you, or taking issue with what you say, but "UserBenchmark" is absolutely the last place anyone should go, and certainly not where we should send newcomers. It's utter trash for reasons that don't belong on a Steam thread but can be found with a quick Google search. Avoid it like the plague.
EDIT: It's not actually bad for the benchmarking software, that can give you useful data about your system (like noticing that a part is underperforming because you forgot to properly adjust firmware settings). It is worse than useless for making purchasing decisions, particularly with regards to CPUs. It's less bad for GPUs, but there are simply better resources; I'd recommend Tom's Hardware, which has been at this game since I was first paying attention to this stuff and building my own systems.
Fair enough. It was just the first site I pulled up that does direct comparison between cards. It also has a breakdown of a bunch of different data points for the cards. Probably could have found a better site if I spent a little more time looking.
Aye, I don't assume any malice or anything, it just so happens that site is both notoriously bad, and incredibly high in search results.