Much obliged, HooksGURU.
If I'm understanding your example and statement on temporal filtering correctly, then unlike dynamic resolution scaling, you approve of sensible use of temporal filtering, with the option of a wider range of discrete dynamic graphics variables to accommodate reasonably weaker (but not blatantly outdated) hardware. Am I following you correctly on this count?
That's 100% correct!
Quite astute, even with my rough outline you understood it. Thank you for taking the time to read through all of that.
I think the biggest tragedy in this thread has been the knee-jerk hysteria compounded by language issues. It's important to know what each of us is talking about, and sometimes, clarification becomes vital as people start assuming one thing when what is being discussed is something quite different.
If I may indulge you in a bit of a digression by using something as an example from the last page:
The terms 'high end', 'mid range' and 'low end' get used far too liberally in situations where it becomes difficult to even determine what any of them actually mean in literal terms. In this thread, after reading pages upon pages of the 'just upgrade' mantra, I came to realise that many consider upper 600 and 700 series cards to be quite decent. I would agree. If this is about being clear on standards, then something like a GTX 970, which still represents the upper 30% of many premium benchmark results, doesn't appear to translate to something that I'd call 'mid-range', yet that's what people call it anyway, often out of convenience to make a point. I'd call it a modest high-end card in terms of price and capability, but even that term, I'm sure, will be subject to dispute.
Now between the Xbox One and the PlayStation 4, we are looking at a slightly customised and scaled-back AMD 7870 architecture with some additional vRAM. The CPU is also not very noteworthy. Furthermore, as I understand it, even the PlayStation 4 Neo will only make it to around GTX 970 grade performance. (Please feel free to correct me on these counts.)
It would seem that proportionally powerful, or even more powerful, PC hardware that is not quite up to GTX 960 or 970 grade struggles to produce that consistent output in this particular game, DOOM (2016). Again, please do correct me if I have the wrong impression from what I'm seeing on certain threads. I would think that an entry-level high-end Kepler card with adequate vRAM should roughly match and exceed what we see on consoles, and in many other new titles this is still the case, no doubt.
As such, I'm of the view that these older cards, which are still a good deal stronger than what we see in the consoles, should still have a chance at producing a decent experience in games like DOOM (2016), where requirements appear a bit more stringent on the PC. That is, we should have accommodating provisions that allow at least these cards -- which are not new and great, but not weak and obsolete, either -- to do their best both in terms of visual fidelity and smooth performance.
Do you think that this stance/view is unreasonable?
Many who will read this will assume that I am somehow claiming that GT 640 users should have no need to upgrade. However, this isn't the case. I think that based on the Steam Database, many people (the wide majority) could benefit from upgrading, but I don't think that onus should be placed on those who still have hardware that is more capable than what we see in consoles, but not quite the absolute latest and greatest.
There are several factors here that obscure how PC software, in contrast to console software, is interpreted by the average consumer/gamer.
1) Consoles natively use software "tricks" to reduce the system resource allocation and hardware requirements of the targeted software. This is due to a smaller resource pool and lower hardware throughput capability.
Example: Rainbow Six Siege, a title released on both consoles and PCs alike. The resolutions, technology used, and overall clarity of the image itself differ greatly between PC and console. Even when attempts are made to create an identical software configuration between platforms, the systems and sub-systems draw on different sets of file contents for rendering instructions. This is the case with many PC titles, even those of "console port" integrity. There is a base difference in software features, and a base difference in expandable options for the end-user's hardware to utilize.
2) The functions hardware performs at the software's instruction, and the resource allocation of said software, differ greatly. Items such as pre-caching, loading a scene's geometry, or rendering frames ahead via the GPU's bandwidth are all native hardware functions that use a more complex system on the PC platform. This is due mostly to higher bandwidth, higher memory allocation capacity (assisted by virtual memory), and CPU technology that transmits information to the GPU and supporting resources much faster than the console is capable of. This is why, the majority of the time, software on consoles still requires sectors of data from an external disc source (Blu-ray, DVD discs).
Example: File systems and sub-system components are stored on a solid-state drive via the installation path in a PC's directory. These systems are then read and loaded from the file cache; portions of these files are then split between the storage device, central processing unit, DIMMs, and the graphics processing unit's resources, resulting in faster load times, smoother resource distribution, and quicker interpretation of code between the CPU and GPU.
A console, by contrast, requires data from two sources: internal "small data" via the internal storage device, and information stored on the external disc. The file systems and sub-systems are read similarly to the PC platform, only lacking the bandwidth and hardware resources for "immediate" allocation and utilization.
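To make the pre-caching idea concrete, here is a minimal C++ sketch of the kind of background asset loading described above. The file paths, the `AssetCache` class, and its `Prefetch`/`Get` methods are hypothetical names for illustration only; real engines use far more elaborate streaming systems, but the principle is the same: read from storage ahead of time so the renderer never waits on the disk.

```cpp
// Minimal sketch of PC-style asset pre-caching: load level files into RAM
// on background threads so the renderer never stalls on the storage device.
// Class names and file paths are hypothetical, for illustration only.
#include <fstream>
#include <future>
#include <string>
#include <unordered_map>
#include <vector>

using Blob = std::vector<char>;

// Read a whole file from the installation path into memory.
Blob LoadFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) return {};  // sketch-level error handling
    Blob data(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(data.data(), static_cast<std::streamsize>(data.size()));
    return data;
}

class AssetCache {
public:
    // Kick off an asynchronous load; returns immediately.
    void Prefetch(const std::string& path) {
        pending_[path] = std::async(std::launch::async, LoadFile, path);
    }

    // Block only if the asset hasn't finished loading yet; afterwards the
    // bytes sit in RAM, ready to be split between CPU-side use and GPU upload.
    const Blob& Get(const std::string& path) {
        auto it = pending_.find(path);
        if (it != pending_.end()) {
            cache_[path] = it->second.get();
            pending_.erase(it);
        }
        return cache_[path];
    }

private:
    std::unordered_map<std::string, std::future<Blob>> pending_;
    std::unordered_map<std::string, Blob> cache_;
};

int main() {
    AssetCache cache;
    cache.Prefetch("assets/arena_geometry.bin");  // hypothetical paths
    cache.Prefetch("assets/arena_textures.bin");
    // ... gameplay or menu work proceeds here while the disk reads run ...
    const Blob& geometry = cache.Get("assets/arena_geometry.bin");
    (void)geometry;  // a real engine would hand this to the GPU upload path
}
```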
Point being, more hardware is required on the PC platform to accomplish more of the same work done on consoles. As a result, it produces faster, smoother, and higher-quality software systems and sub-systems, as designated by the developer. This in turn results in higher fidelity and "uncompressed" assets, textures, shaders, lighting, etc., all of which are used in gaming software.
As always, greater resources and more powerful hardware create a richer environment for the software itself to utilize. More often than not, if software demanding more resources creates a cumbersome experience on the PC, one or more system components is the culprit -- thus the statement "upgrade the hardware." Overhead is beneficial not just for "well optimized" PC software, but also to aid in "filling in the gaps" with software that utilizes hardware poorly (poor porting). There is no such thing as overkill in terms of PC hardware; there is, however, the issue of lacking resources for the software to utilize.
The long and short of it is, software on the PC is intelligently created with the idea "there is more to work with here," whereas console software is designed with the mindset "let's trim the fat." The difference is significant...
EDIT: Sorry about the massive body of the post....but once you get me going, you have to force me to stop. I could discuss software and hardware all day.
Very valid observation, and one I agree with for the most part. Perhaps I should've addressed this nuance in my last post, but such is the nature of hindsight and getting a message out without excess verbosity.
Even in these technical analyses, we see a lot on the surface, but we don't see what's happening behind that surface in terms of the software techniques and the manner in which they are intended to interact with the hardware. I'll even confess that with DOOM (2016), something about the PlayStation 4 version -- when compared with the PC port scaled to medium-high settings with no sharpness filter and a sub-1080p resolution -- just doesn't measure up entirely.
Indeed there is a difference when it comes to comparable hardware between consoles and PC due to the software side and even the APIs involved, as well as the unfortunate nature of industry politics. It's just that I find that sometimes the disparity in the whole 'coding to metal' argument gets overstated when there are other factors at play, such as optimisation and certain other hidden software shortcuts that you detailed in your response.
Also, when it comes to the wide majority of newer, demanding gaming software that I have come across, most of those cards (GTX 670, 760, 680, 770, 780, etc.) that I mentioned outdo what I see on the console ports, with better performance and graphical enhancements to boot. Back when this console generation was still picking up momentum, Alien Isolation was becoming a benchmark for many, and the in-house engine by Creative Assembly got quite a bit of attention. Oddly enough, compared to the current consoles, I was able to max the game out -- without much effort -- and squeeze out marginally better performance from a mobile GTX 580M (roughly a slightly slower desktop GTX 560 Ti), despite that card being distinctly older and slower than what is on offer in the consoles.
The essence of my contention is that many of these older high-end cards, which aren't as new as, say, the GTX 970 and 960, but aren't profoundly outdated and unsupported tech either, are still quite a decent margin ahead of the console hardware, can provide some appreciable headroom, and as such should remain relevant and accommodated options (on the software side) for PC gamers.
Of course, where possible, one should just have as much additional hardware headroom as possible to compensate for the worst-case scenarios.
Off topic: True, but when you don't know something, especially online, you tend to suffer a lot of flak. Although, to be honest, my knowledge is a bit hit and miss on the topic: some areas I know a lot about, some a moderate amount, and some not at all.
But as you stated earlier (although more from a PC consumer POV), I don't really like techs like DSR, simply because I like my games to look the best they can. Even if the game uses lower-quality textures, if it can run at 1080p or 4K and still maintain a decent, stable frame rate, then I'm happy.
As stated above, everyone (including console kids, surprisingly) likes to play at a higher resolution, even sometimes resorting to cheap tricks just to keep that resolution. While that is understandable, sometimes you just have to make sacrifices in the name of performance, like a lower resolution, lower detail settings, or both. That said, to a lesser degree, techs like DRS could help ease the pain after a bit of thought on the subject.
To address paragraph one: DSR is Dynamic Super Resolution, which comes from downsampled or custom resolutions. It is designed to increase image fidelity by operating at a higher resolution than the client's native display (increased pixel density). In other words, it is a GOOD thing -- as a matter of fact, great! It is similar to engines that have a "resolution scaling" option extending beyond 100%; the image is increased in resolution "internally" within the engine itself. All good things.
Whereas, "dynamic resolution" is not good for image quality, and in fact has been proven time and again to result in "lossy" image fidelity due to a lower pixel concentration in both the geometry, as well as the overall resolution.
I think what you are trying to say, and correct me if I'm wrong, is that you want to have the benefits of high-resolution image quality, without the "overhead" of applying downsample to the image output.
This isn't possible without one of two things: 1) less demanding textures, lower asset counts, reduced geometric complexity (polygon maps, etc.), less demanding light sources (e.g. static lighting), and so on; or 2) more powerful hardware to meet the increased throughput demand. The latter is preferable in terms of overall IQ (image quality).
Example: A 980 Ti SLI configuration is capable of running Grand Theft Auto V at 3840x2160 (DSR), with the maximum available quality settings used in-game, on a NATIVE 2560x1440 or 1920x1080 display. The result is far greater image quality output versus using in-game anti-aliasing at the NATIVE resolutions of 2560x1440 or 1920x1080. This is possible on this graphics processing unit configuration because the VRAM allocation space, shader count, and fill rate all meet the requirements to do so. This example is just one of many, and these DSR outputs change based on both the title and the graphical rendering capability of the card and other supporting hardware.
Now, the "lossy" version, or performance oriented version of this, would be having a native resolution display output of 2560x1440, or 1920x1080, and using a resolution scaling of 90% or lower. This is due to rendering the resolution in-game lower than the NATIVE display output. In turn, this reduces the throughput demand of the graphics processing unit, as well as other supporting hardware.
The latter example is more attuned to the aforementioned thread subject of "dynamic resolution." Although the resolution changes "dynamically," it is effectively the same graphical mechanic.
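And here is a minimal sketch of the "dynamic" part: the same render-scale mechanic as above, but adjusted automatically each frame from measured frame time. The `DynamicResolution` class, step sizes, and bounds are hypothetical tuning values for illustration, not any particular engine's implementation.

```cpp
// Minimal sketch of the "dynamic resolution" mechanic: drop the internal
// render scale when the frame runs over budget, recover it slowly when
// there is headroom. Step sizes and bounds are hypothetical tuning values.
#include <algorithm>
#include <cstdio>

class DynamicResolution {
public:
    explicit DynamicResolution(double targetFrameMs) : targetMs_(targetFrameMs) {}

    // Call once per frame with the last measured frame time.
    // Over budget -> shed GPU load quickly; under budget -> regain quality.
    double Update(double frameMs) {
        if (frameMs > targetMs_) {
            scale_ -= 0.05;  // aggressive drop to hold the frame rate
        } else if (frameMs < targetMs_ * 0.85) {
            scale_ += 0.01;  // cautious recovery toward native
        }
        scale_ = std::clamp(scale_, 0.5, 1.0);  // never exceed native here
        return scale_;
    }

private:
    double targetMs_;
    double scale_ = 1.0;  // fraction of native resolution per axis
};

int main() {
    DynamicResolution drs(16.7);  // ~60 FPS frame-time budget
    const double frames[] = {15.0, 18.5, 19.2, 16.0, 14.0};
    for (double frameMs : frames) {
        std::printf("frame %.1f ms -> scale %.2f\n", frameMs, drs.Update(frameMs));
    }
}
```

This is why the technique reads as "lossy" to the image-quality-minded: the scale only ever moves between native and sub-native, trading pixel density for frame-time stability.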
Understand? I already tried to explain this previously in the thread, but perhaps this version is a little more coherent.
Somewhat; it will probably click better later, as my thought processes can be quite slow, and my ♥♥♥♥♥♥ argument no doubt reflects that.
It is like I told the other parties involved. I don't judge people based on their actions on a public forum, as I am not that petty. It is easy to become frustrated, say things you don't mean, and discuss things you probably should not.
Don't worry about how I feel about this thread, as it has started to come back around. If I had a dollar for everyone on Steam that wants to take my head off...I would own Valve. Just ask around, I have made a lot of people upset around here. That's how I like it though...that means they are listening. Nobody likes to be wrong, and more times than not people feel bad after they act on emotions instead of logic. As long as you aren't arguing with me, I just enjoy the show. *shrug*
I see. Is it strangely ironic that I somewhat respect you, then? As I've got a similar thing happening to me.
I will take somewhat over not respecting me at all.
Also, nobody enjoys confrontation, and I am sure you didn't mean to act as you did. People say things when emotions get in the way. If I were you, I would say you were sorry to Jackal. She is a good girl, and nobody deserves to be treated that way. Right or wrong. It would make me feel better if you did. Then everyone can forget the nonsense and move forward to more productive things. It's your call. Consider it a chance for reconciliation, and I want us to be friendly moving forward.
Ok, I'll take your word for it.
@Jackal sorry about what happened; like I said, my thought processes can be quite slow, and as a result I acted the way I did.
While I do still stand by my point about DRS not being useful, I'd say a technology that does something similar but achieves the same goal with a comparable, if not better, result may actually further increase the lead PC gaming has over console gaming. That said, while an increased lead would be good, hopefully alongside convincing the industry that console gaming is a bad way to go, it would prove useful to at least wait a little before adopting such a technology so we get the most benefit out of it.
Fair enough, and I am glad you apologized. That was the right thing to do.
The main thing that is harming PC gaming software development is cross-platform software. Period. It harms the entire industry, as it limits progress in gaming software technology. The industry is only able to move at a pace dictated by the capabilities of console counterparts. Sure, there are PC technology demos, etc., but most of the time it takes years for them to work their way into the majority of gaming software.