And the absolute worst case is OP decides to build a new system and they already have a GPU to use.
Again, unused GPU power can always be used. A GPU bottleneck is never a problem. Yes, you are limited in terms of FPS and can't expect to get 144 FPS in every game, but it's very easy to play at much higher resolutions.
OP was not asking if their Core i5 8400 would perform as well with an RTX 3070 as a faster CPU would. OP was asking if it would work, and if it would work at least as well as it does now. That's it. Bringing up the "you can't do that because it'll bottleneck" refrain is silly, especially because OP is planning on upgrading the platform later anyway.
What exactly defines "full potential" though? The fastest CPU available today isn't going to entirely eliminate CPU bottlenecks, which is why the idea that there is some objectively "correct" balance that you "have to have" is just outlandish.
This! This this this this.
Yes, you might not get as much performance with a slower CPU than with a faster one, but... that's a given, no? You can't argue the economical standpoint of "not getting the best return" when a platform upgrade isn't free. For example, a GPU upgrade alone may cost 100 units and give a 50% performance improvement, while that GPU upgrade plus a platform replacement may cost 225 units yet almost certainly won't return a proportional 112% improvement. That's just one example and isn't always the case, mind you; there are times where a platform may be slow enough to consider replacing it too. But this modern insistence that you HAVE to goes against the entire reasoning that is supposed to be behind it a lot of the time, that reasoning being getting the best return for the cost. Some people are too focused on the best return, period, and now they're saying you have to do it.
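To put rough numbers on that value argument, here's a minimal Python sketch using the made-up figures from the paragraph above (illustrative only, not benchmarks):

def gain_per_cost(cost, perf_gain_pct):
    # performance gained per unit of money spent
    return perf_gain_pct / cost

gpu_only = gain_per_cost(cost=100, perf_gain_pct=50)            # 0.50 per unit
gpu_plus_platform = gain_per_cost(cost=225, perf_gain_pct=65)   # ~0.29 per unit

print(f"GPU only:       {gpu_only:.2f}% gained per unit spent")
print(f"GPU + platform: {gpu_plus_platform:.2f}% gained per unit spent")

The 65% gain for the combined upgrade is a hypothetical stand-in; the point is that unless the platform swap unlocks far more performance, the GPU-only upgrade returns more per dollar even though it "bottlenecks".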
All this being said, it's especially silly to be arguing about this because as I pointed out, OP is upgrading later anyway.
Tried looking up more online about the i5 8400/RTX 3070 Ti pairing and couldn't come up with much. I think OP's questions have been answered enough, quite honestly.
First, the OP has already made his decision, I think, and will upgrade in the future, so it is irrelevant. But you need to go read my other posts. I don't think you quite understand what I am trying to get at.
There is a difference between a CPU bottleneck, where the CPU is the weak link of the system in relative terms compared to the GPU, and a "real" CPU bottleneck, where the CPU is simply not fast enough to prepare the frames that the GPU is rendering.
In a sense I always have a CPU bottleneck, meaning that my CPU, in relative terms to the GPU, is not as powerful as the GPU. That is fine, as long as I am able to see my GPU hit 100% usage and show its full potential. I had actually never seen a "real" CPU bottleneck until I got my 3070 Ti.
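The distinction is easy to see in a toy frame-time model (a minimal Python sketch; the millisecond figures are illustrative, not measurements). Each frame, the CPU prepares work and the GPU renders it, and the slower of the two paces the frame:

def frame_stats(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)   # the slower stage sets the frame rate
    fps = 1000.0 / frame_ms
    gpu_util = gpu_ms / frame_ms     # fraction of each frame the GPU is busy
    return round(fps), round(100 * gpu_util)

# CPU is the relative "weak link" but the GPU still saturates:
print(frame_stats(cpu_ms=6.0, gpu_ms=8.0))    # (125, 100): 125 fps, 100% GPU usage

# A "real" CPU bottleneck: the CPU can't feed the GPU fast enough:
print(frame_stats(cpu_ms=16.0, gpu_ms=8.0))   # (62, 50): ~62 fps, ~50% GPU usage

In the first case a faster CPU would change nothing; in the second, the GPU sits idle half of every frame no matter how powerful it is.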
Before then, with every CPU and GPU combo I had, no matter what resolution I played at, I was able to get 100% GPU usage, especially in a benchmark like Firestrike. When I got my 3070 Ti, I paired it with my 4770K, and I saw a tremendous bottleneck. Even in Firestrike, a synthetic, GPU intensive benchmark rendered at 1080p, I got maybe 50-60% GPU usage. Even in Firestrike Extreme, which is rendered at 1440p, I couldn't get 100% usage. And in CPU intensive games, forget it. Never had I seen this before; this was my first experience with a "real" CPU bottleneck, not just my CPU being the bottleneck of the system.
When I got my 10700K, it relieved quite a bit of this bottleneck. At 1440p and 4K, I see full GPU usage. In Firestrike Extreme, rendered at 1440p, I see full usage, and in Firestrike Ultra, rendered at 4K, I see full usage. In 1440p and 4K games I see full GPU usage. However, in regular Firestrike, rendered at 1080p, and in CPU intensive games at 1080p, I still won't get full GPU usage. So, at 1080p, my 10700K still bottlenecks my 3070 Ti. I had never seen a bottleneck like this until I got my 3070 Ti. It is well documented online: even high-end, or relatively high-end, CPUs will bottleneck this generation's GPUs at 1080p.
I have explained this extensively numerous times in my previous posts. But I don't blame you if you haven't read them all, lol. I do tend to write quite a lot, especially on this subject. But I have had first-hand experience with CPU bottleneck. There is a tremendous difference between a bottleneck in the sense of the CPU being the weak link of the system, and a "real" CPU bottleneck where the CPU is simply not fast enough to prepare the frames that the GPU is rendering, meaning the CPU will be at 100% usage while the GPU struggles to make use of 50% of its ability. That is definitely not something you want, especially when you pay quite a bit for these GPUs.
So, at 1440p or 4K, the OP may not see much of a bottleneck when pairing an i5 8400 and a 3070 Ti together. But at 1080p, where you are incredibly CPU limited, the GPU will struggle to get anywhere near its full potential in CPU intensive games.
https://www.youtube.com/watch?v=yH2nX4giIc4
This video illustrates the CPU bottleneck between a 10700K and a 3080. Notice how at 1440p and at 4K, GPU usage is at, or near, 100% in pretty much all the games. But in CPU intensive games, like Mafia, Flight Simulator, Far Cry 5, GTA V to some extent, even Fortnite Competitive, etc., at 1080p the 3080 is not getting 100% GPU usage. This is caused by the CPU bottleneck. Now, in the games on the list that are GPU intensive, you will see GPU usage at 100% at all resolutions. But in the CPU intensive ones, the 10700K still bottlenecks the 3080 quite a bit, especially at 1080p. I have noticed the same thing with my 10700K and my 3070 Ti. Heck, even at 1440p in some instances there is still a bottleneck.
Flight Simulator really shows the CPU bottleneck. At 1080p, you are getting much less GPU usage than at 1440p, and the FPS is the same at both resolutions. Not something you want. That is a CPU bottleneck. And it also shows why GPUs like the 3070 Ti and 3080, and higher, are designed for 1440p and 4K resolutions, not 1080p.
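That "same FPS, lower GPU usage" pattern falls straight out of the toy model from earlier (again, a sketch with hypothetical millisecond numbers): dropping resolution only shrinks the GPU's share of the frame, so once the CPU cost dominates, FPS stops improving.

cpu_ms = 16.0                      # fixed per-frame CPU cost
for label, gpu_ms in [("1440p-ish", 12.0), ("1080p-ish", 8.0)]:
    frame_ms = max(cpu_ms, gpu_ms)
    print(label, round(1000 / frame_ms), "fps,",
          round(100 * gpu_ms / frame_ms), "% GPU usage")
# 1440p-ish 62 fps, 75 % GPU usage
# 1080p-ish 62 fps, 50 % GPU usage

Both resolutions land at the same ~62 fps; only the GPU usage drops. Same FPS at a lower resolution, with lower usage, is the signature of being CPU-bound.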
A CPU is either the weakest link, preventing what would otherwise be more performance in a given moment, or it is not. The only additional consideration is how much of a limitation it may be.
I've read your posts on this before many, many times (I tend to write a lot myself at times so wordy posts don't scare me), and apologies if this comes off as dismissive, though they basically seem to come down to suggesting the idea that there's this pseudo-objective correct balance you have to strive for, with your anecdotal experiences of having witnessed and alleviated a bottleneck, and video examples of bottlenecks, as your reasoning.
I heavily disagree with the notion. The idea that there's an objective minimum you have to have goes against the facts in a world where performance balance varies just so much. Yes, these things obviously exist, and yes, it is important to be aware of them. There's no disagreement there. But you're missing the forest for the trees. Go read the post by Snakub Plissken as to what I mean. In what world do you go "I shouldn't improve my experience because my CPU doesn't perform as fast as a faster CPU"? In what world do you go "I shouldn't improve things by 65% because I'm not instead getting up to 85% like I would with a faster CPU"? In no world do you do that. If you're looking at spending a lot and only getting a mere fraction of what the GPU would otherwise perform at, then yes, your platform is too slow and needs some attention, but OP's arrangement would be nowhere near that bad off.
Worse, the entire idea behind being aware of these issues is to make sure you don't make such a woefully imbalanced configuration that you are spending poorly, right? Yet, then you realize that the small difference a faster platform might net you could increase your costs disproportionately compared to the returns. That's the exact OPPOSITE of making sure you don't spend poorly.
There's plenty of people running mid-upper end RTX 3000 level GPUs (like RTX 3060 Ti/RTX 3070) on CPUs with the per core performance of Coffee Lake/Zen 2. No, it's not objectively wrong just because a faster CPU would instead get faster performance.
Edit: Rereading what you wrote, I have to say you are completely missing the point. This isn't a situation where your CPU is just not as fast as another CPU and thus you will get less performance. What I am trying to point out is a situation where the CPU is simply not fast enough to allow the GPU to be utilized completely. "True" CPU bottleneck. I am not saying that this is what the OP will encounter, just trying to illustrate the point. It is real. Anyway, read on.
I am not saying it is wrong, and I get what you are trying to get at. And I know there are plenty of people that use an RTX 3060 Ti/3070/3070 Ti with Coffee Lake and Zen 2 processors. Again, unless you are playing at 1080p trying to match a high refresh rate, those processors will be fine to see the GPU's full potential. At 1080p, though, even with a 3060, you will run into CPU limitations. Which is still fine; for the most part, you are still going to be at a high enough FPS in most games that it won't matter.
But what I was trying to get at with my posts wasn't so much related to what the OP is trying to do. So, I know it is kind of pointless and a waste of people's time in a sense, since this thread should be about the OP. But I tend to do that a lot, lol. No, what I was trying to do was show the difference between a general system bottleneck and what I consider a true CPU bottleneck to be.
The way you are explaining it, and the justification you are giving, is more in tune with the sense that you will always have a system bottleneck, or a weak link so to speak, and that eliminating CPU bottleneck completely is impossible. In that sense you are right.
But what I am just trying to point out is that I felt the same way until I encountered a true CPU bottleneck when I bought my 3070 Ti. Again, this is not necessarily for the OP, or stating that he would have the same experience I had if he paired his i5 8400 with a 3070 Ti. Well, I still think he would at 1080p, and in some CPU intensive scenarios even at 1440p.
Anyways, I had always read about CPU bottleneck, where you see 100% CPU usage and are unable to get full GPU usage, but I had never actually seen it before. Like I said, I had always assumed that I had some CPU bottleneck because I would keep a CPU for multiple generations and upgrade GPUs several times. I got my i7 4770K right when it came out in 2013 and paired it with a GTX 780 that was coming out at the same time.
And I have been using the 3DMark Firestrike benchmark since even before then. In the time that I had my 4770K, I had the GTX 780 for two years, I upgraded to a GTX 980 Ti in 2015, in 2016 it got replaced by a GTX 1070, then I bought another 1070 and SLI'd them, then I had to sell my 1070s in 2020, and my friend gave me a 1070 Ti to put in my PC. I ran the Firestrike benchmark, which is rendered at 1080p, with all of those configurations, and I always had 100% GPU usage. Same thing in games: I was getting full GPU usage.
Last year, when I decided that I wanted to upgrade to a 30 series GPU, I figured that the 4770K wouldn't necessarily bottleneck the GPU that much. I figured maybe at 1080p I would see some, but I was playing at 4K so I didn't think it was going to matter much. Boy, was I wrong. First thing I ran was 3DMark Firestrike. The RTX 3070 Ti was getting maybe 50 to 60% usage, in some spots lower. I had never, ever seen a CPU bottleneck like this. Even in Firestrike Extreme, rendered at 1440p, I didn't get full usage. However, when I ran Firestrike Ultra, rendered at 4K, I got 100% GPU usage. But still, I knew right away that this was not an ideal setup. The 4770K bottlenecked the 3070 Ti like nothing I had ever seen.
Anyway, I know you have heard all this, and long story short, I built a new PC with a 10700K. And in Firestrike Extreme/Ultra, Time Spy, and Time Spy Extreme I get full GPU usage. I am also getting full GPU usage in all my games. I play at 4K, and if not, it is with DLSS or some other resolution scaling, so at the least 1440p or something like that, so the 10700K is great with the 3070 Ti. But as I have said, even in Firestrike, rendered at 1080p, the 3070 Ti still doesn't get 100%. And if I played CPU intensive games at 1080p, I would see a CPU bottleneck.
Again, lol, you've heard all this, but that was just my point. I was just trying to illustrate how severe a CPU bottleneck can be. I am not trying to deter the OP from getting a 3070 Ti, especially if he plans to upgrade. But even if he doesn't, if playing at 1440p or 4K, he may never see an issue. But if he played many CPU intensive games at 1080p, and he wasn't planning on upgrading, I would say that he would want to upgrade both CPU and GPU.
Because the point I am trying to make is that you do not want to pay several hundred dollars for a GPU and then only be able to get 50% usage out of it. That is a scenario where you see your new GPU, which is supposed to be twice as fast as your old GPU, and you are scratching your head wondering why the performance is not any better than your old GPU's.
So, again, I have written way too much, lol, but I was just trying to illustrate the difference between your CPU just not being as fast as other CPUs, and your CPU severely bottlenecking your GPU such that your GPU is not capable of running at 100%. I am also, in a sense, trying to point out how powerful these current generation GPUs are, and how powerful future generation GPUs will be. You need a fast CPU. But again, not trying to deter the OP in any way. To the OP, get that 3070 Ti already!
I just don't subscribe to the thought process that you HAVE to have a certain amount of CPU capability for a certain GPU, because there is no objective correct balance; the ratio varies so, so wildly. The "golden rule" I instead subscribe to is "get the best you can afford at the time". If you have an aged platform and go to do a GPU upgrade, I don't agree with the mindset that you shouldn't do it just because you'll get less performance than you would with a faster platform. I mean, that's an obvious given; faster stuff is faster. It in no way makes it necessary to have among the fastest platforms to make a GPU upgrade worthwhile, because by the time you swap that platform out you might be paying two or more times the cost for that extra fraction of a difference. Not everyone has that sort of disposable income to replace everything every few years, nor is it necessary.
I am sorry, but it is one thing to have a GPU on a platform that is just not the fastest and get acceptable performance, knowing you would get better performance on another platform, and another thing completely to have a platform that is simply not fast enough for a given GPU, such that you will not be able to get anything close to the performance out of it that you should. And you paid several hundred dollars.
I bought my i7 4770K in 2013. I didn't upgrade my CPU till 2021. That is 8 years of using the same CPU. In that time, it saw a GTX 780, a GTX 980 Ti, a GTX 1070, two GTX 1070s in SLI, a GTX 1070 Ti, and an RTX 3070 Ti. The i7 4770K didn't bottleneck any of those GPUs except the 3070 Ti, meaning that I was getting full GPU usage out of all of them with the 4770K. It wasn't until I bought the 3070 Ti that I saw a bottleneck so severe that it made keeping that combo not worthwhile. So it was time to build a new PC on a new platform.
So, I may upgrade GPUs every few years, but I don't upgrade the platform every few years. I made my last platform last me almost a decade. That platform saw many different levels of graphics. But it comes to a point where the platform is simply not fast enough anymore to drive said graphics. That is just the way it is. And I am sure most will agree with me on that one.
Again, reading what you wrote, I don't think you quite understand the concept I am trying to point out. What you are saying is that you don't need to have this perfect balance of parts. And you are right. And if you look at my past history of configurations you will see that I have never had a perfect balance of parts. I still don't. For some reason you think that is what I am saying, when clearly I am not. Again, there is a difference between having a platform slower than the best that can still drive your GPU, and having a platform that is simply too slow for said GPU, where the CPU will not be fast enough to prepare the frames that the GPU is rendering.
I guarantee you that is not the case with your 3700X and GTX 1060. There may be situations where that appears to be the case because of a game's coding or whatever. But go try a GPU intensive game; actually, go run 3DMark Firestrike. I guarantee you will see 100% GPU usage. My 4770K was severely bottlenecking my 3070 Ti. It was unacceptable to me. My 10700K still bottlenecks it at 1080p. That just shows me how powerful these GPUs are. "True" CPU bottleneck is real and it does exist. Not something you want.
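If you want to check this on your own system, here's a minimal Python sketch (assuming an NVIDIA card with the standard nvidia-smi tool on your PATH). Run a GPU-heavy game or benchmark uncapped, then poll utilization; sustained readings well below 100% point to a CPU bottleneck:

import subprocess
import time

# Sample GPU utilization once a second for ~30 seconds.
for _ in range(30):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    print(f"GPU utilization: {result.stdout.strip()}%")
    time.sleep(1)

You could just as well watch the graphs in MSI Afterburner or Task Manager; the point is simply to look at GPU usage while the game is actually running.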
A 3070 Ti isn't going to work fine with just any CPU, try pairing that with a Core 2 Duo or Turion and see how that works out.