A10 7870K or 7850K?
I'm just wondering which is better as a replacement for my A10 6700.
Originally posted by Etnopluralism:
If you have a dedicated graphics card you could go with the Athlon 860K, since that's somewhat cheaper because it lacks an integrated GPU.

All of those processors are slow, though. I doubt you'll be all that happy with the upgrade.

An A10 7870K + an R9 390 is a complete waste; replace both motherboard and processor if you're going to get a graphics card like that. New graphics cards may be just 2-3 months away, though. (New APUs may be just one month away, but for a different socket.)

The motherboard seems to be Micro-ATX size.

I'd suggest selling the one you have and getting a used i5 2500, i7 2700K, i7 3770K, i5 4460, i5 4690K, i7 4770K or i7 4790K, or buying a new i3 6100 or better (like the i5 6500 or i5 6600K), and then using whatever graphics card you want with it: a GTX 750 Ti, GTX 950, R9 380, GTX 970, R9 390, Fury Nano or 980 Ti. More likely, though, wait for the new generation and get one of those.

An A10 processor likely costs more than a used i5 2500K with a motherboard? Or about the same, and the latter is better.
If you just get a used motherboard and processor, though, your case may only hold Micro-ATX motherboards. If you sell the machine and get something else, that's less of a problem.

The A10 may not be worth much used, but then again you don't need to pay much for a used i3 or i5 system either, so...

So could I get the 860K and R9 380 for now, then later just get a new motherboard and i5 4690K?
Originally posted by Readrd:
Originally posted by Rove:
It would be pretty good, but I strongly urge you not to replace the A10-6700; test with all the other parts first to see if it's good enough. There might be no point in getting the Athlon 860K.

You can only view 60 FPS on a standard 60 Hz monitor.

I'm trying to see what the APU graphics are like, but I only get 1 FPS when it's meant to be better than the dedicated GPU I have. Why is this?
Here are some pointers, though, from someone who built PCs to spec for clients.

If you're going to be doing Word documents and YouTube videos, AMD APUs are the way to go. They're cheap, they're still reliable, and by the time they need to be replaced, the machine will be outdated anyway.

For gaming or graphics design, Intel is really the only suitable option. Even water-cooled and overclocked, AMD is a mid-range processor at best. There is literally no other option right now other than Intel. If you are brand loyal, then you need to wait till the end of the year. The leaked information suggests that the new AMD processors will be equivalent to 4th-gen i-series processors. But no guarantee, because that's not from the Chairwoman's mouth.

Graphics card-wise, it depends on a few things.

1. Do you play AAA titles or smaller games?
-> Larger titles are almost always Nvidia-optimized and will therefore run better with Nvidia. However, the moment you get OpenCL or other open-source options involved, you start having issues with Nvidia. Performance tests also show that AMD GPUs are really the best bet for smaller titles using open-source technology.

2. The monitor size?
-> Again, we get into limited bandwidth. This has only recently become an issue. Nvidia has traditionally had a smaller memory bus on their GPUs. While this works perfectly well for your average 2K display, 4K is another issue entirely. AMD at this point has no competition because their GPUs have more memory bandwidth than they are capable of using. Performance tests in this area also show that even in some Nvidia-optimized games, AMD will still perform better in many instances at 4K.

EDIT: Didn't even consider RAM. Just assumed that since the performance of my 8-year-old CPU is comparable to that 860K, the integrated GPU would also be on an 8-year-old chipset.
Last edited by Mystic Referee; 3 Apr, 2016 @ 15:53
Rove 3 Apr, 2016 @ 15:49
Originally posted by Readrd:
Originally posted by Rove:
It would be pretty good, but I strongly urge you not to replace the A10-6700; test with all the other parts first to see if it's good enough. There might be no point in getting the Athlon 860K.

You can only view 60 FPS on a standard 60 Hz monitor.

I'm trying to see what the APU graphics are like, but I only get 1 FPS when it's meant to be better than the dedicated GPU I have. Why is this?

Like I told you, it's probably because of your RAM:
1. You don't have enough RAM.
2. Your RAM is too slow and is likely running in single-channel mode or something.

Anyhow, the Athlon 860K is not going to offer a significant enough upgrade over the A10-6700 to justify the cost, so I strongly suggest not getting it. A new graphics card is going to be the best investment for improving your performance; buy a great graphics card, then queue up your CPU as the next pending upgrade in a few years. The A10-6700 is still plenty good as a CPU. The integrated graphics are not great, but with proper RAM they should be better than your current graphics card.
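For a sense of why single vs dual channel matters so much to an APU, here's a rough peak-bandwidth comparison (a minimal sketch assuming standard DDR3 with 64-bit channels; real-world throughput is lower than these theoretical peaks):

```python
# Theoretical peak DDR3 bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit channel.
def ddr3_bandwidth_gbs(transfer_rate_mts, channels):
    """Peak bandwidth in GB/s for DDR3 at a given transfer rate and channel count."""
    return transfer_rate_mts * 8 * channels / 1000  # 8 bytes per transfer per channel

# Single vs dual channel at DDR3-1866 (the kit suggested below):
single = ddr3_bandwidth_gbs(1866, channels=1)  # ~14.9 GB/s
dual   = ddr3_bandwidth_gbs(1866, channels=2)  # ~29.9 GB/s

print(f"single channel: {single:.1f} GB/s, dual channel: {dual:.1f} GB/s")
# An APU's integrated GPU shares this bus with the CPU, so running
# single-channel (or slow DIMMs) hits graphics performance directly.
```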

Here's what to do:

Buy this right away with your 100 GBP and put whatever's left over into the bank for later:

Kingston HyperX Fury Red 16GB (2 x 8GB) DDR3-1866 Memory
Thermaltake Versa H24 ATX Mid Tower Case
http://uk.pcpartpicker.com/p/HFYdNG
Total: £86.66

That will allow you to test out your integrated graphics properly. They should be slightly more than twice as good as your current dedicated graphics card. With your monitor plugged into the motherboard you may also be able to enable CrossFire (if compatible) with your current graphics card. If CrossFire is not available, remove the HD 8450 and store, give away or sell it.

Next, when you get your 300 GBP, add the ~13 GBP you banked earlier and buy either this:

PowerColor Radeon R9 390 8GB PCS+ Video Card
Deepcool 750W 80+ Gold Certified ATX Power Supply
http://uk.pcpartpicker.com/p/JpCmVn
Total: £309.54

OR this:

Seagate 1TB 3.5" 7200RPM Hybrid Internal Hard Drive
Asus Radeon R9 380X 4GB Video Card
Deepcool 550W 80+ Gold Certified ATX Power Supply
http://uk.pcpartpicker.com/p/QTt3f7
Total: £302.29

Of course, prices may change, so check back on the forums when you have the second round of money and ask about GPU upgrades, PSU upgrades and anything else worth upgrading.

I basically already posted this same recommendation in different words earlier, and I just noticed one of the links was messed up, so I'm going to fix it.
Last edited by Rove; 3 Apr, 2016 @ 15:54
Originally posted by Readrd:
So could I get the 860K and R9 380 for now, then later just get a new motherboard and i5 4690K?
By then you'd likely rather get the i5 6600K (today, if buying new) or a Zen processor (say in October, or 2017, or whenever they end up launching).

Seems like the A10 6700 was a 3.7 GHz quad-core chip.
The Athlon 860K is a 3.7 GHz quad-core chip too.
The 7850K is a 3.7 GHz quad-core chip.
The 7870K is a 3.9 GHz quad-core chip.

Such a difference between all of them? ...
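To put numbers on that, a quick sanity check on the listed base clocks (a minimal sketch; it compares only base frequencies, so any larger gap reported by benchmark sites would have to come from IPC, turbo behaviour or memory, not clocks):

```python
# Relative base-clock difference between the chips listed above.
chips = {"A10-6700": 3.7, "Athlon 860K": 3.7, "A10-7850K": 3.7, "A10-7870K": 3.9}

baseline = chips["A10-6700"]
for name, ghz in chips.items():
    delta = (ghz - baseline) / baseline * 100
    print(f"{name}: {ghz} GHz ({delta:+.1f}% vs A10-6700)")
# The 7870K's 3.9 GHz is only ~5.4% above the 6700's 3.7 GHz,
# so clock speed alone cannot explain a 20% performance gap.
```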

I don't know how large the performance increase is; it seems to be slim going from the 6700 to the 7850K:
http://cpuboss.com/cpus/AMD-A10-7850K-vs-AMD-A10-6700
(Except for the integrated graphics?)
The 7870K is a bit (20%?) faster and uses less power than the 6700?
http://cpuboss.com/cpus/AMD-A10-7870K-vs-AMD-A10-6700

860K vs 7870K:
http://cpuboss.com/cpus/AMD-Athlon-X4-860K-vs-AMD-A10-7870K

I don't think you should bother, really. They are all similar and weak regardless :/

I still assume you could get an i5 2500 system for about the same money, and that will be faster:
http://cpuboss.com/cpus/Intel-Core-i5-2500K-vs-AMD-A10-7870K

I don't think you should waste your money on such a small upgrade :/
Either get something noticeably better or don't bother, I'd say.

Where do you live? Any chance you could get hold of a used Micro-ATX motherboard and an i5 processor, for instance? Or would you have an interest in selling what you have and getting some better used system from someone else?
upcoast 3 Apr, 2016 @ 18:00
I don't know how else to explain it, but here goes: A10 6700 to 860K is a waste of money and time.

Beef up the GPU (and PSU if needed) with an R9 380/380X or GTX 960; the R9 390/GTX 970 only if you really are going to swap in the upper i5/i7.

PS: the RAM, 2 GB + 4 GB, makes 6 GB in single-channel mode.

Last edited by upcoast; 3 Apr, 2016 @ 18:06
Originally posted by Kid Ayy:
1. Do you play AAA titles or smaller games?
-> Larger titles are almost always Nvidia-optimized and will therefore run better with Nvidia. However, the moment you get OpenCL or other open-source options involved, you start having issues with Nvidia. Performance tests also show that AMD GPUs are really the best bet for smaller titles using open-source technology.

2. The monitor size?
-> Again, we get into limited bandwidth. This has only recently become an issue. Nvidia has traditionally had a smaller memory bus on their GPUs. While this works perfectly well for your average 2K display, 4K is another issue entirely. AMD at this point has no competition because their GPUs have more memory bandwidth than they are capable of using. Performance tests in this area also show that even in some Nvidia-optimized games, AMD will still perform better in many instances at 4K.

EDIT: Didn't even consider RAM. Just assumed that since the performance of my 8-year-old CPU is comparable to that 860K, the integrated GPU would also be on an 8-year-old chipset.
1) I'd rather say it seems like the R9 390 and the R9 380 have the performance/$ advantage for now, but they use a bit more power, with the R9 390 being the more extreme card in both respects. I doubt the problem is "optimization", especially since the supposedly smaller titles run better on AMD cards. Rather, the huge titles get GameWorks added onto them, which is Nvidia middleware that adds physics, shading, lighting and hair effects, but does so in ways that hurt performance less on Nvidia cards than on AMD cards; hence, once it's added, the AMD cards fall further behind as long as that stuff is enabled. The smaller titles may not have GameWorks added, so that never happens to them.

Nvidia just open-sourced parts of it, and AMD have likely learned their lesson about letting Nvidia abuse them with heavy tessellation (which AMD supported first); maybe they had to accept the situation and increase tessellation support in their next line of graphics cards, who knows.

2) In the case of the R9 390, most of the cards simply have 8 GB of VRAM vs 4 GB (of which only 3.5 GB run at a decent speed) on the GTX 970. That may help with WQHD and 4K games, and it does in many titles.
On the other hand, in many cases one could argue that the cards with the lower amount (4 GB in that tier, 2 GB in the one below, and 1 GB in the low end) still have enough, because if they use larger textures at higher resolutions they may not have the horsepower to run the game at a decent frame rate anyway, whether they have more or less VRAM.
But yeah, the 8 GB R9 390 may do WQHD gaming better than the 4 GB GTX 970.

As for graphics cards, I would kind of wait if I were looking at something like the R9 390. The R9 380 is at least cheaper and not one of the most expensive cards, so if you're desperate for a new one, then maybe that. But what if new ones are just two months away?!
Originally posted by Etnopluralism:
1) I'd rather say it seems like the R9 390 and the R9 380 have the performance/$ advantage for now, but they use a bit more power, with the R9 390 being the more extreme card in both respects. I doubt the problem is "optimization", especially since the supposedly smaller titles run better on AMD cards. Rather, the huge titles get GameWorks added onto them, which is Nvidia middleware that adds physics, shading, lighting and hair effects, but does so in ways that hurt performance less on Nvidia cards than on AMD cards; hence, once it's added, the AMD cards fall further behind as long as that stuff is enabled. The smaller titles may not have GameWorks added, so that never happens to them.

Nvidia just open-sourced parts of it, and AMD have likely learned their lesson about letting Nvidia abuse them with heavy tessellation (which AMD supported first); maybe they had to accept the situation and increase tessellation support in their next line of graphics cards, who knows.
That's kind of what I was getting at with "optimization". But fair point, AMD has been getting kicked around with tessellation.

Originally posted by Etnopluralism:
2) In the case of the R9 390, most of the cards simply have 8 GB of VRAM vs 4 GB (of which only 3.5 GB run at a decent speed) on the GTX 970. That may help with WQHD and 4K games, and it does in many titles.
On the other hand, in many cases one could argue that the cards with the lower amount (4 GB in that tier, 2 GB in the one below, and 1 GB in the low end) still have enough, because if they use larger textures at higher resolutions they may not have the horsepower to run the game at a decent frame rate anyway, whether they have more or less VRAM.
But yeah, the 8 GB R9 390 may do WQHD gaming better than the 4 GB GTX 970.

As for graphics cards, I would kind of wait if I were looking at something like the R9 390. The R9 380 is at least cheaper and not one of the most expensive cards, so if you're desperate for a new one, then maybe that. But what if new ones are just two months away?!

VRAM amount has nothing to do with it. I'm referring to a 256-336-bit (Nvidia) vs. 512-4096-bit (AMD) bus width. In other words, Nvidia just doesn't have the bus width to keep up with AMD at 4K and over.

Basically, Nvidia put 8 GB of GDDR5 on about 8 channels, while AMD put 4 GB of GDDR5 on 16 channels. So if you're trying to push that 8 GB of GDDR5 on an Nvidia chipset, it's going to bottleneck, but that 4 GB on the AMD card will be used much more efficiently. Moving lots of pixels down those channels is going to be more effective over 16 channels than 8, regardless of how much VRAM is there. Which is why there is usually less performance loss at higher resolution with AMD. Nvidia is kind of choking its GPUs.
Last edited by Mystic Referee; 3 Apr, 2016 @ 18:22
_I_ 3 Apr, 2016 @ 18:45
AMD and Nvidia GPUs work completely differently;
you cannot compare their specs by the numbers like that.

The GTX 960 and R9 380 are very close in performance, and the numbers are all over the place:
http://www.hwcompare.com/19768/geforce-gtx-960-vs-radeon-r9-380-2g/

And the 980 Ti is a lot faster than a 390X:
http://www.hwcompare.com/20369/geforce-gtx-980-vs-radeon-r9-390x-8g/
Fluffy 3 Apr, 2016 @ 18:49
Originally posted by _I_:
AMD and Nvidia GPUs work completely differently;
you cannot compare their specs by the numbers like that.

The GTX 960 and R9 380 are very close in performance, and the numbers are all over the place:
http://www.hwcompare.com/19768/geforce-gtx-960-vs-radeon-r9-380-2g/

And the 980 Ti is a lot faster than a 390X:
http://www.hwcompare.com/20369/geforce-gtx-980-vs-radeon-r9-390x-8g/

He didn't; he's talking about the memory bus for 4K GAMING, and he's correct: memory channels and bus width become very important at higher resolutions.
Originally posted by _I_:
AMD and Nvidia GPUs work completely differently;
you cannot compare their specs by the numbers like that.

The GTX 960 and R9 380 are very close in performance, and the numbers are all over the place:
http://www.hwcompare.com/19768/geforce-gtx-960-vs-radeon-r9-380-2g/

And the 980 Ti is a lot faster than a 390X:
http://www.hwcompare.com/20369/geforce-gtx-980-vs-radeon-r9-390x-8g/

We were actually going over the information you just provided: the differences in how they perform at certain levels. We were reviewing the significance of those and where one might expect those limitations to manifest. Also, we were going a bit more in depth than those numbers.

But yeah, it's been shown that at standard resolutions the 980 Ti typically outperforms the 390X. The only exceptions are games obsessed with moving large files into the GPU at alarming rates, and I can't name any off the top of my head right now.
Last edited by Mystic Referee; 3 Apr, 2016 @ 18:56
vadim 3 Apr, 2016 @ 19:13
Originally posted by Kid Ayy:
VRAM amount has nothing to do with it. I'm referring to a 256-336-bit (Nvidia) vs. 512-4096-bit (AMD) bus width. In other words, Nvidia just doesn't have the bus width to keep up with AMD at 4K and over.
That isn't true. First, bus width by itself means absolutely nothing. You need to calculate memory bandwidth, which equals effective clock rate * bus width.
The GTX 980 Ti has a 384-bit bus (there cannot be a "336-bit bus"; that number is not divisible by 32) and a 7 GHz effective clock rate. 384 * 7 / 8 (because 1 byte == 8 bits) = 336 GB/s.
The AMD 290X, meanwhile, has a 512-bit bus and a 5 GHz clock rate (I will omit the word "effective" from here on).
512 * 5 / 8 = 320 GB/s. I.e. the bus is 33% wider, but the bandwidth is lower.
Let's look at Fiji: its VRAM clock rate is only 1 GHz.
1 * 4096 / 8 = 512 GB/s. Only about 50% more bandwidth.
But that isn't all. There are several techniques that reduce the bandwidth needed, such as compression and caching. Delta color compression, for instance, is almost the same on AMD and Nvidia GPUs, while Maxwell has more L2 (last-level) cache, which lets it decrease VRAM bus usage. And so on...
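Here's that arithmetic as a small script (a minimal sketch using the clock and bus figures quoted above; actual memory clocks vary by board partner):

```python
# Peak VRAM bandwidth = effective memory clock (GHz) x bus width (bits) / 8 bits per byte.
def vram_bandwidth_gbs(effective_clock_ghz, bus_width_bits):
    """Theoretical peak memory bandwidth in GB/s."""
    return effective_clock_ghz * bus_width_bits / 8

cards = {
    "GTX 980 Ti (GDDR5)": (7.0, 384),   # 7 GHz effective, 384-bit bus
    "R9 290X (GDDR5)":    (5.0, 512),   # 5 GHz effective, 512-bit bus
    "Fury/Fiji (HBM)":    (1.0, 4096),  # 1 GHz effective, 4096-bit bus
}

for name, (clock, bus) in cards.items():
    print(f"{name}: {vram_bandwidth_gbs(clock, bus):.0f} GB/s")
# 980 Ti: 336 GB/s, 290X: 320 GB/s, Fiji: 512 GB/s.
# A wider bus only wins if the clock rate doesn't give the advantage back.
```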
Originally posted by Kid Ayy:
So if you're trying to push that 8 GB of GDDR5 on an Nvidia chipset, it's going to bottleneck, but that 4 GB on the AMD card will be used much more efficiently. Moving lots of pixels down those channels is going to be more effective over 16 channels than 8, regardless of how much VRAM is there. Which is why there is usually less performance loss at higher resolution with AMD.
Another misconception. Nobody ever needs to read or write all of VRAM, except during memory tests. And the pixels themselves occupy only dozens of MB in VRAM, at any resolution.
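To back up that last point, a quick framebuffer estimate (a sketch assuming 4 bytes per pixel; real engines keep several intermediate render targets, but the order of magnitude holds):

```python
# Size of one 32-bit RGBA framebuffer at common resolutions.
def framebuffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    single = framebuffer_mb(w, h)
    # Even with front, back and depth buffers (4 buffers total), it's modest:
    print(f"{name}: {single:.1f} MB per buffer, ~{4 * single:.0f} MB for 4 buffers")
# A single 4K buffer is ~32 MB; textures and geometry, not pixels,
# are what fill multi-GB VRAM.
```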
Fluffy 3 Apr, 2016 @ 19:20
Originally posted by vadim:
Originally posted by Kid Ayy:
VRAM amount has nothing to do with it. I'm referring to a 256-336-bit (Nvidia) vs. 512-4096-bit (AMD) bus width. In other words, Nvidia just doesn't have the bus width to keep up with AMD at 4K and over.
That isn't true. First, bus width by itself means absolutely nothing. You need to calculate memory bandwidth, which equals effective clock rate * bus width.
The GTX 980 Ti has a 384-bit bus (there cannot be a "336-bit bus"; that number is not divisible by 32) and a 7 GHz effective clock rate. 384 * 7 / 8 (because 1 byte == 8 bits) = 336 GB/s.
The AMD 290X, meanwhile, has a 512-bit bus and a 5 GHz clock rate (I will omit the word "effective" from here on).
512 * 5 / 8 = 320 GB/s. I.e. the bus is 33% wider, but the bandwidth is lower.
Let's look at Fiji: its VRAM clock rate is only 1 GHz.
1 * 4096 / 8 = 512 GB/s. Only about 50% more bandwidth.
But that isn't all. There are several techniques that reduce the bandwidth needed, such as compression and caching. Delta color compression, for instance, is almost the same on AMD and Nvidia GPUs, while Maxwell has more L2 (last-level) cache, which lets it decrease VRAM bus usage. And so on...
Originally posted by Kid Ayy:
So if you're trying to push that 8 GB of GDDR5 on an Nvidia chipset, it's going to bottleneck, but that 4 GB on the AMD card will be used much more efficiently. Moving lots of pixels down those channels is going to be more effective over 16 channels than 8, regardless of how much VRAM is there. Which is why there is usually less performance loss at higher resolution with AMD.
Another misconception. Nobody ever needs to read or write all of VRAM, except during memory tests. And the pixels themselves occupy only dozens of MB in VRAM, at any resolution.


This depends on which Nvidia and AMD chips you are comparing... and you did the exact same thing he did with the Fiji card: with HBM, 4096 bits / 8 = 512 GB/s, "only about 50% more", leaving out all the other factors that determine the performance of HBM memory.

http://www.amd.com/en-us/innovations/software-technologies/hbm

Originally posted by vadim:
Originally posted by Kid Ayy:
VRAM amount has nothing to do with it. I'm referring to a 256-336-bit (Nvidia) vs. 512-4096-bit (AMD) bus width. In other words, Nvidia just doesn't have the bus width to keep up with AMD at 4K and over.
That isn't true. First, bus width by itself means absolutely nothing. You need to calculate memory bandwidth, which equals effective clock rate * bus width.
The GTX 980 Ti has a 384-bit bus (there cannot be a "336-bit bus"; that number is not divisible by 32) and a 7 GHz effective clock rate. 384 * 7 / 8 (because 1 byte == 8 bits) = 336 GB/s.
The AMD 290X, meanwhile, has a 512-bit bus and a 5 GHz clock rate (I will omit the word "effective" from here on).
512 * 5 / 8 = 320 GB/s. I.e. the bus is 33% wider, but the bandwidth is lower.
Let's look at Fiji: its VRAM clock rate is only 1 GHz.
1 * 4096 / 8 = 512 GB/s. Only about 50% more bandwidth.
But that isn't all. There are several techniques that reduce the bandwidth needed, such as compression and caching. Delta color compression, for instance, is almost the same on AMD and Nvidia GPUs, while Maxwell has more L2 (last-level) cache, which lets it decrease VRAM bus usage. And so on...
Originally posted by Kid Ayy:
So if you're trying to push that 8 GB of GDDR5 on an Nvidia chipset, it's going to bottleneck, but that 4 GB on the AMD card will be used much more efficiently. Moving lots of pixels down those channels is going to be more effective over 16 channels than 8, regardless of how much VRAM is there. Which is why there is usually less performance loss at higher resolution with AMD.
Another misconception. Nobody ever needs to read or write all of VRAM, except during memory tests. And the pixels themselves occupy only dozens of MB in VRAM, at any resolution.

More or less, that's what I was getting at: your average DovahJoe with 20 GB of 4K texture mods in Skyrim displaying on a 2K monitor. He's not pushing 4K textures constantly, only what needs to be loaded; the rest sits in memory until purged. The 2K picture is what's being spit out 60 times a second.

But you do seem to have a better understanding of this.

I'm curious, though, if you know: Nvidia uses a lot of compression to handle their smaller bus widths more effectively. I don't know a whole lot about that. Is it always 100% effective?
vadim 3 Apr, 2016 @ 19:36
Originally posted by Kid Ayy:
Nvidia uses a lot of compression to handle their smaller bus widths more effectively. I don't know a whole lot about that. Is it always 100% effective?
Sorry, I'm a CUDA programmer, not a 3D programmer. CUDA programs use textures to hold matrices, so I know about texture compression, but I've never used color compression and can't say anything about its effectiveness.
Originally posted by vadim:
Originally posted by Kid Ayy:
Nvidia uses a lot of compression to handle their smaller bus widths more effectively. I don't know a whole lot about that. Is it always 100% effective?
Sorry, I'm a CUDA programmer, not a 3D programmer. CUDA programs use textures to hold matrices, so I know about texture compression, but I've never used color compression and can't say anything about its effectiveness.

That definitely explains your knowledge of the subject. Thanks for chiming in with that tidbit and correcting any misconceptions.

Date posted: 29 Mar, 2016 @ 16:44
Posts: 95