Sapphire Radeon NITRO+ RX 470 4GB OC 11256-01-20G Review

Benchmarks, Performance, Temperatures and Power Consumption

Benchmarks don’t come from a video card alone; you need the system around it, at least as a point of reference. Here are my system specs.

Here are the specs of the Sapphire Nitro RX470 as reported by TechPowerUp’s GPU-Z.


Yeah I know, it is the 4Gig card, but it’s what Sapphire sent me, so this is what I have to work with.  I have found the Sapphire Radeon RX470 8Gig card at the same price as the 4Gig card at times, and at other times only about $20 more, so it’s not a bad deal either way.

Aside from GPU-Z, Sapphire has its own utility named TRIXX 3.0.  I will go into greater detail on this software a little later in the review, but I want to show you how Sapphire reports this card as well.


Both utilities show that the card is based on AMD’s Ellesmere GPU, which is part of AMD’s next-generation “Arctic Islands” FinFET GPUs.

This card has 2048 shaders (processing cores), while the Sapphire Radeon NITRO+ RX480 OC card I reviewed earlier has 2304; this is one of the only differences between the cards.  The 480 I reviewed also had 8Gigs of RAM where this 470 has 4Gigs, but again, Sapphire offers an 8Gig model as well, bringing the 480 and 470 closer together.   Remember this, as it comes into the picture a little later in the review.

For this review I used AMD’s Crimson 16.10.1 drivers.  There are newer ones, but I wanted to keep things as similar as possible between the reviews.  It is still important to update drivers as they come out, since each release fixes many things and can bring better performance as well.

Ok, so we are about ready for the benchmarks now, but before I get into that I wanted to let you know a bit more about how I review.  During benchmarking the card requires more power than at idle, so to give you a better idea of what kind of power this card draws, I measure it with the “Kill A Watt” by “P3 International”.  I record Minimum, Average and Max power consumption, with the Average being the most important.
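The Kill A Watt is read by eye during a run, but the three figures boil down to simple arithmetic over the readings. A minimal sketch of how the Minimum, Average and Max numbers are derived, using hypothetical sample values (the real values come off the meter’s display, not from software logging):

```python
# Hypothetical wattage readings noted from the Kill A Watt during a benchmark run.
samples_watts = [174, 340, 351, 347, 360, 372, 349]

minimum = min(samples_watts)                       # lowest reading seen
average = sum(samples_watts) / len(samples_watts)  # the figure I weigh most heavily
maximum = max(samples_watts)                       # peak reading seen

print(f"Min: {minimum}W  Avg: {average:.0f}W  Max: {maximum}W")
```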


The applications I use to benchmark are the following.

Let’s get things started with 3DMark.




At stock speeds, this is a decent score, and you can see in the run detail that this result was better than 76% of all other results.  The lowest wattage reached while benchmarking was 174Watts, the average power consumption was 347Watts and the maximum wattage pulled was 372Watts.  The max temperature the GPU reached was 60°C, and this was using only the stock fan profile on the card; TRIXX 3.0 was not running during this or any of the other stock benchmarks.

While 3DMark is a great way to gauge some of a video card’s performance, it’s not a game; you cannot play 3DMark.  So let’s move on to some games, starting with Metro Last Light.


Here are my presets for Metro Last Light.  For each benchmark I will present the presets, changing only the resolution between 2560×1440, 1920×1080 and 1280×1024.



Metro Last Light is known as a video card killer if you pump all the settings up to Ultra like I have here.  All games will be benchmarked with all of the eye candy maxed out so you can see what the card can do; keep in mind that if you lower the eye candy you will get better performance.  Later in the review I will show you gameplay as well, so that you can compare the benchmarks against actual gameplay.

At 2560×1440 we are getting an average of 19.52FPS, which is by no means playable, but at 1920×1080 we see a 44.62% improvement in performance at 30.73FPS.  Since we lowered the resolution, resulting in less strain on the card, we actually pulled 2Watts less on average and with that dropped 1°C.  Between 1920×1080 and 1280×1024 we see another 33.80% improvement in performance, and power draw lowered by 9.41%.
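A note on the math for anyone checking my percentages: the improvement figures in this review are percent differences taken relative to the average of the two results, rather than relative to the slower result alone. A quick Python sketch reproduces the numbers above:

```python
def pct_diff(a: float, b: float) -> float:
    """Percent difference between two results, relative to their average."""
    return abs(a - b) / ((a + b) / 2) * 100

# Metro Last Light average FPS, 2560x1440 vs 1920x1080:
print(round(pct_diff(19.52, 30.73), 2))  # → 44.62

# The same formula reproduces the THIEF figure later in the review:
print(round(pct_diff(53.0, 79.6), 2))    # → 40.12
```

It matches the Tomb Raider figure too: 61.4FPS vs 90.8FPS works out to 38.63%.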

It might not be a great indication of performance, but I think it’s a good way to get things started.  Let’s see how much of that performance was stolen by THIEF.




There is not one unplayable result here in THIEF.  To start off we are looking at 2560×1440: at this resolution we are seeing 53FPS on average, and while that’s not 60FPS, I would find it very difficult if not impossible to tell the difference.  At 1920×1080 we see 79.6FPS, a 40.12% increase in performance over 2560×1440, with 2560 pulling only 1.89% more power at only a 1°C difference in temperature.  At 1280×1024 we see a further 22.12% increase at 99.4FPS, though oddly the power draw increased 1.89% along with a 7.09% increase in temperature on the 1280 side.

If your monitor can handle 2560×1440, you would have no problem at all running with all the eye candy turned on as I have in these results.  Now, this game is a little dark, so we can’t fully gauge the eye candy, but Tomb Raider is much brighter with a lot of beautiful scenery, so let’s check that out.




Yet again, the Sapphire Radeon NITRO+ RX470 OC keeps the results 100% playable.  At 2560×1440 we are seeing 61.4FPS on average, and at 1920×1080 we see 90.8FPS, a 38.63% increase in performance.  2560 pulls an additional 5Watts of power on average yet actually runs 2°C cooler.  Between 1920×1080 and 1280×1024 we see a 25.13% increase, though with a 2°C rise in temperature and a 5.07% decrease in average power consumed.

Let’s see what Ashes of the Singularity says about this; it is a little harder on video cards.


Ashes of the Singularity is a DX12 game if you didn’t know, but there is a little trick to get it to run in DX12.  In Steam, when you are about to double click the game to start it, instead right click on the title.


On the drop down, click “Launch DirectX 12 Version (Windows 10 Only)”; as the name implies, this will only work in Windows 10.  After that, once it is loaded, here are my settings for your reference.


Also, you know you are benching in DX12 when you get this message after you click “Benchmark”.


Notice under “API:” it reads “DirectX 12”.


As you can see, this game heavily utilizes the CPU and picks up where the GPU might be lacking, though it seems that no matter what, the CPU dominated.  Watching this game run at all 3 resolutions (2560 did flicker a bit), the game ran smoothly, though there were some spots where epic planetary battles occurred.

At 1920×1080, we can see that the GPU performed 19.32% better than at its higher resolution benchmark of 2560×1440.  1920×1080 pulled 0.55% less power and the temperature decreased by 1°C.  Also, if you notice, at 2560×1440 the CPU pushed a little harder to compensate: 0.38% harder.  At 1280×1024, we can see that it performed 22.03% better than 1920×1080, and the CPU also helped a lot more, 8.62% more.

Ok, I had a conversation with Tom Clancy and he feels that The Division might provide more information.


Here are the settings I used.



The Division also shows you how much CPU it utilizes, and you can see here that utilization is relatively low compared to Ashes of the Singularity.  Between the 2560 and 1920 resolutions there is a 41.87% improvement.  At 1280×1024 performance improved by 21.39% over 1920×1080, though oddly enough, at the lower resolution power consumption increased by 13.78% and the temperature also rose 4°C.

1920×1080 was incredibly close to 60FPS and with that ran very smoothly.  At 2560×1440 there was an occasional chug, but that’s how averages are built.  Rather than discussing it further, I will show you how the game performed, as well as a few others.
