Sapphire Radeon NITRO+ RX460OC 4GB 11257-02-20G Review

Benchmarks, Performance, Temperatures and power consumption

We are influencers and brand affiliates.  This post contains affiliate links, most of which go to Amazon and are Geo-Affiliate links to the nearest Amazon store.

While the video card is a budget card, the system is a little higher end, but this actually might be your scenario.  You might have bought the perfect motherboard, processor and memory… but now you don’t have enough money for a GPU.  Well, this might be the best card to tide you over until you can get the latest and greatest… or maybe you will choose to stick with it.

Anyway, check out my system specs to put your own performance with this card into perspective:

Ok, with all that, here are the specs of the card itself as displayed in TechPowerUp’s GPU-Z.


Sapphire’s own utility TRIXX 3.0 provides a lot of the same information.


I will use this program later on to overclock the card and perform a few other actions, plus go into a lot more detail about the program so that you know how to use it.  You can grab a copy of Sapphire’s TRIXX 3.0 here:

The Sapphire Radeon NITRO+ RX460OC 4GB card is based on AMD’s Baffin chipset, part of the Polaris line.  This chipset is built on AMD’s latest and greatest 14nm FinFET process, which greatly improves performance and lowers the power requirements of the cards in its series, a very welcome set of features.

For some reason, GPU-Z would not show the number of shaders, but thankfully TRIXX does; you can see them above.

Next up are the benchmarks, but first I want to explain my testing process.  To begin with, all of my benchmarks have wattage consumption listed.  I test for Minimum, Average and Max power usage using the “Kill A Watt” by “P3 International”, a very handy little tool not only for testing PC power usage, but for seeing how much power anything that plugs into a wall uses, which can help you lower your power bill.
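Since every benchmark below reports Minimum, Average and Max wattage, here is a minimal sketch of how those three figures summarize a run’s readings (the sample values are hypothetical, not actual readings from this review):

```python
# Hypothetical wattage readings noted from the Kill A Watt during one benchmark run
readings = [155, 198, 242, 260, 301, 244]

minimum = min(readings)
average = sum(readings) / len(readings)
maximum = max(readings)

print(f"Min: {minimum}W  Avg: {average:.0f}W  Max: {maximum}W")
```

The Max in particular can be a short blip rather than sustained draw, which is why I call those out separately in the results.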


The programs I am using to benchmark are the following.

Later on, I will also show you some gameplay from a few games, but let’s start benchmarking.




Budget GPU or not, the Sapphire Radeon RX460OC pulls in 31% better than all other results; that’s not bad at all.  During the test, the lowest the power usage hit was 152 Watts, the average was 238 Watts and the max, a brief blip, was 312 Watts.  The GPU hit 61°C, at which point the fans turned on, but they are very quiet; I will show you how loud they are a little later in the review.

Sadly, the scores could not be made official because the drivers are not WHQL certified.  The base drivers were, but when AMD ports them over to Crimson for performance, they usually do not send them off to Microsoft for WHQL certification, since Microsoft charges for that.

Ok, let’s check out Metro Last Light.


Here are my Metro Last Light presets; for each benchmark I will present the presets, changing only the resolution.


The only difference between “Preset 2”, “Preset 0” and “Preset 1” is changing the resolution between 2560×1440, 1920×1080 and 1280×1024.  Below are the results from each.


The performance here looks very rough, but you may be surprised to know that Metro taxes even the highest-end cards, so actually it’s not too bad here.  Mind you, at the settings I have laid out here it is 100% not playable, but if you were to turn down the eye candy, surely it would be playable.

From 2560×1440 to 1920×1080, we can see that the frame rate increases by 5.67FPS, a 43.07% improvement.  Going from 1920 to 1280, we see a 6FPS increase, an improvement of 31.58%.  From 1280 up to 2560, we can see increases in both power consumption and temperature: a 21.49% increase in temperature and a 16.57% increase in power consumption.
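As an aside, the improvement percentages quoted throughout this review are just the relative change between two benchmark averages; here is a minimal sketch of that arithmetic (the FPS values are hypothetical, not this review’s actual readings):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from old to new, as a percentage."""
    return (new - old) / old * 100.0

# Hypothetical FPS averages at two resolutions
fps_1440p = 20.0
fps_1080p = 28.0
print(f"{pct_change(fps_1440p, fps_1080p):.2f}%")  # 40.00%
```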

The results here were a little rough since the gameplay was not good at all, but let’s move on to Thief to see if it can steal the show.




Thief gets very comfy showing the goods here now that things are much more playable.  While 2560×1440 is not playable at 26.3FPS, 1920×1080 comes in at a much more reasonable 40.3FPS average, peaking at 57.7FPS, a 42.04% improvement in performance.  Oddly enough, even though the resolution dropped, power consumption at 1920×1080 was actually 0.75% higher.  1280×1024 almost hits the magical 60FPS, coming in at 53.3FPS, 27.78% faster than the average at 1920×1080 and consuming 0.75% less wattage as well.  1280×1024 here is very playable, almost perfectly playable, but if you wanted to play at 1920×1080, all you would need to do is drop some eye candy and surely you would hit 60FPS as well.

This card is not looking too bad, but let’s see if Lara can pump up the score a bit.




Alright, yet another decent score.  At 2560×1440, she only hits 30.2FPS, but at 1920×1080 she is a lot smoother at 46.1FPS, a 41.48% improvement with only a 1.31% decrease in power consumption.  Coming in at 1280×1024, we have better than the magical 60FPS at 61.2FPS, an increase of 28.15% and a decrease in power usage of 1.78%.  Like Thief, all you would need to do here is drop some of the eye candy and you could easily hit 60FPS at 1920.  If your monitor can handle 1920×1080, I would suggest taking advantage of it.  Let’s jump things into warp speed, or not… with Ashes of the Singularity.


For Ashes of the Singularity, I test in DX12, though oddly enough you can’t actually enable DX12 from inside the game itself; let me show you how I get to it.

From within Steam, right-click on the title.


There you will see a drop-down where you will find “Launch DirectX 12 Version (Windows 10 Only)”, and as it implies, this will only work in Windows 10.  After that, once it is loaded, here are my settings.


Also, you know you are benching in DX12 when you get this message after you click “Benchmark”.


Notice under “API:” it reads “DirectX 12”. 


Ashes of the Singularity heavily utilizes the CPU; you can see here it is pegging the CPU quite hard.  This game seems to chug a bit on any GPU I have thrown at it; it is either poorly optimized or truly tortures the GPU.  While it is a DX12 title, I have not seen a GPU handle it properly.

OK, let’s jump to our final benchmark, Tom Clancy’s The Division.


Here are the settings I defaulted to, again only changing the resolutions afterwards.  There are a ton of settings.



Like Ashes, The Division shows you how much CPU utilization is occurring, but the reading is a bit clearer.  We can see that this game balances the CPU and GPU a little better, though that could be down to DX11 versus DX12.

Going from 2560 to 1920, we can see an FPS increase of 37.48% on the average FPS, with only 1 more Watt consumed.  The game seemed to run much better than its 30.1FPS average would suggest, and at 1280×1024 a 19.22% increase was noticed, coming in at 36.5FPS.  The mix of CPU and GPU work lets the game perform much better than you would think at these frame rates.

This next chapter shows a bit of gameplay from some other games.
