Benchmarks, Performance, Temperatures and Power Consumption
Before we get into the benchmarks, here are my system specs:
- Anidees AI Crystal Case: https://geni.us/6NAIJBN?ygdb
- Intel Core i7 5930K Processor: https://geni.us/6NAIJBN?4C8Itd
- EVGA X99 Classified Motherboard: https://geni.us/6NAIJBN?E9eamo
- Arctic Liquid Freezer 240MM CPU Liquid Cooling: https://geni.us/6NAIJBN?vEaJAf
- Kingston HyperX Predator 3000MHz 16GB: https://geni.us/6NAIJBN?w9kPe5
- Sapphire Nitro RX 480 Video card: https://geni.us/6NAIJBN?zeF3
- Samsung 850 EVO 500GB SSD: https://geni.us/6NAIJBN?1gf0fs
- Hitachi 1TB SATA 3G HD: https://geni.us/6NAIJBN?pU2QOo
- Patriot Ignite 480GB SSD: https://geni.us/6NAIJBN?eoPVsG
- Kingston HyperX 240GB SSD: https://geni.us/6NAIJBN?8leEDW
- Plextor 256GB PCIE SSD: https://geni.us/6NAIJBN?gVBR
- Cooler Master Silent Pro Gold 1200W Power Supply: https://geni.us/6NAIJBN?Umwm
- Microsoft Windows 10 Professional: https://geni.us/6NAIJBN?GYbBRY
Here are the specs of the card displayed in TechPowerUp’s GPU-Z, just to give you a little more visual perspective of it.
Sapphire also has its own utility, TRIXX 3.0, which is very handy. I will go into that software a little later in the review, because we will need it for a few things.
You can find Sapphire’s TRIXX 3.0 here: http://www.sapphiretech.com/catapage_tech.asp?cataid=291&lang=eng
The Sapphire Radeon NITRO+ RX 480 is based on AMD’s Ellesmere GPU and, of course, the Polaris architecture. It is built on a new 14nm FinFET process, improving performance and lowering the power requirements of AMD’s cards.
I had to cheat a little here and show you some of TRIXX 3.0: GPU-Z was not allowing me to show you the number of compute shaders, but here you go, 2304 of them.
As you can see from GPU-Z’s readout, I am using AMD’s video driver version 16.300.2511.1001, which is Crimson version 16.8.2. Since AMD released Crimson about a year ago, performance has been improving greatly.
Before I start discussing the benchmarking process, I wanted to let you know that I also check the wattage draw of the entire system with the card installed. I test for minimum, average and maximum usage using the “Kill A Watt” meter by P3 International.
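For what it’s worth, the minimum/average/maximum figures quoted later in this review are just the extremes and the mean of the readings observed on the meter during a run. A minimal sketch of that reduction in Python, using made-up sample readings (the numbers below are illustrative placeholders, not actual measurements):

```python
# Reduce a run of wall-power readings to the min / average / max figures
# quoted in this review. The sample readings below are hypothetical
# placeholders, not actual Kill A Watt logs.

def summarize_watts(readings):
    """Return (minimum, average, maximum) of a list of watt readings."""
    return min(readings), sum(readings) / len(readings), max(readings)

samples = [226, 310, 350, 388, 399]  # hypothetical readings during a benchmark
lo, avg, hi = summarize_watts(samples)
print(f"Min: {lo} W  Avg: {avg:.1f} W  Max: {hi} W")
```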
The programs I am using to benchmark are the following:
- FutureMark’s 3DMark Fire Strike
- Metro Last Light
- Thief
- Tomb Raider
- Ashes of the Singularity
- Tom Clancy’s The Division
A very decent score, and as it states, better than 82% of other results; pretty cool. The lowest wattage reached while this was benchmarking was 226 watts, the average was 350 watts, and the maximum pulled was 399 watts. Also, the GPU reached a max of 83°C; this was without TRIXX 3.0’s fan-adjusting utility, all stock.
3DMark states that the drivers are not approved because Crimson drivers are typically beta drivers, but I grabbed the latest and greatest.
Alright, let’s get to some Metro Last Light.
Here are my Metro Last Light presets. For each benchmark I will present the presets, changing only the resolution.
The only difference between “Preset 2”, “Preset 0” and “Preset 1” is the resolution: 2560×1440, 1920×1080 and 1280×1024, respectively. Below are the results from each.
The performance here looks a little rough, I know, but Metro Last Light is a card killer; it can take down the toughest of cards. I have been using this game to bench for a few years, and not many cards can make it budge, though there have been a few. One thing to note about Metro Last Light, and all of the other games I will be benchmarking here: they are set to max settings, Very High, High or Ultra, depending on how far the individual settings can go. That being said, if Metro Last Light is one of your favorites, you can turn down just a little bit of eye candy and get the performance you want, at as high a resolution as you want.
There is a 6.94% decrease in performance from 1920×1080 to 2560×1440, though oddly enough the average power consumption also goes down by 2.33%. Usually it would go higher, especially since the temperature rose from 79°C to 81°C, a 2.5% difference.
OK, let’s see if Thief can silently make its way into the arena and take the cake.
Thief gets much more playable at 2560×1440 at 58.3FPS, so close to the magical and sometimes mystical 60FPS. At 1920×1080 we see 82FPS; that’s a 33.79% improvement over 2560×1440, and even the wattage difference between the two is impressive, at 21.57% in favor of 1920×1080. Being only 1.7FPS shy of 60FPS, I would still play at 2560×1440 myself.
Lara might have a different opinion though; let’s check with her.
Lara joins the Thief guild in playability as the FPS jumps up to 95.1FPS at 2560×1440, and better yet, at 1920×1080 it jumps to 155.4FPS; that’s a 48.14% increase from dropping resolutions. Mind you, if your monitor can handle 2560×1440, I would choose that resolution, since it is 100% playable at 95.1FPS with only an 8°C heat increase and actually a 1-watt decrease. Very impressive, but this may need more research; I believe the technology in Ashes of the Singularity may be able to shed some light on it.
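As an aside, the percentage figures quoted in this review work out to symmetric percent differences, that is, the change measured against the midpoint of the two values rather than against the lower one. A quick sketch (the function name is my own) that reproduces the Tomb Raider figure above and the Metro Last Light temperature delta:

```python
# Symmetric percent difference: the change relative to the midpoint of
# the two measurements. This reproduces the percentages quoted in this
# review, e.g. 95.1FPS -> 155.4FPS comes out to about 48.14%.

def pct_diff(a, b):
    """Percent difference between two measurements, relative to their midpoint."""
    return abs(b - a) / ((a + b) / 2) * 100

print(f"{pct_diff(95.1, 155.4):.2f}%")  # Tomb Raider, 2560x1440 vs 1920x1080 FPS
print(f"{pct_diff(79, 81):.2f}%")       # Metro Last Light temperatures, degrees C
```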
Ashes of the Singularity is a DX12 game, if you did not know, but there is a little trick to get it to run in DX12. In Steam, when you are about to double-click the game to start it, right-click on the title instead.
There you will see a drop-down where you will find “Launch DirectX 12 Version (Windows 10 Only)”, which, as it implies, will only work in Windows 10. After that, once it is loaded, here are my settings.
Also, you know you are benching in DX12 when you click “Benchmark” and get this message.
Notice under “API:” it reads “DirectX 12”.
Now, this game HEAVILY utilizes the CPU. As I was benchmarking, I watched it run, and it was mostly smooth; there were a few tiny spots where it chugged slightly. You can actually see that in the graph above. The settings I have are incredibly high; “Use Compressed Textures” might help performance as well, but I wanted to make this card sweat a bit.
2560×1440 shows a GPU performance of 18.4FPS and 1920×1080 shows 22.6FPS, but the CPU offsets this a tiny bit.
The GPU performance difference was in favor of 1920×1080, showing a 20.49% improvement over 2560×1440. Here is where the CPU took charge a bit: it performed better at 2560×1440, with a 3.32% improvement over 1920×1080. This is definitely a very interesting benchmark; we can see that the CPU might be a little more important than the GPU here, though if you lowered some of the settings under “Video” you would surely get better performance.
OK, let’s jump to Tom Clancy’s The Division. I bought this game to do benchmarking and found it pretty cool. Now I just need time to actually play it.
Here are the settings I defaulted to, again afterwards changing only the resolutions. There are a ton of settings.
The Division does show you how much CPU it utilizes, and you can see here that utilization is relatively low. The benchmarks here are a bit surprising. I watch these benchmarks as they run to check for artifacting, and I saw none; everything felt smooth. From 2560×1440 to 1920×1080 there is a 32.94% improvement, simply from dropping the resolution. The game did play very well, though there were a few spots where it chugged a tiny bit.
It is difficult to discuss what I saw beyond what I already have, so I think it might be best to show you exactly what I found.
This next chapter will show you some game play.