My 3060 Ti not hitting maximum TDP (100%) in tests and gaming

In FurMark I get an average of 169 FPS in the 1080p GPU benchmark. Judging by the graphs, the GPU frequency during the FurMark test does not exceed 1410 MHz; in GPU-Z the frequency drops from 1740 to 1410 MHz. If I run the 720p test, the frequency rises to 1515 MHz. In MSI Kombustor at 1080p, the GPU frequency rises to 1780-1800 MHz with an average of 62 FPS. What confuses me is that in all test scenarios my TDP does not exceed 93-94%; occasionally it touches 96% for a second.

I have tried raising the power limit in Afterburner to 104%, but that didn't help. I also set the preferred performance mode to maximum in the NVIDIA settings, which didn't help either. I have a 650W MasterWatt power supply. When I look at other people's test results, their maximum TDP is 100 to 102%. I checked the voltage on the 12V line in AIDA64 (I know this is not entirely accurate): under GPU load the 12V line shows 11.800V, which is normal, and measured with a multimeter the difference can be 0.1-0.2V. So what could be the reason?

Just in case, my GPU is a Gigabyte 3060 Ti Gaming OC LHR (ver. 2.0).
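For anyone who wants to double-check the same readings outside GPU-Z or Afterburner, here is a rough Python sketch using the pynvml package (nvidia-ml-py). I'm not claiming this is exactly what those tools read, just a way to watch the same NVML sensors while a test is running:

```python
# Rough sketch: poll the NVML power and clock sensors, similar data to what GPU-Z/Afterburner show.
# Assumes the nvidia-ml-py package is installed: pip install nvidia-ml-py
import time

import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0  # reported in milliwatts

try:
    for _ in range(30):  # ~30 seconds of samples while a benchmark is running
        draw_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0
        core_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"power: {draw_w:6.1f} W of {limit_w:.0f} W "
              f"({100 * draw_w / limit_w:5.1f}%)  core: {core_mhz} MHz")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Running it alongside FurMark or a game gives a percentage to compare against what the other tools report.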
Most likely there is no problem at all. AFAIK these days GPUs are designed to adjust voltage and frequency dynamically depending on temperature, power budget and stability. It may just be that at a certain voltage and frequency your card hits some limit that, for whatever reason, doesn't quite reach the card's stated TDP.

Every GPU chip is different, even on the same brand and model of card, and there is always some variance in clock speeds and voltage tolerance between samples.

Kombustor and FurMark are pretty much the same thing, and video card companies have long programmed their cards to underclock themselves when exposed to those kinds of loads. They aren't useful for anything.

Are you having any problems in actual gaming?
 
Most likely there is no problem at all...

I think you are absolutely right.
I did a little research and came to a conclusion.
Perhaps the problem lies in the BIOS of my video card, and of this model range as a whole. It may be carrying the BIOS that is also installed on the Gigabyte 3060 Ti Pro OC cards, which have an additional 6-pin power connector and therefore higher default and maximum TDP limits. I have the regular Gaming OC with a single 8-pin power connector, and according to the data I found online its default TDP should be 200W with a maximum of 220W. On my card, however, judging by GPU-Z, the NVIDIA BIOS TDP column shows a default of 220W and a maximum of 230W. That, I think, is why the card never reaches 100% TDP.

According to tests I managed to find on TechPowerUp, this is normal for my card: they also raised the power limit to 104% in Afterburner, and their card was likewise loaded to a maximum of about 95% TDP. As for Kombustor, during its test the core clock stayed around 1740 MHz (not higher), so the clock drop is a FurMark problem. Maybe my research will help someone, although I could be wrong.
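If that is what is going on, the arithmetic would look roughly like this. This is just an illustration: the 205W draw is an assumed figure, and I'm assuming the reported "TDP %" is simply measured board power divided by the power target in the BIOS:

```python
# Illustration only: assumes the reported "TDP %" is measured board power divided by
# the power target in the BIOS. The 205 W draw is an assumed figure, not a measurement.
measured_draw_w = 205   # assumed board power under a heavy test
bios_default_w = 220    # default power target GPU-Z shows for my card's BIOS
card_design_w = 200     # default TDP the single 8-pin Gaming OC is supposed to have

print(f"against the 220 W BIOS target: {100 * measured_draw_w / bios_default_w:.1f}%")  # 93.2%
print(f"against the 200 W design TDP:  {100 * measured_draw_w / card_design_w:.1f}%")   # 102.5%
```

Which would explain why the card looks stuck a few percent below 100 in GPU-Z while actually running at its real board power.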

Are you having any problems in actual gaming?
In games I don't experience any problems; the frequency in game ranges from 1880 to 1920 MHz.
The only thing I don't understand is why Afterburner shows power consumption at 100-102% when running a heavy game at ultra settings, Cyberpunk 2077 for example. That is, in GPU-Z and other testing programs the maximum TDP is 93-95%, while in the game it is 100% or even more. I'm a little confused. Perhaps Afterburner reads the sensor more accurately. Or perhaps this is simply normal for all RTX cards, and the GPU holds itself back so that it doesn't run into the maximum power limit. In any case, I will be glad to hear any of your suggestions.
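My guess is that the two programs just divide by different limits. I'm not sure which limit each tool actually uses, but as a rough check, all of the limits the driver exposes can be read through NVML (same pynvml assumption as above):

```python
# Print the different power limits NVML exposes, to see which one a given tool might divide by.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

default_w = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu) / 1000.0
current_w = pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000.0   # should follow the Afterburner slider
enforced_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0
min_w, max_w = (x / 1000.0 for x in pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu))

print(f"default limit : {default_w:.0f} W")
print(f"current limit : {current_w:.0f} W")
print(f"enforced limit: {enforced_w:.0f} W")
print(f"allowed range : {min_w:.0f}-{max_w:.0f} W")

pynvml.nvmlShutdown()
```

Comparing the measured draw against the default limit versus the current limit should show which one each program is reporting.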

I have the regular Gaming OC with a single 8-pin power connector, and according to the data I found online its default TDP should be 200W with a maximum of 220W. On my card, however, judging by GPU-Z, the NVIDIA BIOS TDP column shows a default of 220W and a maximum of 230W. That, I think, is why the card never reaches 100% TDP.

This sounds like most of the reason to me. Even if it wasn't, it's not like performance scales 1:1 with power used; it can even be the inverse if the voltage goes too high. So I don't believe you would be missing out on any noticeable performance anyway.
 
This sounds like most of the reason to me. Even if it wasn't, it's not like performance scales 1:1 with power used; it can even be the inverse if the voltage goes too high. So I don't believe you would be missing out on any noticeable performance anyway.
I think so too. In 3DMark Time Spy I get a score of 10549 with my 10400F, which is even better than the average score of 10439 for a 3060 Ti with a 10400F. So I don't think I have anything to worry about. On one forum a member said that my power supply is garbage and that it was the cause, but I don't think so; my theory still seems more correct to me, since I don't notice any voltage drops and everything works stably. The only thing is that in some programs the TDP load is not 100%, that's all.
 
From the reviews I can find, your PSU isn't top tier but isn't complete garbage either. If it's getting towards the end of its warranty, I would definitely recommend replacing it with something newer and better when you have the spare cash. The PSU is very important.
Yes, that's what I plan to do. My power supply is a year old, so I don't think it should fail yet, but for a system like mine it's better to get a Gold-rated power supply. I really don't know which of these power supplies is the better buy; they all seem to be about the same in quality, but I would be glad to hear your opinion: CoolerMaster V750 Plus Gold-V2, MSI MPG A850GF, Gigabyte P850GM or DeepCool DQ850-M-V2L. There is also the ANTEC Earthwatts GOLD PRO 750W, but this is the first time I've heard of that manufacturer. I want at least 750W for a future upgrade, for example the processor.
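My rough sizing estimate, with numbers I'm only assuming rather than measuring, looks like this:

```python
# Very rough headroom estimate with assumed figures; real transient spikes can be much higher.
gpu_w = 220    # power limit of the 3060 Ti from its BIOS
cpu_w = 120    # allowance for a future CPU upgrade under load (assumed)
rest_w = 60    # motherboard, RAM, drives, fans (assumed)

load_w = gpu_w + cpu_w + rest_w
for psu_w in (650, 750, 850):
    print(f"{psu_w} W unit -> about {100 * load_w / psu_w:.0f}% loaded at {load_w} W total")
```

So 750W looks like comfortable headroom for the GPU plus a hungrier CPU later on.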
 
This sounds like most of the reason to me...

My theory was confirmed, my friend. I installed OCCT, a program for extreme system testing, and ran its graphics card "power test", which loads every part of the system to expose problems. In that test the graphics card was loaded to the maximum, and for the first time I saw the TDP at 100%, namely at 220W. This fully confirms that the cause is inflated TDP limits in my graphics card's BIOS: with the correct BIOS for my model, the maximum TDP is 220W, which is exactly what I saw when I ran the test. I have already written to Gigabyte support and am waiting for a clarification, if there is one; it looks like no one has asked them this question before. It does not affect performance in any way, though. Most likely, in this test the graphics card does not restrict its power at all, unlike in FurMark and Kombustor, where the card simply does not allow an extreme load on itself, which is why those tests do not accurately show its maximum power consumption limit.
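If anyone wants to check the same thing without OCCT, a rough approach is to record the peak board power during any heavy load and compare it with the enforced limit (same pynvml assumption as before):

```python
# Sketch: record the peak board power over a run and compare it with the enforced limit.
import time

import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0

peak_w = 0.0
try:
    for _ in range(120):  # sample for ~2 minutes while the stress test is running
        peak_w = max(peak_w, pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0)
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()

print(f"peak draw: {peak_w:.1f} W of {limit_w:.0f} W ({100 * peak_w / limit_w:.1f}%)")
```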

I really don't know which of these power supplies is the better buy; they all seem to be about the same in quality, but I would be glad to hear your opinion: CoolerMaster V750 Plus Gold-V2, MSI MPG A850GF, Gigabyte P850GM or DeepCool DQ850-M-V2L.

Gold or Platinum ratings only measure efficiency, not necessarily quality. Of the ones you mentioned, the Cooler Master has the strongest reviews, so I'd pick that one.
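To illustrate what the rating actually tells you (just an example using the nominal 80 Plus Gold figure of roughly 90% efficiency around half load, not a claim about any specific unit):

```python
# Example: the efficiency rating only says how much the unit pulls from the wall for a given load.
dc_load_w = 400          # assumed draw of the components themselves
gold_efficiency = 0.90   # roughly 90% around half load is the nominal 80 Plus Gold figure

wall_draw_w = dc_load_w / gold_efficiency
print(f"{dc_load_w} W of components -> about {wall_draw_w:.0f} W from the wall, "
      f"{wall_draw_w - dc_load_w:.0f} W lost as heat in the PSU")
```

Build quality, protections and ripple are separate from that number, which is why reviews matter more than the badge.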
 
Welcome to the forum :)

The old 'Don't have the button' excuse, huh. No worries, I filled in for you—you owe me a downloadable beer!
Ahaha, of course, I owe you a beer :)
But I really don't understand why I don't have a "like" button. The Tom's Hardware forum has this button, but not here.

Cool—as in, a pint of Guinness ;)
I really don't understand
This should explain it:

 
Cool—as in, a pint of Guinness ;)
This should explain it:

Thank you! I thought so :)
 