Hi, first of all, I really appreciate your work ;)
Here's my problem with the latest Nvidia drivers (GPU temperature readings).
http://www.pctunerup.com/up/results/...5_Immagine.jpg
Hope you can fix it easily :)
Mine is funnier. :D
http://i27.tinypic.com/29gic7b.jpg
But it's the Nvidia driver's fault, not uncle's. The latest leaked beta/dev drivers have some internal changes, so the temps are funny.
Thanks for the heads up. I'll have to check the latest beta driver and see what's changed.
burebista: I think your GPU has gone nuclear. :D
I have no idea why on Jan 01 I had only 128°C. :ROTF:
I just installed the 190.89 drivers for Windows 7 x64 and so far my temps are good on my 9800 GTX+.
No weird readings. I downloaded this beta driver from Guru3D.
http://img134.imageshack.us/img134/3060/nvidia19089.png
Hey man,
Is there anything new in the latest beta (compared to 3.00) for us old Q6600 owners? Just wondering... :)
I'm using an E8400 with an Asus Striker II Extreme.
Do I need to change any settings in RealTemp to get the most accurate readings?
Using RealTemp 3.00.
Just a question, Uncle: can the latest RealTemp show us the new i7 VID?
KURTZ: To the best of my knowledge, there is still no documented method to read VID from a Core i7 CPU. Not that I necessarily believe them, but according to Intel, it's no longer possible to read VID info. Do you know of any software that can do this?
A very little thing, but could you add an option to the settings to show one or two decimal places for temps? I don't know if anyone else wants this or if it'd be hard, but I'd much appreciate it! Thanks!
theGryphon: These sensors seem barely accurate to +/-1°C, let alone numbers after the decimal point. The sensors can only be read in whole numbers, so there is no data after the decimal point to display. I think RM Clock used to show numbers after the decimal point like 0.2, 0.4, 0.6, 0.8, but I believe it was just averaging the last 5 samples to create that data.
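For illustration, here's a minimal sketch of that averaging trick; the sample readings are made up, and the fractional digit comes entirely from the math, not from the sensor:

/* Fake sub-degree resolution by averaging the last 5 whole-number
 * sensor readings (hypothetical values; real sensors report integers). */
#include <stdio.h>

int main(void)
{
    int samples[5] = {44, 45, 45, 44, 45};  /* last 5 whole-degree reads */
    int sum = 0;
    for (int i = 0; i < 5; i++)
        sum += samples[i];
    /* Prints 44.6 C; the decimal comes purely from averaging. */
    printf("displayed temp: %.1f C\n", sum / 5.0);
    return 0;
}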
Since the program polls each core fairly regularly, would this prevent cores that would otherwise be idle from reaching the deeper C-states, making it difficult to record any reduction in temperature when the cores reach those states?
I was just reading about that issue in the Intel documentation the other day. They didn't point the finger at any particular program, but they were sort of hinting at that issue with programs that need to read data from model-specific registers on a regular basis. Maybe I took that info too personally.
My opinion is that even if you stopped running RealTemp, there would still be other background activities constantly waking cores up and out of the C3/C6 state. RealTemp isn't a huge load on the CPU, but it does need to access each core once every second to query its temperature data.
I'm not sure how you could test for this. You could try turning RealTemp off, waiting a minute, and then using something like my MSR Tool to read the sensors directly from MSR 0x19C without having to start RealTemp. Maybe a Kill-a-Watt meter could show a slight reduction in power at the wall, but I don't think it will be possible to see any measurable difference in temperature or power with or without RealTemp running.
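If you want to poke at that register yourself, here's a rough sketch of what the decode looks like (this assumes the Linux msr driver for simplicity, not my MSR Tool, and the TjMax value here is just an assumed number for the example):

/* Read the digital thermal sensor straight from IA32_THERM_STATUS
 * (MSR 0x19C) without a monitoring app running. Assumes the Linux
 * "msr" driver is loaded; on Windows you'd need a kernel driver.
 * TJMAX is an assumed value for illustration; it varies by CPU. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define IA32_THERM_STATUS 0x19C
#define TJMAX 100

int main(void)
{
    uint64_t msr;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open msr"); return 1; }
    if (pread(fd, &msr, sizeof msr, IA32_THERM_STATUS) != sizeof msr) {
        perror("rdmsr"); close(fd); return 1;
    }
    close(fd);
    /* Bits 22:16 hold the distance below TjMax in whole degrees C. */
    int readout = (int)((msr >> 16) & 0x7F);
    printf("core 0: %d C below TjMax -> ~%d C\n", readout, TJMAX - readout);
    return 0;
}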
I thought about adding an option to RealTemp so it would be possible for it to not read the temperature sensors when idle but then I'd have to query the CPU to see when it's idle. :(
It's a no-win situation.
I have a fairly lean system, so at idle the RealTemp C0%-based load meter gets down to about 0.6% with one or two instances of RealTemp running. Even if RealTemp wasn't running, I think there would still be some background activity running on each core every second.
My thoughts are that if you are running a well overclocked i7 900 series CPU, RealTemp is probably the least of your concerns when it comes to it heating up your core temperature or increasing your power bill. :)
The reason I asked is that I came across a thread on THG where someone wanted to cut temperatures and power consumption during the 50% of the day the PC sat idle (without powering off, IIRC), and obviously EIST and CxE came up in the discussion. While the effects of EIST are fairly obvious, as they are with C1E, I wasn't sure if allowing the deeper C-states would make a measurable difference, because it's possible that you don't reach them (or don't remain in them long enough) while monitoring temperatures. I wonder, then, if such advanced power-saving features are a moot point, considering you'd have to have zero load to stay in those states.
It's not always clear whether Bloomfield can reach lower power consumption at idle than Core 2, because different reviews say different things, and the platforms as a whole consume different amounts of power as well. Lynnfield seems to come in well under both Core 2 and Bloomfield, but that's probably thanks to the less power-hungry chipset.
I believe my board allows me to "demote" the C3 and C7 states to higher-level ones, so perhaps I can try to find a measurable temperature difference when C3 and C7 are used.
For what it's worth, I'm seeing a measurable temp difference between C3 and C6 on Lynnfield. With C3, the lowest idle temps across the 4 cores were 38-39°C. C6 is giving me around 29°C, though it fluctuates more, with one of the cores occasionally moving up to 38°C. This is with 3.30 RC11.
Core i7 computer at stock, idle, C1E and EIST disabled:
224-227W from the wall
CPU power (according to Everest): 17W
Idle temps: 35, 36, 35, 31°C (ambient 25.9°C)

Core i7 computer at stock, idle, C1E and EIST enabled:
209-212W from the wall (same with RealTemp open or closed, or Everest/CoreTemp open or closed)
CPU power: 2.3-4.3W idle (same with RealTemp, etc. open or closed)
Idle temps: 33, 33, 33, 30°C

Same, plus C3/C6/C7 enabled:
203-206W from the wall (same with RealTemp, etc. open or closed)
CPU power: 1.6W idle (same with RealTemp, etc. open or closed)
Idle temps: 32, 33, 33, 29°C (maybe 0.4°C lower than with just EIST/C1E enabled, averaging over 3 minutes, versus run-to-run variability)

Overclocked to 4.4GHz:
275-279W from the wall (same with RealTemp open or closed)
CPU power: 41W (same with RealTemp open or closed)

OC 4.4GHz with EIST and C1E enabled:
265-268W (same with RealTemp open or closed)
CPU power: 35W (same with RealTemp open or closed)

OC 4.4GHz with EIST, C1E, and C3/C6/C7 enabled:
263-266W (same with RealTemp open or closed)
CPU power: 32W (same with RealTemp open or closed)
What I learned from doing this:
1) RealTemp, CoreTemp, and Everest do not alter the power draw whether running or not, with EIST, C1E, and C3/C6/C7 enabled, and hence do not alter the temps. They do not interfere with the small power drop from enabling C3/C6/C7.
2) EIST, C1E, and C3/C6/C7 enabled would save me $25 per year at stock settings or $12 per year at OCed settings, if I had my computer on 24/7 for a year (less when OCed since vcore is not reduced on GB boards when overclocked).
3) If you want to be collectively green, don't overclock, use the power-saving features, and turn your computer off or put it in S3 when not using it. But the only one of those that is going to save an individual any significant money is using S3 or turning the computer off when not in use.
4) S3 mode saves me $281 per year (whole computer, including monitor, etc.). If you use S3 or the off button and the computer is in use 25-50% of the time, then EIST/C1E/C3/C6/C7 saves ~$10 per year, i.e. not worth the annoyance of it interfering with benching runs, and no one is going to notice that $1 per month. Want to save money? Use S3 or the off switch.
5) Enabling C3/C6/C7 will not drop idle temps by any significant amount (typically less than 1°C, though maybe 1°C on a stock cooler). ~3-4W vs. ~1.8W: both will give a minimal 6-7°C over-ambient idle temp.
In the US, the average cost of power is $0.115 per kilowatt-hour, which is almost exactly $1.00 per watt-year. So leave a 60W bulb on for a year, pay $60. Save 10 watts for a year, save $10. The simple rule is: 1 watt for 1 year = 1 dollar.
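The arithmetic behind that rule, as a throwaway snippet (using the $0.115/kWh figure above):

/* Check the "1 watt-year ~= 1 dollar" rule at $0.115 per kWh. */
#include <stdio.h>

int main(void)
{
    double rate_per_kwh = 0.115;          /* US average cited above */
    double hours_per_year = 24.0 * 365.0; /* 8760 hours */
    /* 1 W for a year = 8.76 kWh; multiply by the rate. */
    double dollars_per_watt_year = (1.0 / 1000.0) * hours_per_year * rate_per_kwh;
    printf("$%.2f per watt-year\n", dollars_per_watt_year); /* ~$1.01 */
    return 0;
}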
We can always count on rge to go the extra mile :D My power costs anywhere from 12-35c/kWh depending on the time of day though, so 24/7 operation is quite costly :(
OK, so how do I get it to monitor my GPU temps? Also, I can't get 3.30 RC11 to work within RivaTuner. Maybe I'm just missing something?
RealTemp only monitors Nvidia GPU temps because that's what I own and it was easy to program. By the looks of your avatar, I'll take a wild guess and say that you probably own an ATI card. If so, RealTemp can't read your GPU temps. The latest 5850 and 5870 look very impressive so maybe I'll have to have a second look at supporting AMD/ATI.
The RealTemp / RivaTuner plugin is called RTCore.dll. To set it up:
1) Go into RealTemp's Settings window, click on the RivaTuner button and tell RealTemp where RivaTuner.exe is located. Once it knows that, it can install RTCore.dll in the correct RivaTuner directory.
2) Start RivaTuner and click on the Hardware Monitoring icon within the upper Customize option.
3) When that opens up some graphs, click on Setup, then on Plugins, and select the RTCore.dll plugin.
4) Click OK and go back to the Hardware Monitoring setup window, where you can decide which graphs you'd like to see from RTCore.dll. There are quite a few of them, so you probably won't want to see them all.
This plugin runs separately from RealTemp so once RivaTuner is running with this plugin installed, you don't need to have RealTemp running as well. Let me know if you have any problems.
radaja: I think I told rge a while ago that whoever figures out how to get VID info out of a Core i7 CPU will become like a god on XS. :rofl:
No luck so far.
I think they'd be a god on just about every forum for beating CPU-Z to it :D If it existed, how would it show up in the MSR tool (for a VID of 1.1025V, for example)? My hexadecimal knowledge has faded :(
I've just been working on updating i7 Turbo to better support IDA mode in the Core 2 Mobile chips. I've got a few ideas about better Lynnfield support for RealTemp 3.30 so hopefully I can start working on that tomorrow and adding what I've learned recently.
I'd like to start reporting the highest multiplier when C3/C6 is enabled instead of reporting the average multiplier. On the X58, most boards disabled this feature as soon as you started overclocking so reporting the peak multi or the average multi didn't make any difference. With Lynnfield, there can be a big difference between these two multipliers because they have a lot more bins of turbo boost available when C3/C6 is enabled. It should be interesting. Hopefully by the weekend I should have something ready for testing with improved Lynnfield support. 3.30 RC11 works OK but there is always room for improvement. :)
In this example, I think 23.8 most accurately represents your multiplier. CPU-Z reports 23X in a situation like this which is OK but including the number after the decimal point gives a user a little more useful info. It will help encourage users to get rid of the background junk that is killing performance by preventing their CPU from using its maximum multiplier.
http://img214.imageshack.us/img214/8893/i5750.png
The overall average, 22.6X, which is what RealTemp 3.30 RC11 currently reports, doesn't accurately tell you how hard your CPU is actually working. Intel recommends reporting the highest multiplier in these situations, and now that I've seen some Lynnfield examples, I agree. On most Core i7 X58 boards, users won't see any difference from this upcoming change.
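For anyone curious about the mechanics, here's a rough sketch of the APERF/MPERF method these multiplier readings are based on (this is not RealTemp's actual source; the Linux msr driver and a 20X base ratio are assumed for the example):

/* Estimate a core's average multiplier over a 1-second interval from
 * the IA32_MPERF/IA32_APERF counter pair. MPERF ticks at the base
 * (non-turbo) clock while the core is in C0; APERF ticks at the
 * actual clock. Assumes the Linux msr driver is loaded. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define IA32_MPERF 0xE7
#define IA32_APERF 0xE8

static uint64_t rdmsr(int fd, uint32_t reg)
{
    uint64_t v = 0;
    pread(fd, &v, sizeof v, reg);
    return v;
}

int main(void)
{
    const double base_ratio = 20.0;  /* assumed non-turbo multiplier */
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open msr"); return 1; }

    uint64_t m0 = rdmsr(fd, IA32_MPERF), a0 = rdmsr(fd, IA32_APERF);
    sleep(1);
    uint64_t m1 = rdmsr(fd, IA32_MPERF), a1 = rdmsr(fd, IA32_APERF);
    close(fd);

    /* Average multiplier while the core was active. With C3/C6
     * enabled, a core can bounce between deep sleep and a high turbo
     * bin, so this average can sit well below the peak multiplier
     * the core actually ran at -- hence reporting the peak instead. */
    double mult = base_ratio * (double)(a1 - a0) / (double)(m1 - m0);
    printf("core 0 average multiplier: %.1fX\n", mult);
    return 0;
}

That gap between the average and the peak is exactly what shows up on Lynnfield once the extra C3/C6 turbo bins come into play.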