Heh, don't worry Kevin, we always host the latest RealTemp right after you announce it :)
My biggest fear. I screwed something up. I think RealTemp is OK but I'll go do some more Windows 7 x64 testing. I haven't seen a BSOD in many months and never one caused by RealTemp.
Edit: RealTemp seems to work fine on my computer with Windows 7 x64.
Hmmm, maybe you need to be a little more generous with the core voltage. :rofl:
Quote:
Intel Xeon W3520 @ 4.1GHz (1.20V)
Are you Prime / LinX stable at 4.1GHz with only 1.20 volts?
Most of the Core i7 D0 CPUs seem to need ~1.28 volts or higher to be stable at this speed.
Working fine here on Win 7 x64.
Nice, unclewebb ;)
http://rol-co.nl/hwi/gulf/realtemp-wr.jpg
hello.
Can anyone tell me what the offset is for and how do I set it?
I mean, where do I get the TJ Max for my CPU?
QX9650.
My core temps vary by 9 degrees from core 0 to core 3. I don't get why there's so much variation.
The sensors used on the 45nm Core 2 Duo and Quad are horrible. They were never designed to be used to report accurate core temperatures. They have several problems including the problem that they get stuck at lower temperatures. I also believe that TJMax is not consistent across all 4 cores on these CPUs. Whether Intel does this deliberately or whether it's just an act of god, I don't know.
The CPU Cool Down Test in RealTemp is the best way for me to learn about the sensors on your CPU and what combination of issues your sensors have. For the QX9650, Intel published a number which they called TJ Target and they say that number is 95C. Actual TJ Max might be 95C, 100C or even 105C or 110C on some of your cores. There can be a lot of variation from core to core and no way to determine the truth. Post or send me a Cool Down Test and I'll give you my opinion of what I see.
SimpleTECH: All I can do is wait for some more user feedback. I can't ever remember having a RealTemp related BSOD. If you have another BSOD, tell me what the error message says.
Thanks Rol-Co for raising the bar in XS Bench. Not a very exciting benchmark but it quickly told me that your CPU is a beast. At 6.338 seconds, some would say very quickly. :up:
Which is better for the i7, RealTemp or Core Temp?
Hi,
I am sure I am overlooking something obvious, but what?
These two versions do not agree on processor speed. The older one shows the correct value. This difference is pretty consistent over time, but not constant.
I'm a little biased and I like RealTemp better but Core Temp would be my number 2 choice. It is one of the very few programs that is reading temperature data from all 4 cores when hyper threading is enabled. Many monitoring programs, even popular commercial ones, hint, hint, hint, do not.
Chris Z.: I was wondering how long it would take someone to ask, "What's going on here with these multipliers in RealTemp 3.40?"
Quote:
The older one shows the correct value.
Who says that is the correct value? The older version of RealTemp might show what every other monitoring program shows at idle, but how do we know that is actually true?
What I discovered at idle is that how you have Windows set up will affect what your multiplier is really doing. Almost all software has decided to overlook the truth and instead provide you with a nice steady multiplier even though internally, the multiplier your CPU is using might be in a state of chaos.
I don't have a QX processor to test with so hopefully you can do some testing to try and prove this.
There is another program in the RealTemp download called i7 Turbo. Try running that and RealTemp 3.40 and CPU-Z. When testing, turn off Everest and all other monitoring software because I'm not sure how other programs use the system timers that RealTemp 3.40 and i7 Turbo depend on.
In Windows Vista and Windows 7 there is a setting in the Control Panel -> Power Options called the Minimum processor state that has to be set to a low number to get the multiplier to go down to 6.0 at idle. For testing you can set this to 5%. If this setting does not exist then you might need to go into your bios and enable SpeedStep / EIST.
If you are using Windows XP, then one thing that controls this is also in the Power Options. Click on the Power Schemes tab and try changing the scheme from Home/Office Desk to Portable/Laptop.
The C1E setting also can play a part in what multiplier you end up with at idle. Enable C1E to lower your idle multiplier and disable C1E if you want a high multiplier at idle. This setting might not do anything depending on what you have set in your Power Options.
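For Vista / Windows 7 users who prefer the command line, the same Minimum processor state setting can be changed with powercfg's documented aliases. This is a sketch from an elevated command prompt; I haven't verified it on every Windows version, so check `powercfg -aliases` on your system first:

```shell
:: Set "Minimum processor state" to 5% on the currently active
:: power plan, then re-apply the plan so the change takes effect.
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 5
powercfg -setactive scheme_current
```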
Once Windows has a chance to settle down, are the multipliers reported by i7 Turbo closer to 6.0? Post a screen shot of those 3 programs so I can have a look.
Now run a program like Prime 95 on all 4 cores and post another screen shot of those 3 monitoring programs.
What I found during development is that the internal timers that accurately record what multiplier your CPU is using sometimes tell a completely different story than what most software will tell you at idle. I fully trust what the internal timers are telling me. It's possible that I've screwed something up on your QX CPU, but for most Core 2 Desktop and Mobile CPUs and Core i7 / i5 / Nehalem CPUs, what RealTemp and i7 Turbo are saying is in fact what's really going on inside your CPU.
If your hardware / bios settings do not agree with your Windows settings, you can end up with a multiplier that constantly hunts up and down at idle between 6.0 and the maximum multiplier. Once you play with your Windows settings, you should have some control over your actual multiplier at idle.
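For the curious: the "internal timers" being described here are, as far as I know, the APERF/MPERF counter pair that Intel added for exactly this purpose. MPERF ticks at the fixed base clock while APERF ticks at the actual clock, so the ratio of their deltas over a polling interval gives the true average multiplier. A small sketch with made-up counter deltas:

```python
# Effective multiplier from the APERF/MPERF counter pair: MPERF runs
# at the base (non-turbo) clock, APERF at whatever the core actually
# ran at, so actual multiplier = rated multiplier * dAPERF / dMPERF.

def effective_multiplier(base_mult, aperf_delta, mperf_delta):
    return base_mult * aperf_delta / mperf_delta

# Made-up deltas for one polling interval: a 20x-rated chip idling
# at 6.0 accumulates APERF at 30% the rate of MPERF.
print(effective_multiplier(20, 3_000_000, 10_000_000))  # 6.0
```

This is also why a multiplier that "hunts" at idle shows up as a fractional average like 9.7 instead of a clean 6.0 or 20.0.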
http://img200.imageshack.us/img200/9335/q6600c1e.png
http://img402.imageshack.us/img402/2...ionsminmax.png
I have seen one early QX6700 Engineering Sample CPU that had the internal timers disabled but I don't think that is the problem in this situation.
I believe HWmonitor is now reading correctly. I've just checked it (so I could see my chipset temps as well) and running Prime95 while moving the load around the cores with affinity changes shows normal responsiveness as far as I can tell. Speedfan, last I checked, still thinks there are 8 cores :)
randomizer: How about a screen shot of RealTemp, Core Temp and HWMonitor while running Prime 95 Small FFTs on all 8 threads. Once your core temperatures have stabilized, are all three programs reporting the same now? I've seen quite a few screen shots where HWMonitor is reporting a few degrees different than RealTemp and I'm not sure why. It might have been the older version.
HW Monitor used to interfere with the system timers that RealTemp and i7 Turbo use so RealTemp 3.00 would show some sky high multipliers. Can you see if this bug has been fixed?
Someone was commenting about HWMonitor reading different temps on OC.net a while ago... so I was checking mine. HWMonitor does report 4 core temps, but the temps are still out to lunch. It reads 2-3C too high at idle and 3-4C too low at load compared to RealTemp and Core Temp. Can't make any sense out of it... I thought about some long rolling average, but at idle that just does not make sense.
RT, coretemp, HWM at idle
http://img22.imageshack.us/img22/5610/hwmonidle.jpg
all 3 at load
http://img22.imageshack.us/img22/8483/hwmonload.jpg
Thanks for the numbers rge. I also saw that post on OC.net about HWMonitor reporting different core temperatures and it just doesn't make any sense. TJMax is written into each Core i7/i5 CPU, which RealTemp and Core Temp read correctly, and both programs read the temperature sensors correctly for each core, so I'm not sure what HWMonitor is up to. :shrug:
The formula can't get much simpler than this:
Reported Temperature = TJMax - Digital Thermal Sensor reading
You can't argue about those two numbers on the right so all software should be reporting the exact same thing when the core temperatures have been allowed to stabilize. During transitions between idle and full load, it's possible for two different programs to be reading the sensors at a slightly different moment and for two programs to report slightly different temperatures but at full load or at idle, all 4 cores should be almost identical. Core Temp and RealTemp show that.
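For reference, both numbers on the right come out of standard Intel MSRs: the DTS delta below TJMax sits in IA32_THERM_STATUS (0x19C) and TJMax itself in MSR_TEMPERATURE_TARGET (0x1A2) on these CPUs. Here's a sketch of the formula; `read_msr` is a stand-in for whatever driver call your platform provides, and the fake values are just for illustration:

```python
# Sketch of "Reported Temperature = TJMax - DTS reading" using the
# standard Intel MSR layouts. read_msr(core, msr) is a stand-in for
# a real kernel driver call (e.g. reading /dev/cpu/N/msr on Linux).

IA32_THERM_STATUS = 0x19C       # DTS delta below TJMax in bits 22:16
MSR_TEMPERATURE_TARGET = 0x1A2  # TJMax in degrees C in bits 23:16

def core_temp(read_msr, core):
    tjmax = (read_msr(core, MSR_TEMPERATURE_TARGET) >> 16) & 0xFF
    dts = (read_msr(core, IA32_THERM_STATUS) >> 16) & 0x7F
    return tjmax - dts

# Fake MSR contents: TJMax = 100C, sensor reading 35 below TJMax.
fake = {MSR_TEMPERATURE_TARGET: 100 << 16, IA32_THERM_STATUS: 35 << 16}
print(core_temp(lambda core, msr: fake[msr], 0))  # 65
```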
The only difference between RealTemp and Core Temp would be if the APIC ID reported by RealTemp isn't in numerical order. When that happens, RealTemp sorts the temperature data into the correct physical order while Core Temp does not. This used to be an occasional issue with Core 2 Quads on some motherboards but most of the newer socket 1366 or socket 1156 CPUs and motherboards are usually OK and the bios correctly assigns threads in the correct order.
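The reordering RealTemp does when the BIOS enumerates threads out of order is just a sort on APIC ID; a minimal sketch with invented IDs and temperatures:

```python
# Sorting per-core temperature readings into physical order by APIC
# ID, the way RealTemp does when the BIOS assigns threads out of
# order. The APIC IDs and temperatures here are invented.

def sort_by_apic(readings):
    # readings: list of (apic_id, temp_c) in BIOS enumeration order
    return [temp for apic_id, temp in sorted(readings)]

scrambled = [(4, 61), (0, 65), (6, 60), (2, 63)]
print(sort_by_apic(scrambled))  # [65, 63, 61, 60]
```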
http://www.fileden.com/files/2008/3/...alTempBeta.zip
This is a bug fix version.
In SLI mode the GPUs should be labeled as GPU 1, GPU 2, GPU 3 etc.
The second bug is more serious and could cause RealTemp to crash when you click the Reset button on a system without an Nvidia card. Both bugs are now fixed.
Edit: I see a new RealTemp feature on the horizon. :)
http://img22.imageshack.us/img22/5788/newfeature.png
As you requested, below is a comparison of the 3 programs (load being 8 threads of P95 Small FFTs). The results, like rge's, are odd, but not as dramatic. Load is consistently 1C below RealTemp on all cores, and would be for Core Temp save for the fraction of a second difference in update times between RealTemp and Core Temp. Idle is almost identical to the other two programs. How is the power consumption for the CPU calculated, just a simple formula based on TDP, voltage and clock speed? Or is there some way of reading the power consumption? I noticed that rge's is a flat 130W consistently.
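On the power question: one common back-of-envelope model, and only a guess at the kind of formula a monitor might use rather than what HWMonitor actually does, scales the rated TDP linearly with clock and quadratically with voltage (dynamic power ~ C·V²·f). The i7-920-ish stock numbers below are just illustrative:

```python
# Rough dynamic-power estimate: scale rated TDP by clock ratio and
# voltage ratio squared (P ~ C * V^2 * f). This is a guess at the
# kind of formula a monitor might use, not HWMonitor's real method.

def estimated_power(tdp_w, stock_mhz, stock_v, cur_mhz, cur_v):
    return tdp_w * (cur_mhz / stock_mhz) * (cur_v / stock_v) ** 2

# Illustrative numbers: 130 W TDP chip at stock 2666 MHz / 1.20 V,
# running stock clocks but undervolted by 0.16 V.
print(round(estimated_power(130, 2666, 1.20, 2666, 1.04), 1))  # 97.6
```

Under a model like this an undervolt shows up immediately, so a reading pinned at a flat 130W suggests the program is just echoing the TDP.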
Also don't look at my chipset temp (TMPIN1), it could shock you. It was 10C higher yesterday since it was 29C ambient :rofl:
Idle
http://i176.photobucket.com/albums/w...rison_idle.png
Load
http://i176.photobucket.com/albums/w...rison_load.png
HWM definitely read randomizer's better. Since programs sample at different times, the best way to compare is max temps under load; they should all be nearly the same.
Core Temp and RealTemp max load are both 68, 68, 68, 66. HWM is 67, 66, 67, 65. Mine is 3-4C off, the guy on OC was 2C off, randomizer is 1-2C off. Interestingly, when I put my PC back to stock, HWM was only 2C off at load on 3 cores and 1C off on the 4th. Then OCed to 4.55, all were off at least 4C with HWM at load.
Randomizer, can you try OCing your CPU and see if the error increases while overclocking like it did on mine?
I remember unclewebb saying something about implementing a G15 plugin in RealTemp, one independent of RivaTuner... has this been implemented already?
I was wondering if increased power causes more temp fluctuations, and HWMonitor is doing some type of rolling average of more than just the hottest core. TAT (Intel Thermal Analysis Tool) interfaced with PECI, which averaged more than just the hottest core, so you got some weird average which unfortunately did not give you the correct hottest temp, which in protecting hardware is really all that matters. One of the HWMonitor updates mentions HECI, so I don't know if that is what they are doing, i.e. something like TAT.
Just trying to see if it is increased power causing increased loading fluctuations or if it's just different CPUs / different mobos, etc. It is not temperature itself; I already tried that by turning the fans off. The increased error is related to increased power dissipated.
Basically I'm just curious what HWMonitor is doing... it's clearly not averaging just the hottest core; I tried testing for that.
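The rolling-average hypothesis is easy to demonstrate in isolation: an average over recent samples lags the true reading, so it reads high just after load drops and low just after load rises, which matches the high-at-idle / low-at-load pattern. The sample data here is invented:

```python
from collections import deque

# Demonstration of a rolling-average readout lagging the real
# sensor: after a sudden idle -> load step the averaged value
# under-reports for several samples. Temperatures are invented.

def rolling_avg(samples, window=4):
    buf, out = deque(maxlen=window), []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

temps = [40, 40, 40, 68, 68, 68]  # idle, then sudden full load
print(rolling_avg(temps))  # [40.0, 40.0, 40.0, 47.0, 54.0, 61.0]
```

Note the lag only shows up during transitions, though; a plain rolling average of one core would still converge to the right number once the load stabilizes, so it can't be the whole story for a steady-state offset.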
If power consumption is what you want to test, I'll just play with the vcore. I have mine undervolted by 0.16V, so restoring it to stock and even overvolting should show if power consumption is the reason, or at least a contributing factor. It will be interesting to see how the power consumption reading in HWMonitor changes as well (if it does). Mine doesn't seem to be locked at the TDP like yours.
http://www.fileden.com/files/2008/3/...alTempBeta.zip
http://img169.imageshack.us/img169/1300/realtemp345.png
I've added ATI GPU temperature monitoring to this version. I am using the ADL (AMD Display Library) SDK 2.0 to accomplish this which isn't the most efficient hardware monitoring library.
Update: This seems to work on the 2000, 3000, 4000 and new 5000 series of ATI cards. The above post of my X1950XT was just for an example to show the general layout of the new ATI Information window.
On some GPUs in some situations, the performance of this library is terrible. On slower cards, I wouldn't recommend sampling the ATI GPU temperature sensor more than once every 5 seconds. RealTemp also includes the ability to easily temporarily disable this feature if you want to run any A / B comparisons or benchmarks.
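The "don't poll more than once every 5 seconds" advice amounts to a time-gated cache in front of the slow sensor call. A sketch of the idea; `ThrottledSensor` and `read_sensor` are my own names, with the real ADL call standing behind `read_sensor`:

```python
import time

# Rate-limited polling of a slow sensor: return the cached value
# unless at least `interval` seconds have passed since the last
# real read. read_sensor is a stand-in for the actual ADL call.

class ThrottledSensor:
    def __init__(self, read_sensor, interval=5.0, clock=time.monotonic):
        self.read, self.interval, self.clock = read_sensor, interval, clock
        self.last_t, self.last_v = None, None

    def value(self):
        now = self.clock()
        if self.last_t is None or now - self.last_t >= self.interval:
            self.last_t, self.last_v = now, self.read()
        return self.last_v
```

The GUI can then refresh at its normal rate while the expensive library call only fires every 5 seconds.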
If this new feature works for you without any issues then that's great. If not, you can always turn it off completely by using the INI option:
NoGPU=1
I haven't heard of any problems yet on systems with a single GPU. If you are using an x2 card or any unusual CrossFire setup then some feedback would be great. In CrossFire mode, RealTemp is designed to report the temperature of the hottest GPU in the button on the main GUI. That button opens another window where you are supposed to be able to monitor each GPU individually.
With multiple monitors it might label things oddly, like GPU 1, GPU 2, GPU 3, GPU 4, but if you have two GPUs in your system then GPU 1 / GPU 2 should belong to the first GPU and GPU 3 / GPU 4 should belong to the second GPU.
I wish I had more time and some actual ATI hardware to test on, but I've had to wing it because my new 5750 crapped out halfway through development. I decided to get a refund rather than try my luck again. I might wait for a 5850 next time I'm feeling lucky.
On Nvidia systems, RealTemp can report the GPU temperatures for each GPU in SLI mode but if you are not in SLI mode, then it can't report both GPUs. That's a work in progress.
One user had an ATI card as his main card and an Nvidia card as his second card. RealTemp can't correctly handle situations like that either. It's one or the other at the moment.
I added another INI option that might help a little in the above situation:
NoATIGPU=1
That might allow only the Nvidia GPU to be detected and reported in RealTemp so you can choose which one you want to see.
If I had some more hardware to recreate the infinite number of combinations I could probably do better but I don't. :(
If this new feature is of any use to you then post a screen shot. :up:
Of course. 2600XT. :D
http://i46.tinypic.com/qn30n4.jpg