So Uncle, how do I use the program to show ALL 12 cores on a dual socket westmere(gulftown) system?
I know, don't ya just hate these people that show up with questions like this?:rofl:
You dual Gulftown guys were just the ones I was thinking about when it comes to "too much information."
I guess I'll have to add a switch to RealTemp GT so you can have a look at both CPUs. Maybe someday.
Thank you for the quick reply :). I hear ya about the 6 cores (can't wait). Ummm, a good reason (long pause...). I guess I'm just lazy ;). I just started "part-time" folding with my main machine (quad GPUs) and I'm still using the machine for simple things while folding (web, email), and it would make it easier to just take a glance at the bottom right corner instead of opening up Precision and seeing all the GPU temps at once. It's just that Real Temp uses very little resources, a big thanks to you.. Alright, I tried :)... Thank you
I'll keep your request in mind.
By the way, welcome to XS. :up:
RTCore.dll
Version 3.49
http://www.sendspace.com/file/y8p3cr
Just a few minor updates to the RealTemp - RivaTuner plugin including better Core i5/i7 support when not using hyper threading. It also supports the 6 core Gulftown CPUs. If I ever see a screen shot of that then maybe I'll be motivated to update the plugin to support Dual Gulftowns.
Unzip and copy RTCore.dll to the RivaTuner\PlugIns\Monitoring directory.
Open up the graphs in RivaTuner and click on the Setup button. In the Hardware monitoring Setup window, click on the PlugIns button. Make sure RTCore.dll has a check mark beside it.
The plugin includes a battery level meter for laptop owners.
http://www.intel.com/pressroom/archi...202comp_sm.htm
SANTA CLARA, Calif., Dec. 2, 2009 – Researchers from Intel Labs demonstrated an experimental, 48-core Intel processor, or "single-chip cloud computer," that rethinks many of the approaches used in today's designs for laptops, PCs and servers. This futuristic chip boasts about 10 to 20 times the processing engines inside today's most popular Intel® Core™-branded processors.
----------------------------------------------------------------------------
The system tray isn't big enough to monitor all of them.
I don't even want to think about two of these on the same board. :rofl:
Kevin...I think they just got you :D
BTW - I have an H55 board here but I see no GPU temp in RealTemp :D
Can you make Real Temp auto-detect how many cores a CPU has? Because this is just the beginning.. 48 cores is nothing.. there will even be 256 cores.. and I have chess engines that are ready to handle 2048 cores!!
I have chess engines with no core limitation.. so the more cores it sees, the better :)
Maybe this is a tip for future programming of your great tool!!
JP.
Uncle, is it possible to have a G15/19 integration?
I'll think about it. The RealTemp / RivaTuner plugin works well for this. You can use RivaTuner instead of RealTemp and the RealTemp plugin will show your Intel temps on a G15. I was doing that for a while so I wasn't sure if G15 support was really needed. I'll put it on the maybe-to-do list.
I was doing some beta testing tonight and one user with a newer 5870 card was having problems with RealTemp crashing. If you have any sort of ATI card or cards in your system, could you post a screen shot of the RealTemp GPU window and let me know if this feature works. I haven't seen any problems when testing on Nvidia hardware. I'm just trying to find out how many users this problem is happening to. So far it gets one thumbs up from a 4670 owner.
http://www.sendspace.com/file/5100no
v 3.40 bug: RealTemp shows 0.00 MHz as the CPU freq. The CPU is an Atom 330; v 3.00 worked fine. It also says "133.86 x 0.0", so there's a bug when reading the multiplier.
http://www.stickledown.co.uk/ATI 5850.jpg
Is this what you need Kevin?
I have a Sapphire ATI 5850 and it all seems to work OK.
Between version 3.46 and v3.5 I have lost the ability to display the GPU temp in the Tray Info. The CPU displays OK but the GPU temp will not stick with the 'Show icon and notifications' setting (it was working OK with 3.46 before I went to 3.5). Whatever I select, it reverts to 'Only show notifications'.
(Win 7 64 Pro)
Hi Kevin
Once again fantastic software.
Just noticed a little issue with RealTemp 3.50. With my Quad Core Extreme processor, if I adjust the multiplier to 9.5 or something, the % load value does not read as 100% when the processor is loaded. For example, a multiplier of 9.5 gives a % load of 97.4% even though the processor is @ 100% load.
The clock speed is read incorrectly too, as RealTemp treats a 9.5 multi as a 10 multi.
Thanks
John
ExBrat72: Thanks for the screen shot. One user had a pair of 5870 cards in Crossfire and was having problems with RealTemp starting up. RealTemp seems to work fine with single ATI cards or with x2 cards. He's going to do a W7 reinstall to see if that makes any difference since he was having a few other issues besides RealTemp.
Thanks JohnZS. Those Extreme processors always give me headaches. I'll send you a test version to see if we can finally fix this issue for you.
ekerazha: Recent versions of RealTemp have been switched to using the internal timers to determine the multiplier. This is a more accurate method except when a CPU like your Atom doesn't have any of these timers. Try adding this to your RealTemp.ini file:
MSRMulti=1
Hopefully that will use the original method. I'll try to make this automatic in future versions so if RealTemp doesn't find any working timers, it will automatically switch to this method.
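For anyone curious, here's roughly the idea behind the timer method. This is only a sketch: it uses the APERF/MPERF counters (MSRs 0xE8 / 0xE7) through the Linux /dev/cpu/0/msr interface purely for illustration, which is not necessarily what RealTemp does internally on Windows.

```c
/* Sketch: estimate the effective multiplier from the APERF/MPERF ratio.
 * Needs root and the msr kernel module. Pass the bus clock and the rated
 * CPU frequency in MHz on the command line. Illustration only. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t read_msr(int fd, uint32_t reg)
{
    uint64_t value = 0;
    if (pread(fd, &value, sizeof(value), reg) != sizeof(value)) {
        perror("pread");
        exit(1);
    }
    return value;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <bus MHz> <rated MHz>\n", argv[0]);
        return 1;
    }
    double bus = atof(argv[1]), rated = atof(argv[2]);

    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    uint64_t mperf1 = read_msr(fd, 0xE7), aperf1 = read_msr(fd, 0xE8);
    usleep(250000);                               /* sample for 250 ms */
    uint64_t mperf2 = read_msr(fd, 0xE7), aperf2 = read_msr(fd, 0xE8);
    close(fd);

    /* APERF counts at the actual clock, MPERF at the rated clock, so the
     * ratio of the deltas is the average speed relative to rated. */
    double ratio = (double)(aperf2 - aperf1) / (double)(mperf2 - mperf1);
    double mhz = rated * ratio;
    printf("Effective speed: %.1f MHz, multiplier: %.2f\n", mhz, mhz / bus);
    return 0;
}
```

On a CPU that doesn't implement these counters, the deltas are zero or nonsense, which is why a fallback like MSRMulti=1 is needed.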
Quote:
Any problem solved is a new problem made.
I kind of like that quote John. :up:
Quote:
Between version 3.46 and v3.5 I have lost the ability to display the GPU temp in the Tray Info.
I can't remember changing anything that would cause this but I'll look into it. Does this option still exist in the RealTemp Settings window? Maybe I can blame this on Windows 7. :)
Thanks Kevin
Ah yes, those Extreme Editions can be a pain; if it isn't the extreme sensors, it's the manic multipliers!
Intel certainly like to keep you on your toes ;)
That quote "Any problem solved is a new problem made" rings true for pretty much everything ;)
Thanks once again for your efforts and hard work
John
I'm new to this, and 150 pages is a bit much for me to read...
HWMonitor shows my load temps as 52°C, Real Temp shows 42°C. Which one do I believe?
Using RealTemp version either 3.00 or 3.40, can't remember. Will check when I get home
Post a screen shot of RealTemp and tell me some details about your processor in case the screen shot isn't obvious enough and I will tell you why different programs are reporting your core temperature differently. Intel didn't do a great job documenting how to get accurate core temperatures out of their CPUs so different programmers have made different assumptions about the specifications of your CPU.
Hi Uncle. I am having the same "problem" as mejobloggers. It's on a laptop with a Pentium T5200. It definitely doesn't feel like 74C, because when my other laptop with RealTemp hits that high, I can feel heat coming from the exhaust fan and keyboard.
http://i18.photobucket.com/albums/b1...al/Capture.jpg
This was taken with it sitting on an Antec Laptop cooler with 2x60mm fans spinning on "high."
Here is the Intel Spec for a T5200.
http://processorfinder.intel.com/det...px?sSpec=SL9VP
Thermal Specification: 100°C
Another name for thermal specification on the mobile processors is TJMax or the maximum junction temperature. Intel Core processors don't have a typical thermometer in them that software can read. They have a reverse thermometer that counts down to zero as the CPU gets hotter. On these early CPUs, there is no way to read TJMax from the CPU. All you can do is look it up on the Intel site. It's possible the Intel information I posted is wrong but for the early laptops with a CPUID = 6F6, TJMax = 100C is usually right.
Here's the formula used:
Reported Temperature = TJMax - Digital Thermal Sensor Reading
The raw data coming from the digital thermal sensor is reported in RealTemp in the Distance to TJMax boxes.
In your picture the formula would be:
Reported Temperature = 100 - 28
Reported Temperature = 72C
It's obvious that HWMonitor is using TJMax = 85C in this calculation:
Reported Temperature = 85 - 28
Reported Temperature = 57C
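If it helps, here's that arithmetic spelled out in a few lines of C. Nothing RealTemp specific, just the calculation from above with the two different TJMax assumptions plugged in:

```c
/* The sensor only reports a countdown (Distance to TJMax), so whatever
 * TJMax value a program assumes shifts the reported temperature directly. */
#include <stdio.h>

int main(void)
{
    int distance_to_tjmax = 28;     /* raw sensor reading from the screen shot */

    printf("TJMax = 100C -> %dC\n", 100 - distance_to_tjmax);  /* RealTemp:  72C */
    printf("TJMax =  85C -> %dC\n",  85 - distance_to_tjmax);  /* HWMonitor: 57C */
    return 0;
}
```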
You have to remember that this is the temperature at the hottest spot on the core, and the heat dissipates rapidly from that point. 72C on the core is not going to translate to 72C on the bottom of the laptop. Many laptops have very marginal cooling systems. They don't have the room for a large heatsink and fan so they get hot. The heatsink may not be attached snugly to the CPU in some laptops. If this is the case then the CPU can get very hot within seconds of pushing the power button.
I pulled my laptop apart and shoved a penny in there to take up some of the slack so the heatsink could make better contact with the CPU. If you pull your laptop apart and it is easy to wiggle the heatsink back and forth then that could be part of the problem with the high core temperatures you are seeing.
My wife had a Dell T7200 laptop and it used TJMax = 100C and the reported temperatures didn't look too unusual at all for a laptop.
If you are using Windows 7 or Vista, there is a setting in the Control Panel -> Power Options called Minimum processor state. That needs to be set to a low number like 5% to get your CPU to idle down properly. The CPU might be getting more core voltage than it should at idle if this is not set correctly. In XP, set the Power Scheme to the mobile / laptop one to control this.
My opinion is that RealTemp and the Intel documentation are correct for your CPU. If you think TJMax should be set to 85 then you can manually do that in the Settings window of RealTemp, but I think your reported core temperature will be less accurate if you do.
For the Q6600 B3, RealTemp by default should be using TJMax = 90C. If some other software decides to use TJMax = 100C for that CPU then it will report a core temperature 10C higher than what RealTemp reports.
I went with 90C for the B3 because the B3 consists internally of two E6600 B2 cores. The E6600 CPU when tested has a TJMax = 90C. No other software to the best of my knowledge agrees with that.
Intel says that they increased TJMax by 10C when they went from the Q6600 B3 to the G0. Most Q6600 G0 owners would agree that TJMax=100C for their CPUs so 90C for the B3 sounds reasonable to me.
The topic of TJMax is full of misinformation. You'll have to decide for yourself what value is most credible. I've seen some Q6600 CPUs where core 2 and core 3 seem to be set 5C higher than core 0 and core 1, so your CPU might actually be close to TJMax = 90, 90, 95, 95.
Post a RealTemp CPU Cool Down Test and I'll have a look. Include your room temperature and let me know if your case is open or closed.
I can't find the "RealTemp CPU Cool Down Test"
But here are the idle/load temps. Side of case is off
Room temp: 22 deg C
Cooler: Megahalem
Settings: stock, everything on auto
Thanks :)
Your sensors look fine. These 65nm sensors didn't have the sticking issues that the newer 45nm sensors have. You would have to heat your CPU up to about 70C before I could be absolutely sure, but it looks like TJMax is fairly consistent across all 4 cores. Actual TJMax might be 2C higher on core 2 and maybe 1C higher on core 3. Intel agrees that due to manufacturing variances, TJMax is not 100% consistent from one CPU or core to the next, but they have never released any information about an error specification like +/- 5C, etc.
A Quad core with a good cooler like a Megahalem should idle about 7C or 8C above room temperature which in your case would be about 30C. Core 0 tends to be the most accurate and in your screen shot of RealTemp at 1600 MHz and 1.10 volts, it is showing 28C which looks reasonable to me. All of these sensors have some slope error and if they read a couple of degrees too low at idle, that's not unusual at all. They weren't designed to report 100% accurate idle temperatures. They are much more accurate at 70C and beyond.
HWMonitor has decided to assume that TJMax=100C.
Reported Temperature = TJMax - Digital Sensor Reading
Both programs are reading data from the same digital sensor. It's obvious that if you assume a higher TJMax value that the reported temperature will also increase by the same amount. RealTemp uses TJMax=90C for a Q6600 B3 so HWMonitor will typically report temperatures 10C higher.
Flip a coin and use whatever TJMax value you think is correct. You can even split the difference and use 95C. Personally, I wouldn't do that because I believe that core 0 and core 1 and core 3 are likely very close to 90C and core 2 is very close to 92C.
I don't believe a lot that was said at Intel's Developer Forums last year when it comes to TJMax but their updated list of TJMax values shows a Q6600 B3 has a TJ Target = 90C.
I believe that one. :)
I am wondering if Intel's engineering department ever read the specs for TjMax ;)
Or was there such a large acceptable % of error in this equation?! But provided most CPUs fitted the line/slope, that was the given value.
Anyway, any news on the QX multipliers and % load enigma?
John
I just had a look at the Intel docs and most Atom CPUs are listed with a thermal specification of 90C. The Atom 230 and 330 however show a thermal specification of 85.2C.
As with most Intel documentation, it's unclear whether this is a TJMax value, a Tcase max value or maybe a TJ target value or maybe just a mistake. Who knows. Actual TJMax might be 85C or 90C or 100C or any number in between. I've never tested an Atom so 90C was my best guess.
Here's the list of Atom CPUs on Intel's website:
http://processorfinder.intel.com/Lis...008&SearchKey=
I don't think that actual TJMax is only 85C for your Atom 330 so I'm going to leave RealTemp as is. You can manually adjust it to 85C if you think the Intel docs are accurate. With a passively cooled CPU or one with a tiny heatsink and small fan, I don't have any method to try and calibrate these.
Don't know where to post but is OCCT 3.10 the last version? And MEMTEST is grayed :/
what should be the calibration/tjmax settings for i7?
as there seem to be many variations of screenshots showing the i7 at different temps
For Nehalem/Lynnfield you don't need to calibrate TJMax because it is written in an MSR.
How about Bloomfield, do we need to calibrate?
Uncle,
Before you kill me, I would like your input on how to calibrate realtemp properly. I seem to be having differences in temperature readings from RealTemp and HW Monitor. Running the sensor test now and will post a screenshot.
Shoot now :).
***removed photo because it affected the layout****
What do you advise? Do I offset the cores by a few degrees?
Mirror for RealTemp on my RS Account:
http://rapidshare.com/files/32808805...pBeta.zip.html
We had this discussion before here or somewhere else and I have no idea what HW Monitor is reporting and why its temperatures are different. All software should be reading the same sensors and should be reading TJMax directly from each CPU so there's no reason why any software should be reporting something different. Core Temp and RealTemp are usually exactly the same. HW Monitor is different but I don't know why.
Your first 3 cores look very close. I'd leave them as is. The majority of Core i7-920 CPUs that I've seen seem to have a 5C difference in TJMax between core 3 and core 0. Yours look more like about 3C. You could set that one to 103C or leave it as is and keep in mind that it's probably not 100% accurate. For all of these, I'd trust core 0 more than any of the other cores. These sensors are excellent out of the box so there's no use getting too technical trying to calibrate them.
My 2 cents: calibrate HWMonitor, because your Nehalem has a TJMax of 100 and it's written in the MSR, so what RealTemp shows you is for real and what HWMonitor shows is something else, but I'm not sure what. :D
What's CoreTemp or Everest readings?
LE: Kevin was fast. :)
Cheers for the support lads. I won't touch them :). HAPPY NEW YEAR EVERYONE!
Have you tried Everest 5.30.1977 or newer? It has changed the way it reads and shows temps.
http://www.lavalys.com/support.php?lang=en
I think there is a second way to read core temperatures from these CPUs which does some sort of averaging. The method RealTemp and Core Temp uses is to read the instantaneous temperature and report that.
Other programs might be using this second method because their reported temperatures do not directly correspond with what the temperature register is showing.
Hey Unclewebb I got one for you with my 4870x2....
You got me excited when you mentioned that you added ATI card support. I downloaded RT 3.5 but when I try to run RT, no window shows up. I can open up Process Explorer and see it, though it's not using resources, like it is just sitting there idle. If I try to "bring window to the front", Process Explorer tells me there is no window. I can go into the ini file and activate the "nogpu" line and then it will open up without a problem.
I found 3.49 and don't have any problems with RT, and it shows my first GPU just fine. If I open the GPU window up and click to the second GPU temp, it says GPU 3 but the temps don't change. I can click it again and go back to GPU 1 but the temps still don't change. I can close the window and the GPU temp in RT will work fine.
This is on my system in my sig. The card is actually in the 3rd slot to give the D14 some breathing room. I tried moving it back to the first slot to see if it would change anything.....it didn't.
If I can help you let me know. Not a big deal for me but am looking forward to when you get it working the way you meant for it to be.
The ATI code definitely needs some work in RealTemp, especially when using CrossFire. The 5750 card I bought for development purposes was such a buggy, crashing POS that I returned it and got a refund rather than try my luck again. Hardware and software both seemed to have some issues so I went back to Nvidia.
I should have some time next week to have another look at this but without hardware to test on I'm not sure how much progress I can make. Thanks for the offer to help. I'll try to get back into this project next week.
I've been working on a different project the last few weeks called ThrottleStop, which is designed to help with Dell's laptop throttling issues. I added support for the Alienware M15x today. Anyone who has purchased a Dell laptop in the last year needs to do some testing. They have quite a few models that throttle like crazy and slow down to a crawl when fully loaded.
http://forum.notebookreview.com/show...postcount=2144
Hey uncle,
With the latest RT version, my Core 3 keeps on disappearing from my taskbar :(. If I run RT all 4 cores show up but after a while, one of them, Core 3, is hidden by W7 due to inactivity. I specified that I don't want that but W7 keeps on doing it. I did not have this problem with the previous version.
On a side note,
RT does not seem to recognise my CFX configuration. When I select a different GPU the temps remain unchanged. This is not right, because one of them is supposed to run hotter since it is kind of suffocated.
http://i266.photobucket.com/albums/i...byte/CFX-1.jpg
I'm innocent. I can't remember doing anything recently to RealTemp that would change how the system tray works. I'll see if there is anything I can do differently to work around this new Windows 7 feature but so far no one has reported this problem so I'm not sure what's going on. Are you sure you have the Windows 7 System Tray notification stuff set up correctly?
I'm happy to see that RealTemp 3.50 at least runs on your ATI CrossFire system without blowing up.
Here's the theory. When running CrossFire, one of your GPUs is supposed to go to sleep to save power at idle. In 2D mode, you don't need an uber GPU setup to move the mouse pointer around on the screen. To monitor the core temperature of each GPU, I would need to make sure that both GPUs are awake which would kill this power saving feature. I didn't think that would be a good idea but most users aren't happy when RealTemp only reports the temperature for one GPU when at the Desktop.
When you start gaming and both GPUs are actively working, RealTemp is designed to monitor both GPUs and collect temperature data for both of them and should report the highest GPU temperature in the GPU button on the main screen. That's the theory.
Does this work correctly? At the moment I don't know. Not having access to an ATI GPU, let alone a pair of them in CrossFire mode, makes development a pain in the butt.
You could try running a GPU-Z log while gaming and compare the peak GPU core temperature to what RealTemp logs to see if RealTemp really is reading both GPUs when in 3D and reporting the highest GPU core temperature. It should but I don't have a way to test this.
I might try to redesign this and let RealTemp wake up both cores while in 2D so it can read and report the temperature of both cores. There's not really a practical reason to do this since your GPU core temperature shouldn't be an issue in 2D but I think most enthusiasts would be happier if I said to hell with power saving and went for more reporting. At least there's a reason why RealTemp does weird things on CrossFire systems.
I was hoping to get back to work on RealTemp last week but I'm still having fun on the Notebook forum working on my new ThrottleStop program. Maybe next week. I'll put your name on my CrossFire testers list. Hopefully with some help I can make some improvements.
Uncle,
I managed to sort out the Tray Issue with a full Windows 7 re-install. I was having issues with CFX in Dirt 2 and after 20 million Catalyst drivers installed and removed, I had to format :rofl:. Windows 7 was the culprit, not RT. Regarding CrossFire, the behaviour of the 2 cards is influenced directly by the Catalyst driver. For example there is a certain version floating around that completely shuts down the second card. It locks the fan at 19% and gives it a more aggressive power saving option. For those who want to try it, 9.12 Hotfix. This is an interesting feature but as a user I don't like losing control of my card. I have no control over the fan or anything until, as you correctly stated, I run a game and CFX kicks in.
Other Catalyst drivers such as 9.10/9.11/9.12/(beta) 10.1, which is a modified 9.12 with CFX profiles, put the second card in sleep mode but do not take control away from the user. I can still set the fan speed and influence the temps, whereas with the 9.12 Hotfix, the one I mentioned above, I can't do anything. CCC decides to adjust speeds and the 4890s tend to get hot. I don't like it when it's idling at 68 degrees because CCC keeps the fan at 19%. Amazing power saving feature, innit?
I will gladly offer support because it is the least I can do. You've put so much into RT and I know it is annoying when someone shows up with a few issues and starts whining. Let me know what you want me to do and I will do my best to offer feedback. It would be great to have one app to monitor everything. It is rather annoying that I have to run MSI Afterburner for the GPU only. Do you think there is a way to implement in RT a feature similar to MSI Afterburner's, where your temps (GPU there, CPU for RT) are displayed while gaming? That would be great :clap:
When working with ATI drivers and software, I gave up early in the project because it seemed that not all the bugs were being caused by me. I still scratch my head when I see RealTemp working fine for one ATI CrossFire combo but another user reports that the same set up causes RealTemp to crash hard. :shrug:
I'll try to put some more time into it this week to see if we can figure anything out.
You could always try using the RealTemp RivaTuner plugin. Instead of running RealTemp you just run RivaTuner instead and with the plugin you can see all of the RealTemp data on screen while gaming.
When my X3440 gets above 75°C, I noticed that it begins to throttle and lower its multiplier.
I thought these CPUs could handle the heat, but mine seems to have some issues when it gets towards those temperatures. :/
All power saving modes are disabled and 'High Performance' setting was used in Windows 7.
The weird thing that is puzzling me is why my Processor Power Management doesn't give me the minimum and maximum power state. Is this normal?
You usually need to have EIST / SpeedStep enabled in the bios for the Minimum and Maximum processor state to show up in the Control Panel -> Power Options. Some motherboards need EIST enabled for turbo boost to work properly too since the two are related. Once EIST is enabled, disable C1E and set the Minimum to 100% to keep the multi from dropping down at idle if that's what you want to happen.
Thermal throttling is when the multiplier drops down to its minimum which I think is 9.0X for an X3440. Turbo throttling happens when your multiplier starts dropping down to its default value. Turbo boost is dependent on core temperature so somewhere around 80C, some motherboards and CPUs will disable all turbo boost leaving you with a 19X multiplier. Some motherboards let you bypass turbo throttling based on power consumption or core temperature while other boards do not.
Arg, I tried enabling SpeedStep and set the Minimum to 100% but I am still getting the multiplier to throttle.
http://img46.imageshack.us/img46/9728/99930008.png
Now in my BIOS, there is a setting called CPU Thermal Throttling. I'm 99.9% sure that is what is causing this. The only problem is that it is grayed out and it is set to Enabled. :shrug:
Edit: It looks like this, except that it is grayed out.
http://www.pcshoptalk.com/staff/stef...8xtreme/23.jpg
Based on the 13.5% load reported by Real Temp, I'd wager you're simply seeing the near-idle periods between LINPACK iterations. LINPACK-based stability tests like LinX and Intel Burn Test have cyclic loads which last for a few minutes at 100% and then have a few seconds of almost no load in between. It's extremely regular. Check out these temp graphs of mine (concentrate on changes in temp, as the absolute temps are wrong), they give you an idea of how much load drops at regular intervals:
http://i176.photobucket.com/albums/w...pack_temps.png
Also, it's a good thing that Thermal Throttling is greyed out. Don't disable it :)
Yeah I know that. It's when my CPU downclocks that it worries me, since I cannot tell if my CPU is stable or not during testing. One run might be 45.2 GFlops when it is running normally, but once it begins to throttle it's 36 GFlops (or whatever).
I'm going to go ahead and reset my HSF. I have a weird feeling that the reason it is downclocking is because of a faulty sensor.
This topic was brought up in the Notebook forum I've been hanging out at recently. It might be a Windows 7 feature / issue where it is ignoring the Minimum processor state when you set it to 100%. This used to work fine in Vista and was the best way to keep your multiplier from sagging down. It still works in Windows 7 but it doesn't seem to work this way on all boards.
When this is set to 100%, at idle, your multiplier should not be dropping down below the X3440 default of 19.0, ever. With turbo boost enabled, the minimum multiplier should be 20. That's obviously not happening here.
I'm still not sure if this is a Windows bug or maybe a bios bug. On some computers something has changed but I'm not sure what. Your computer has gone green and thinks everyone should be saving some power. :)
Tell me about it. :0
I did some more testing and found out that it must be some thermal issue because when I reverted to a smaller overclock (3.4GHz) with more Vcore, it throttled when the temperatures reached over 70 degrees.
I emailed ASRock about this but I doubt I'll get a response.
So would this more likely be a board issue than something with the CPU? If so, I can just buy another brand.
I don't believe this is a CPU issue. Try to find someone else running an X3440 on a different board and see if their CPU throttles down at idle or if the multiplier stays at 19 or 20. Check what OS they are using too. I think what you are seeing is a board / bios issue.
I've been playing around with an E8600 and have some issues here.
http://img258.imageshack.us/img258/4614/266x6.th.jpg http://img13.imageshack.us/img13/2650/400x10.th.jpg
The sensors don't seem to do a thing at idle :mad: Anything that can be done here to report idle temps?
You can buy a new system with a Core i7-900 series CPU. Those sensors work excellent at idle or at full load.
With a 45nm Core 2 CPU, you're screwed. Your sensors actually work better than most Core 2 CPUs and at least now you know their sticking point. These sensors were not designed to report accurate idle temperatures. For that purpose, they're all junk. Some are worse than others but none of them are worth a damn.
Yeah, that's what I thought. This is a build I'm working on for my son. He's about to turn 13, and his current AGP system is a bit out of date. I'm sure he's not going to check the temps once I turn it over to him. I just wanted to get everything checked out before I give it to him.
BTW, is the correct TJMax for the E8600 95C?
The Intel TJMax spec is 100C for an E8600, but they also admit that is just a spec and actual TJMax can and does vary. They have never stated how much variation to expect or whether that variation is on the plus side, the minus side, or a bit of both.
SimpleTECH: Sounds like an interesting fix for throttling. Whatever works is a good thing.
That is because during the binning process, they mark and sort of burn into the CPU's internal TJMax register the temperature level at which that particular CPU started to error during the toasting process, and each CPU is unique. Of course they also have a certain "acceptable" range in order to separate the good ones from the bad ones.
So there is no real technical or business advantage in generalizing the same TJMax for all CPUs. Otherwise they would be throwing away a lot of CPUs just to make them all have the same absolute TJMax.
The absolute TJMax temp may differ; say one CPU is at 110 while another could be at 90, even if they originated from the same wafer.
Wow thx, works great!
http://img710.imageshack.us/img710/8702/tempdu.png
Nice temps. Especially on Core1. :D
quick question out of curiosity. is the default tjmax reported by realtemp correct for my xeon x3440? i've noticed that realtemp correctly recognizes my cpu, whereas coretemp thinks it's an i7. however, i'm not sure what my tjmax is supposed to be. not many people are running these xeons, so all i manage to find are numbers for the core2 quads and the i5/i7's.
Hi Uncle, thanks for all of your efforts refining and troubleshooting RealTemp. I purchased a QX9650 and shortly thereafter, found your utility.
Have been using and following its development, which brings me to my post. I am having difficulty getting Real Temp to read the .5 multiplier in version 3.49.
JohnZ had the same problem in post #3917 and you were going to send him a "beta" to see if that fixed it. Didn't see if that fixed the issue; if so, could you point me towards the beta? If not... what can I do to help??
Vista Ultimate x64
Thanks for all of your efforts.
Here's the latest beta. Give it a try. :)
Hi burebista, thanks for the link. :0)
Fileden "seems" to be down right now??
Will try again later and let you know how the beta works.
For the QX9650 try adding this to the INI file:
MSRMulti=1
namurt: The QX is a special processor and the Core i7 method that works on most Core 2 CPUs has issues on it, so usually the older MSRMulti method works OK for that one.
The X3440 should have TJMax written into a register within each core of that CPU and RealTemp should have no problem reading that. The typical value is 99, but I think I've seen 92 or 93 on some of the ES processors and maybe a few that are 100. Each CPU is unique which is why software has to read the info from that register.
There is no guarantee that the info Intel wrote in that register is 100% accurate, but relying on that register is all monitoring software can do.
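If anyone is curious what reading that register looks like, here's a bare-bones sketch. It goes through the Linux /dev/cpu/N/msr interface purely for illustration (RealTemp uses its own driver on Windows), but the register itself is the same IA32_TEMPERATURE_TARGET MSR (0x1A2) with the TJ target in bits 23:16.

```c
/* Sketch: read the TJ target out of IA32_TEMPERATURE_TARGET (MSR 0x1A2).
 * Bits 23:16 hold the target in degrees C. Needs root and the msr module.
 * Illustration only; not how RealTemp itself accesses the register. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int cpu = 0; cpu < 16; cpu++) {
        char path[64];
        snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);

        int fd = open(path, O_RDONLY);
        if (fd < 0)
            break;                      /* no more logical CPUs */

        uint64_t msr = 0;
        if (pread(fd, &msr, sizeof(msr), 0x1A2) == sizeof(msr))
            printf("CPU %d: TJ target = %uC\n",
                   cpu, (unsigned)((msr >> 16) & 0xFF));
        close(fd);
    }
    return 0;
}
```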
thanks for the response, uncle! appreciate it. :)
realtemp is reporting a tjmax of 97C by default. (coretemp is the same.) i'm content with trusting this number as i never allow my distance to tjmax to get any closer than 25 anyway.
The1 - whenever there are problems with Fileden, know that we also host the latest RealTemp here. ;)
Hi, you're going to hate me, but I would like to ask for some assistance with calibrating RealTemp.
I have a Q9550 (E0) running at 3.33GHz at 1.144V. With the Noctua NH-D14 it idles at 39-40C (ambient of 22C), which is quite disappointing. Though, I am aware that 45nm Wolfdale/Yorkfield chips often have stuck sensors.
Idle:
http://dl.dropbox.com/u/95820/Real%20Temp%203.40.jpg
Sensor Test:
http://dl.dropbox.com/u/95820/Window...%20%282%29.jpg
I'm not sure what to make of it, but I'd like to use the idle temps of the cooler-running cores. Although I think I've seen you state that Core0 is generally more accurate, logically thinking it seems to be the wrong one.
I plotted the data in Excel, and came up with this:
http://dl.dropbox.com/u/95820/Untitled.png
Judging from that ^, it actually seems that Core1 doesn't move linearly, and that Core0 may have a different tjmax.
Any help would be greatly appreciated, thanks :)
Wondering if anyone can help on this issue. RMA'd my Eclipse+ in mid-December and opted to wait for a new one instead of a refurb. It arrived two weeks ago and I just rebuilt the machine last night.
The PCIe bus died on the first one, but I had it at 4GHz (20x200), 1.416v CPU, 1.44v QPI, and RealTemp usually read in the mid 60s C. This new one's been happy at 3.9 (21x187), 1.3v CPU, 1.46 QPI, but RealTemp is now reading 77C! WTF?!? Can that be right?
I installed SpeedFan and it was reading 12 to 15 C lower on each core, so one of them is lying. I do realize they interpret the sensors differently, but that is a HUGE margin. This all made me very paranoid, but P95 has been running with no errors for a couple of hours now so it seems pretty stable. It's a Corsair H50 (exhausting) that I attached with AS5, with its stock fan usually running at approx 1700 rpm under 100% load. The idle temp on the original Eclipse+ was around 32 with RealTemp, but this one idles at around 38 to 40. Is it just because it's a different board? Any ideas?
EDIT: Scratch that! It just crashed, aaaargh!
Also just noticed that it's throttling my multiplier back to 20x from 21x, dropping me to 3.75GHz. So the system is noticing the excessive temps, I guess.
unclewebb, I noticed that Real Temp 3.50 RC6 does not work properly with my X3440 and Windows XP.
When I double-click it, it does not open. If I click on RealTempGT.exe it shows 3 cores.
I then downloaded version 3.40 off of TechPowerUp! and it works fine.
Just thought I'd let you know. :shrug:
http://i49.tinypic.com/2z9addc.png
Yes, they do like to have stuck sensors for some reason. I have 2 of them, core 0 and core 2; both won't go lower than 39C, and core 3 won't go under 36C, but once under stress they are just fine. Core 1 is the only one that isn't stuck for me, hence the nice low of 26C. I love that.. lol. I'm getting the new HAF 922 in the next day or 2, waiting on it to be shipped. Hope that will drop a few degrees..
cjbrown80: RealTemp and Core Temp both follow the Intel directions for reading TJMax directly from the Core i7 CPUs. If SpeedFan does not do this correctly then it won't be able to report your temperatures correctly.
SimpleTECH: Thanks for that. That's not the first I've heard of that issue. RealTemp 3.50 always opened up fine for me so I'm not sure if there is a borked version running around in the wild or what's going on. I'll plug my XP drive in and see if I can figure out any problems. I mostly use Vista or Windows 7 these days and haven't had any problems.
Wishmaker: Your testing might help me find the RealTemp hangs at start up issue.
cincyrob: Your version of RealTemp is so damn bright it's hurting my eyes. :D
M4rk: Intel's sensors on the 45nm Core 2 Quad chips don't start to become consistent until beyond 70C which is where Intel calibrates at least one of them. TJMax can vary significantly from one core to the next on these CPUs but you can't determine or take a wild guess at that until the CPU is at a much higher temperature than what you are running at.
There is so much error in the Core 2 45nm sensors, caused by a variety of issues, that getting accurate temperatures is mostly a guessing game. I tend to trust core 0 the most. Run your CPU as cool as possible and you'll be able to overclock it reliably as high as possible. Don't put too much effort into trying to get an exact temperature out of these crappy sensors. That's not what Intel designed them for and it shows.
No worries. Let me know what I have to do. Sorry for my absence, I've had a month of exams and I used my main desktop like a netbook: editing documents, slideshows and printing projects. Talk about having an overkill system for stuff like that :rofl::rofl:.
I have the RT problem again after fixing it two days ago :(. It seems that it randomly stops working. The process is shown in Task Manager, 2nd process top to bottom, but the program does not load. I have to end the task, reboot, etc.
After you reboot does RealTemp appear normally?
This might be caused by the ATI code I added. You can try adding this to your RealTemp.ini file next time you have a problem.
NoATIGPU=1
I plan to make the ATI code optional in the near future. It samples my 5770 with no problems but for other combinations it's hit and miss. It might work OK when not in CrossFire but might choke when you are in CrossFire. I was hoping ATI would release a driver by now that would magically fix a few things. Did you update to 10.1 recently or are you using an older driver?
Uncle, I saw that with Real Temp 3.50 and an i5-660, the CPU speed and multi didn't match.
1.http://www.xtremesystems.org/forums/...&postcount=101
2.http://www.xtremesystems.org/forums/...2&postcount=21
Any particular reason? Thanks.
Thanks, that fixed this PIA issue I had. :up:
I would end up with multiple RealTemp.exe's in the Task Manager but not see it running (I clicked it multiple times).
I am on Vista 64 SP2 and have a Nvidia GPU and RT 3.50.
EDIT: Just to let you know, as it was totally random, I did not need to reboot to run it after adding that line to the ".ini file". I simply edited it, saved, and ran it, and it opened the first time.
:)
Thanks uncle. Given that the 45nm C2Q sensors are crap, I decided to just calibrate my idle temps, regardless of whether it's the right thing to do.
Also, I just noticed this:
http://dl.dropbox.com/u/95820/realtemp_fsb_bug.jpg
My FSB is at 390MHz, as reported by CPU-Z. New bug? :D
Most of the screen shots are taken at idle. The actual multiplier at idle can be influenced by how the Power Options -> Minimum processor state is set as well as C1E and SpeedStep. I don't know how his computer is set up so it's hard to comment.
In the second post with 4 pictures, CPU-Z and RealTemp are in agreement on 3 out of the 4 pictures. For the first picture, RealTemp is reading a lower multiplier at idle which might be a sign that he has C1E enabled.
In the first post with RealTemp reporting a 25 multiplier, I'm not too sure. RealTemp uses internal timers which can accurately determine the multiplier but unfortunately they are not a protected resource so any monitoring program can change or reset them. I haven't kept track of what program is using or abusing these timers so I'm not sure if maybe one of the other monitoring programs he was using might be using these timers inappropriately.
When RealTemp is running by itself, I haven't seen it get the multiplier wrong on too many Core i7/i5/i3 CPUs.
As for Core 2 CPUs like M4rk, that's another story. :)
I was helping JohnZS with his QX CPU a couple of months ago and accidentally released a "test" version designed for him into the main beta download area. That one off version got hosted everywhere so I said F it. I got side tracked with other projects at that time so left it and since there have been very few complaints, I haven't been motivated enough to fix it. Between that bug and the ATI code blowing up randomly on some computers while working fine on others, I just kind of lost interest in project RealTemp.
M4rk: Try downloading the last official version, RealTemp 3.40, and see if that gets your multiplier right. If so I'll send you an updated version tomorrow.
Not only on the i5, Stasio; on my Xeon X3380 it's the same... RealTemp and Core Temp show 5C more than Everest, SpeedFan, HWinfo and others :shrug:
you can see the temp issue here
http://img215.imageshack.us/img215/5...951125cust.jpg
Here's the formula that all software uses:
Reported Temperature = TJMax - Digital Thermal Sensor reading
All software reads the same data from the same on chip digital thermal sensors. The disagreement is in TJMax. RealTemp and Core Temp assume that value is 100C and other software assumes that value is 95C so RT and CT will always report a temperature 5C higher than the other programs.
Which value is correct? The truth is that there is no documented correct value. All software is guessing.
The only documentation ever published by Intel calls this value TJ target and they've said that this is not a fixed value. It can and does vary from one CPU to the next with the same model number and in my testing it also varies from core to core on the same CPU. Your core 1 reading much lower than the other 3 is a good indication of that.
These sensors are designed for thermal throttling and thermal shut down control and are accurate enough for those purposes. For any other purpose like accurate temperature reporting, the 45nm Core 2 Duo/Quad sensors are crap with a long list of known issues. 100% accurate temperatures from these undocumented sensors from idle to TJMax is simply not possible.
If you want RealTemp and Core Temp to low ball your temperature readings like the other programs are doing then set TJMax to 95 instead of 100.
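For anyone who wants to see exactly where that 5C disagreement comes from, here's a small sketch that reads the raw sensor and applies both assumptions. It uses the Linux msr interface purely as an example; the register involved is IA32_THERM_STATUS (0x19C), with the Distance to TJMax readout in bits 22:16 and a valid flag at bit 31.

```c
/* Sketch: read the Digital Thermal Sensor readout from IA32_THERM_STATUS
 * (MSR 0x19C) and show how the assumed TJMax shifts the reported temp.
 * Needs root and the msr module; illustration only, not RealTemp's code. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    uint64_t msr = 0;
    if (pread(fd, &msr, sizeof(msr), 0x19C) != sizeof(msr)) {
        perror("pread");
        return 1;
    }
    close(fd);

    if (!(msr & (1ULL << 31))) {        /* reading valid flag */
        printf("Sensor reading not valid\n");
        return 0;
    }

    int distance = (int)((msr >> 16) & 0x7F);   /* Distance to TJMax */
    printf("Distance to TJMax: %d\n", distance);
    printf("Assuming TJMax = 100C: %dC\n", 100 - distance);  /* RealTemp / Core Temp */
    printf("Assuming TJMax =  95C: %dC\n",  95 - distance);  /* the other programs */
    return 0;
}
```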
Eh, I prefer the cleanliness of Core Temp, as I get the exact same temp readings from both utils. The extra info in RealTemp is nice but I just don't need it for my under-volted E7600.
unclewebb, just for your info, Real Temp 3.50 won't run if I don't put the NoATIGPU=1 into the RealTemp.ini like you suggested. I have a Sapphire Radeon 4850X2 on Win7 64-bit with the 10.1 drivers, and I am running in CrossFire. Let me know if you need more info.
Uncle Webb,
FYI... I came across this bug in RealTemp 3.50. This is on my wife's older 975X chipset mobo running Windows XP Pro. It does not report the fsb/multi like the other two versions.
quick question, is 105c the correct tjmax for the 32nm i3-530?
It's written in the MSR (no more guessing) so it should be correct. :)
ok thanks.
Hi Uncle, I've been using RealTemp for over a year now. I've upgraded to the latest beta and am getting this issue:
Realtemp crash on open
Event ID 1000
Faulting application name: RealTemp.exe, version: 3.5.0.0, time stamp: 0x4b25596b
Faulting module name: ntdll.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdb3b
Exception code: 0xc0000005
Fault offset: 0x0002de64
Faulting process id: 0x12c0
Faulting application start time: 0x01caab602ffe1283
Faulting application path: D:\Jasjeet\Downloads\OC\RealTemp_350\RealTemp.exe
Faulting module path: C:\Windows\SysWOW64\ntdll.dll
Report Id: 6dc56147-1753-11df-8efd-00221517d003
Windows 7 x64, Q6600 @ 3.2GHz, 4GB RAM, ATI 4850 (maybe an issue getting the GPU temp)
Prime95 for 2hrs passed today to confirm CPU stable
Memtest 200% pass today
Older versions run fine. I can't rule out an OS file system problem, but it's only RealTemp causing an issue. If I log off, then log on, it works OK. But if I reboot/shutdown, then on boot-up it will not launch even with Run as Admin. I have to log off and log on for it to work. Some kind of RAM writing/permission issue, I think.
If this is a problem I hope you can fix it =)
Thanks
I have been getting crashes lately too with 3.50 RC6. It worked fine for months and all of a sudden I'm getting this:
http://i176.photobucket.com/albums/w...temp_crash.png
It works fine if I run any compatibility mode. Oddly enough it started crashing about the same time as my .NET runtime started going crazy and crashing any .NET program that tries to use Windows Forms (even in compatibility mode). But RT doesn't use .NET...