RealTemp GT 3.66
http://www.mediafire.com/?ns1g4y2hinoa977
Thanks for your test results WFO. Post a screen shot if this version of RealTemp GT reads all 6 cores of the new Sandy Bridge E series.
RT GT 3.66 is reading 6 cores! :yepp: I can't believe you worked on this Thanksgiving Day. Thank you. Thank you. Thank you. You went above and beyond! :up:
http://img5.imageshack.us/img5/3025/realtempgt166.png
Quote:
I can't believe you worked on this Thanksgiving Day.
You're welcome and thanks for the feedback. I figured that Xtreme users knowing their correct core temperature was more important than watching football all day.
Here's an interesting comparison on the [H]ard OCP Forum.
http://hardforum.com/showpost.php?p=...7&postcount=21
Even though RealTemp reads the exact same temperature sensors as the competition does, check out how much better it is at recording the highest maximum temperature. RealTemp was able to report a maximum core temperature 15C and 18C higher than Core Temp during the user's test. RealTemp must know a few tricks about recording the maximum core temperature of a fully loaded processor compared to the competition. :)
A good program. A question for the author: do you plan to make the program a widget?
Real Temp 3.70
January 16, 2012
http://www.techpowerup.com/downloads...Temp_3.70.html
Quote:
Originally Posted by changelog
Where can I find a description of the options in the "Settings" window? Specifically, what is "clock modulation"?
In the Pentium 4 and Core 2 era, clock modulation throttling was a method to slow a CPU down internally. For every 8 clock pulses, you could set clock modulation anywhere from 1 to 8 and the CPU would only do work on that many of them. If it was set to 4, the CPU would only work on 4 out of every 8 cycles, so it would accomplish half as much but it would also consume a lot less power and produce a lot less heat. On Intel's latest CPUs this works a little differently. Instead of slowing the CPU down internally by skipping clock pulses, it simply lowers the maximum multiplier, which slows the CPU down externally. When the multiplier cannot go any lower, it resorts to skipping clock pulses like it originally did.
If you want to play around with this for whatever reason you might want to check out ThrottleStop instead. ThrottleStop lets you play with 2 different types of clock modulation throttling and has a nice monitoring panel to show you what your CPU is up to.
http://www.techpowerup.com/downloads...Stop_4.00.html
Many different laptops sneak in one type of throttling or the other to control heat output and to fool consumers into thinking that their CPU is running at its rated speed when it often isn't.
Here is some Intel documentation about Clock Modulation Throttling.
http://www.intel.com/cd/ids/develope...118.htm?page=4
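To make the "4 out of every 8 cycles" idea above concrete, here is a minimal sketch of how that duty cycle maps onto the documented IA32_CLOCK_MODULATION register (MSR 0x19A). It only computes the register value; actually writing an MSR needs ring-0 access through a driver (which tools like ThrottleStop provide), and that part is not shown here.
Code:
#include <stdio.h>
#include <stdint.h>

/* IA32_CLOCK_MODULATION (MSR 0x19A), per Intel's documentation:
 *   bit 4    - on-demand clock modulation enable
 *   bits 3:1 - duty cycle in 12.5% steps (1..7, 0 is reserved)
 * CPUs with the extended clock modulation feature also use bit 0
 * for 6.25% steps; this sketch ignores that. */
#define IA32_CLOCK_MODULATION 0x19A

/* Encode "run N out of every 8 clocks" into an MSR value.
 * N = 8 (or anything out of range) means modulation disabled. */
static uint64_t clock_mod_value(int active_eighths)
{
    if (active_eighths < 1 || active_eighths > 7)
        return 0;                 /* enable bit clear: no throttling */
    return (1u << 4) | ((uint64_t)active_eighths << 1);
}

int main(void)
{
    /* The example from the post: work 4 out of every 8 cycles. */
    printf("MSR 0x%X value for 4/8 duty: 0x%llX\n",
           IA32_CLOCK_MODULATION,
           (unsigned long long)clock_mod_value(4));
    /* Writing the MSR needs ring-0 access; that part is not shown here. */
    return 0;
}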
Realtemp (v3.67?) stopped working properly and I couldn't see the sys tray icons for GPU & CPU. Checked the program directory and found the log file was 184 MB. Even after deleting the log file, for some reason it still wouldn't work right or show the icons in the sys tray. DL'd v3.70 and all is fine.
Is there some way we can set a limit on the log file, i.e., stop logging after 20 MB?
Thanks.
At the moment there is no feature to prevent the log file from growing to 184 MB and beyond. Maybe in the future I will add an option that appends the day or the month to the log file so individual log files don't get too big. I will put that on the things to maybe do list.
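For anyone who wants to work around it in the meantime, here is a rough sketch of the kind of dated log name such an option could use. The RealTemp_YYYY-MM.txt name is only an assumption for illustration, not how RealTemp actually names its log.
Code:
#include <stdio.h>
#include <time.h>

/* Append the current year and month to the log name, e.g.
 * "RealTemp_2012-01.txt", so a new file starts each month instead
 * of one log growing without bound. The base name is only an
 * example; RealTemp's real log name and options may differ. */
int main(void)
{
    char name[64];
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    strftime(name, sizeof(name), "RealTemp_%Y-%m.txt", t);

    FILE *log = fopen(name, "a");   /* append to this month's file */
    if (log) {
        fprintf(log, "sample log entry\n");
        fclose(log);
    }
    return 0;
}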
If you are using Windows 7 and have problems with icons not showing up correctly in the system tray then try using the Windows 7 show all icons feature. RealTemp is not compatible with the Windows 7 hide the system tray icons feature.
Hi Unclewebb,
Realtemp has always worked flawlessly for me for as long as I can remember, tray icons and all. It was just that moment when the log file grew too big that the program wouldn't run when clicked on (although it was showing in Task Manager). Each time I clicked on it to show the interface, it would just add another instance of itself to Task Manager rather than opening. But killing the app, deleting its directory and starting anew (with v3.70) fixed everything. It was just unusual that in the last 2 years or so of using Realtemp this issue occurred. Maybe the log file took 2 years to build (when updating to newer versions I would just merge them over the old directory, so maybe the log file remained in place?).
Still bottom line, I have full faith in the reliability of RT. This app has saved my hardware a few times, notably with the bugged Nvidia driver that messed up GPU fan speeds a couple years ago. Much appreciated!
One of the programs I have on my usb drive for fresh installs, amazing program.
Thank you for giving us RT GT 3.70. Does RT now have command line switches to enable/disable C States and Turbo?
What's the easiest way to make the program start on launch?
Windows autostart?
Cumulonimbus: I don't ever use the command line so I don't plan to add any command line switches to RealTemp.
nksharp: For Windows autostart, I use the Task Scheduler.
http://forum.notebookreview.com/hard...ml#post6865107
If you are an Administrator on your account and you are not using UAC then try dragging a link to RealTemp into your Windows Startup folder.
Hi, need some help here. I just bought a used Q9450 (rest of the system is in sig) which I installed yesterday and very much intend to overclock. Upon installing the CPU I first cleaned it and also my HSF with Arcticlean, or whatever it's called, and put some AS5 on the CPU. I set all voltages in BIOS to minimum value, except the vcore and multi which I left on auto. I also set speedstep and the other CPU features to enabled, and set the FSB to 333. When I got into Windows I started Real Temp and noticed my temps were really weird, which they never were with my old 65nm E6600. I ran IBT and it seemed stable, so I started reading a bit about the C2Q sensor problems and got here. So I ran the sensor test in RT next, with the settings described above:
http://img221.imageshack.us/img221/6...ltageinbin.jpg
At this point I thought maybe something was wrong with my installation so I turned the case on the side, no difference. I then removed the HSF, cleaned it and the CPU, and then put some NT-H1 on it. At this point I ran the sensor test again, using the same settings as before, and here are the results:
http://img819.imageshack.us/img819/8...oltageinbi.jpg
And nothing. Rebooted into BIOS, set vcore to 1.3875V (stock seems to be 1.2375V according to RT) and booted into Windows. Ran the sensor test a third time and guess what? This:
http://img43.imageshack.us/img43/312...vinbiosnth.jpg
THE F:banana::banana::banana: is going on? What readings am I supposed to trust here, none of them? :confused:
Will Sandy Bridge-E support be coming soon to RealTemp?
The temperature sensors that Intel used on their 45nm Core 2 processors were junk. They get stuck at lower temperatures and have a pile of error when trying to report full load temperatures. Intel was always a little sheepish about just how bad they were. I have never seen any official specs for how much error they have, but I know from experience that it is significant.
You are correct. On some of these 45nm CPUs, you can't trust any of these sensors. Intel only intended these sensors to be used for thermal throttling and thermal shutdown control and for those two purposes, these crappy sensors are more than good enough. Intel spent a few more pennies on the Core i sensors but they are still only intended to be used for thermal control. None of them were ever intended to be used for 100% accurate temperature reporting.
As far as I know, RealTemp should work on the Ivy CPUs too.
So in my case, you don't think I can trust any of the readings, right? I mean, considering that I raised the vcore from 1.2375 to 1.3875 and the temp hardly rose on any of the cores, with the exception of core 1, which rose 8 degrees. I would think that the temp should rise a lot more than that considering the huge increase in voltage? Not that it seems to matter, because I can't seem to get the damn thing stable at even 3.3 GHz...
Core 0, the core on the far left of RealTemp, is probably the best of a bad bunch but there is no way to tell for sure. The 2 cores on the left appear to have some significant slope error, where they change at a different rate than the actual temperature changes, and core 1 has some issues with getting stuck at lower temperatures.
If you can't get one of these CPUs stable at 3.3 GHz then you probably have other problems besides core temperature. Run CPU-Z while Prime95 is running and see what is reported for actual voltage. Some boards are not great when it comes to overclocking Core 2 Quads and crap out when the bus speed goes over 400 MHz. I have an old Asus board that can run a Core 2 Duo reliably at a bus speed of over 500 MHz but falls on its face when trying to overclock a similar Core 2 Quad. I ended up buying a QX9650 for this board so I could overclock by adjusting the multiplier higher which allowed me to keep the bus speed at 333 MHz for better stability.
I just lapped the CPU, which seemed to lower the temps a bit, but again it's hard to tell since the sensors don't seem to work. Testing 3.3 GHz at the moment, using 413 x 8. Several days ago I had it running at 458 x 7, so it doesn't appear to be the board that's holding me back. Using a P5E Deluxe btw. Vdroop seems to be the biggest problem atm. Setting a voltage of 1.3875V in BIOS, which I'm using for 3.3 GHz currently, lets me idle @ 1.368V according to CPU-Z. While running IBT it drops down to 1.304V however, with LLC disabled that is. And I would prefer not to use LLC since it causes my CPU to degrade even more. And since Intel's spec says max voltage shouldn't be more than 1.3625V, I'm hesitant to raise it more, since I'm already slightly above that at idle. So sure, I probably could go higher if I wanted to, unless temps become a problem, which I have no way of knowing.
Dear unclewebb, could you write a Mac OS version of Real Temp? Is that feasible?
How is the 22nm support in the latest Real Temp? I know that it reads the temperature and everything appears to be OK, but myself and many others are quite surprised by the temperatures reported. I've tested it with two 3770K's and a 3570K. On the 3770K's I'm seeing close to 80C with just 1.35V. I'm using chilled water, so the water temp is 10C under load; in other words, I have a 70C delta T between the reported temps and the water block at just 1.35V. On the 3570K I got a load temp that peaked @ 68C (62.5C on average between the 4 cores) with 1.4V, while the 2600K with HT off got an average of 37.75C at the same voltage. Is it really possible that the 22nm chips are 25C hotter on average across all 4 cores compared to the 32nm chips? See my test results below:
Load with Prime95
Attachment 125536
@ 1.1V, IB runs 4.25C hotter than SB with HT off. (16.03%)
@ 1.2V, IB runs 10.75C hotter than SB with HT off. (36.44%)
@ 1.3V, IB runs 15.00C hotter than SB with HT off. (44.11%)
@ 1.4V, IB runs 24.75C hotter than SB with HT off. (65.56%)
Also, the TJ Max is reported as 110C. Where is this taken from, and could it be that the programs are reading the sensors wrong, considering that these are all new chips that are not yet released? (Getting the same temps in Core Temp btw.)
Thanks.
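For background on where a number like that comes from: on the Core i CPUs the sensor doesn't report an absolute temperature. It reports how many degrees it currently sits below TjMax (the digital readout in IA32_THERM_STATUS, MSR 0x19C), and TjMax itself is read from MSR_TEMPERATURE_TARGET (MSR 0x1A2). A rough sketch of that derivation is below; the MSR read is stubbed with example numbers, since real monitoring tools go through a ring-0 driver.
Code:
#include <stdio.h>
#include <stdint.h>

#define IA32_THERM_STATUS       0x19C  /* digital readout in bits 22:16 */
#define MSR_TEMPERATURE_TARGET  0x1A2  /* TjMax in bits 23:16 */

/* Placeholder: real monitoring tools read MSRs through a ring-0
 * driver. The numbers returned here are just example values. */
static uint64_t read_msr(uint32_t msr)
{
    if (msr == MSR_TEMPERATURE_TARGET)
        return (uint64_t)110 << 16;    /* TjMax = 110C, as reported above */
    return (uint64_t)30 << 16;         /* sensor is 30 counts below TjMax */
}

int main(void)
{
    int tjmax   = (int)((read_msr(MSR_TEMPERATURE_TARGET) >> 16) & 0xFF);
    int readout = (int)((read_msr(IA32_THERM_STATUS) >> 16) & 0x7F);

    /* The sensor counts down toward TjMax, so the reported core
     * temperature is TjMax minus the digital readout. */
    printf("TjMax %dC, readout %d -> core temp %dC\n",
           tjmax, readout, tjmax - readout);
    return 0;
}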