OK, I get that. But what I'd taken from the discussion up to a few pages ago was that there's an unknown fudge around TjMax at the top end, where the sensors are supposed to be more accurate, and then a bad slope with increasing error down to the low end, where the sensors are definitely not accurate and the readings are extremely unreliable. The last few pages, though, seem to have provided a clear way to calibrate that low end very accurately: confirm the underclock state and have the user enter their cooling type and ambient temp. So at least one end of the graph, previously the worst end, can now be more or less dead accurate. Am I wrong here?
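For what it's worth, that low-end correction boils down to a one-line subtraction once the user supplies ambient and cooler type. A minimal sketch of the idea in C (the per-cooler idle deltas below are placeholders I made up for illustration, not rge's measured figures):

```c
/* Low-end calibration sketch: at a confirmed underclocked idle, the core
 * should sit only a few degrees above ambient.  The deltas below are
 * placeholder guesses, NOT measured values -- substitute whatever numbers
 * the thread settles on for each cooler class. */
#include <stdio.h>

enum cooler { STOCK_AIR, GOOD_AIR, WATER };

static double idle_delta(enum cooler c)
{
    switch (c) {
    case STOCK_AIR: return 8.0;   /* placeholder: idle rise over ambient, C */
    case GOOD_AIR:  return 5.0;   /* placeholder */
    case WATER:     return 3.0;   /* placeholder */
    }
    return 5.0;
}

int main(void)
{
    double ambient       = 22.0;  /* user-entered room temp, C */
    double reported_idle = 38.0;  /* what the DTS-based tool shows at underclocked idle */
    enum cooler c = GOOD_AIR;

    double assumed_true  = ambient + idle_delta(c);
    double low_end_error = reported_idle - assumed_true;

    printf("assumed true idle temp: %.1f C\n", assumed_true);
    printf("low-end sensor error:   %+.1f C\n", low_end_error);
    return 0;
}
```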
Calibrating the high end is going to be impossible without some form of active temp measurement, I agree, along with a routine that can stress the processor until it throttles at a known distance from TjMax. But at least that end is supposed to be more accurate. If the DTS is officially based on TjMax, now published, does this new figure of TjTarget even matter? Ironically it seems this relationship to TjMax is being lost in the pursuit of this new obfuscation from Intel...
Here's a thought - given rge's testing of the gradients from core to IHS to heatsink, suppose someone were to market a cheap piece of kit: effectively a really BAD (or modded cheap) heatsink with a thermal sensor mounted optimally in the base, linked to a simple thermometer gizmo that records the max temp measured. Combine that with a stressing routine that breaks the moment PROCHOT asserts, and you have a known DTS reading at that point and a known IHS temp at that point on a given chip. Wouldn't this fix the high end of the graph effectively, without requiring ANY of these speculative and unreliable documented temp specifications, INCLUDING TjMax?
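The software half of that already exists in the silicon: the DTS readout and the PROCHOT flag both live in IA32_THERM_STATUS (MSR 0x19C), so the stress routine just has to poll that register and stop the instant bit 0 goes high. A rough sketch of the polling side, assuming a Linux box with the msr driver loaded (a Windows tool would go through a kernel driver instead, but the register decoding is the same):

```c
/* Poll IA32_THERM_STATUS (MSR 0x19C) while a separate stress load runs,
 * and report the DTS reading at the moment PROCHOT asserts.
 * Assumes Linux with the msr module loaded (/dev/cpu/0/msr readable as root). */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define IA32_THERM_STATUS 0x19C

static int read_msr(int fd, uint32_t reg, uint64_t *val)
{
    return pread(fd, val, sizeof(*val), reg) == sizeof(*val) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open msr"); return 1; }

    for (;;) {
        uint64_t v;
        if (read_msr(fd, IA32_THERM_STATUS, &v) < 0) break;

        unsigned dts     = (v >> 16) & 0x7F;  /* digital readout: degrees below TjMax */
        unsigned valid   = (v >> 31) & 1;     /* reading-valid bit */
        unsigned prochot =  v        & 1;     /* bit 0: PROCHOT asserted right now */

        if (valid && prochot) {
            printf("PROCHOT asserted at DTS = %u below TjMax\n", dts);
            printf("note the IHS sensor temp now and you have the high-end point\n");
            break;
        }
        usleep(10000);  /* 10 ms poll interval */
    }
    close(fd);
    return 0;
}
```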
For any really avid overclocker (looks around at a captive and hungry market), this should be fairly simple to use to calibrate a given chip on a once-only basis, and it's obviously reusable on newer sockets/processors with a slight software upgrade and simple mounts (heck, make it compatible with the TRUE and 90% of the users here wouldn't even need any new mounts), and it should be dirt cheap to make and still be pretty accurate for this purpose. Given what some people here are prepared to waste, um, "invest", on graphics cards and SSDs, I can see a definite opportunity for someone with an engineering interest...
EDIT: Just wanted to add that the current method of calibration at the low end (underclock and idle) works without hardware, but isn't very useful for "normal" use as that point on the graph is way below "normal" usage, which is rarely going to be massively underclocked and idle for most people here. If you had such hardware, it would be simple to have a piece of software that confirmed system idle at normal clocks and got a DTS reading/temp for that, then did the same for a 50-60% system load, then did the full stress routine to throttling. You'd then have three confirmed points on the graph to make a really accurate DTS/temp curve for each core covering the likely usage range, which might be importable into RealTemp somehow...
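If someone does build the hardware, the software end of that three-point idea is trivial: fit a line through the (DTS reading, measured temp) pairs and you have a per-core correction curve. A quick sketch of the fit, with invented sample points purely to show the math:

```c
/* Three-point DTS calibration sketch: least-squares line through
 * (DTS reading, measured temp) pairs taken at idle, ~50-60% load,
 * and the throttle point.  The sample numbers below are invented
 * only to demonstrate the arithmetic. */
#include <stdio.h>

int main(void)
{
    /* x = DTS digital readout (degrees below TjMax), y = externally measured temp */
    double x[3] = { 55.0, 35.0,  3.0 };   /* invented: idle, mid load, PROCHOT */
    double y[3] = { 38.0, 58.0, 92.0 };   /* invented: temps from the sensor kit */

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < 3; i++) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double n = 3.0;
    double slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;

    /* calibrated temp for any DTS reading on this core */
    printf("temp(dts) = %.3f * dts + %.2f\n", slope, intercept);
    printf("e.g. dts = 20 -> %.1f C\n", slope * 20.0 + intercept);
    return 0;
}
```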