FZ1
02-10-2006, 07:29 AM
I have something interesting happening...I was running for weeks at the settings below:
FSB - 317
LDT - x3
CPU/FSB Ratio - x9
Vcore - 1.450 x 110%
LDTv - 1.30
CHPv - 1.6
vDIMM - 2.6
DRAM Frequency Set - 166 (DRAM/FSB:5/06)
Command Per Clock (CPC) - Enable
CAS Latency Control (Tcl) - 3.0
RAS# to CAS# delay (Trcd) - 04 Bus Clocks
Min RAS# active time (Tras) - 08 Bus Clocks
Row precharge time (Trp) - 04 Bus Clocks
Row Cycle time (Trc) - 07 Bus Clocks
Row refresh cyc time (Trfc) - 14 Bus Clocks
Row to Row delay (Trrd) - 03 Bus Clocks
Write recovery time (Twr) - 02 Bus Clocks
Write to Read delay (Twtr) - 02 Bus Clocks
Read to Write delay (Trwt) - 03 Bus Clocks
Refresh Period (Tref) - 3120 Cycles
Write CAS Latency (Twcl) - 01
DRAM Bank Interleave - Enabled
DQS Skew Control - Auto
DQS Skew Value - 0
DRAM Drive Strength - Level 7
DRAM Data Drive Strength - Level 3
Max Async Latency - 8.0ns
DRAM Response Time - Normal
Read Preamble Time - 5.0ns
IdleCycle Limit - 256 Cycles
Dynamic Counter - Disable
R/W Queue Bypass - 16 x
Bypass Max - 07 x
32 Byte Granularity - Disable(8 Bursts)
Then I received a new 170 CPU (different stepping), so I popped it in (cleared CMOS, etc.) and did some OC testing and benching. I then removed the new chip and reinstalled the one I had in before. After clearing CMOS again, I had issues booting into Windows at the same settings (above) that I had been running. I had a lot of trouble and, long story short, ended up re-flashing the BIOS. After that I could boot into Windows, but only on a bigger divider. After going through the settings in A64 Tweaker, I figured out that my Max Async Latency and Read Preamble Time had to be raised:
Max Async Latency - 8.0ns -> 10ns
Read Preamble Time - 5.0ns -> 6.5ns
Any idea why this changed? It seems very weird. I was even able to tighten some other timings, but these two won't go any lower without causing issues. Looking at other users' timings HERE (http://www.dfi-street.com/forum/showthread.php?t=41953), everyone is running these lower, and some even at higher bandwidth.
I'm still running @ 259, which is great, but it bothers me that I can't figure out a possible root cause.
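For anyone wondering where the 259 comes from: on the Athlon 64, the on-die memory controller derives the DRAM clock by dividing the CPU clock by an integer, roughly ceil(CPU multiplier x 200 / DRAM setting). A minimal sketch of that arithmetic (the function name is mine, and the formula is the commonly cited approximation, not something from this thread):

```python
import math

def a64_mem_clock(fsb_mhz, cpu_mult, dram_set):
    """Approximate Athlon 64 DRAM clock: the memory controller runs
    at the CPU clock divided by an integer divisor derived from the
    CPU multiplier and the DRAM Frequency Set value."""
    cpu_clk = fsb_mhz * cpu_mult                      # 317 x 9 = 2853 MHz
    divisor = math.ceil(cpu_mult * 200 / dram_set)    # ceil(9 * 200/166) = 11
    return cpu_clk / divisor

print(round(a64_mem_clock(317, 9, 166), 1))  # -> 259.4
```

So with FSB 317, a x9 multiplier, and the 166 divider, the RAM lands at about 259 MHz, which matches the figure above.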