Windows 7 X64/ Phenom X6 1090T@4.1/ Crosshair IV Formula 890FX/ Saphire Toxic 5850/ Patriot Torqx SSD 2x 64GB RAID 0, Seagate 1TB storage/ Corsair TX750W/ Haf 932/ DDR3- Muskin
If your post is non-technical in nature and I do not reply most likely you have joined other trolls in my ignore list.
mav, my result with Ripjaws is here:
http://www.xtremesystems.org/forums/...9&postcount=62 I think it's not bad.
AMD FX-8350@5 GHz 1.52V + H100 Corsair, Crosshair V Formula-Z, 2x 4GB GSKill TridentX 2400 MHz+ 10-12-12-25, EAH HD6870 DirectCU, Corsair TX850W
CPUs: next 2x AMD FX-8350, AMD FX-8320, AMD FX-6300, AMD FX-4300, AMD FX-8150, AMD X2 555 BE, AMD x4 965 BE C2 and C3, AMD X4 970 BE, AMD x4 975 BE, AMD x4 980 BE, AMD X6 1090T BE, AMD X6 1100T BE, AMD FX 8120 95W, AMD FX 8120 125W, next AMD FX-8150, AMD A10-6800K, 2x AMD A10-5800, AMD A10-5600K, AMD A8-3850, AMD A8-3870K, 2x AMD A64 3000+, AMD 64+ X2 4600+ EE, Intel i7-980X, Intel i7-2600K, 2x Intel i7-3770K, Intel i7-3930K | AMD Cinebench R10 challenge
Would the Pi series be good, or the RipjawsX?
I want SLI support (through BIOS) even for my M4A89TD PRO/USB3... I am sure it can support it :|.
Puni's _.->*NEW?*<-._ PRIME STABLE OC - CLICK HERE: Cooler Master HAF X NVIDIA EDITION Asus M4A89TD PRO/USB3 + AMD FX-8350 @210x21=4,42Ghz blue voltages (NB/HT @2320Mhz/2730Mhz) + Asus/Zotac GTX580 SLI @851-1702/4204Mhz 1,063V + 8GB of Samsung 30nm DDR3 @1966Mhz 9-9-9-27 1,5V | Caviar Black 3TB SATA III RAID0 | Samsung 2032BW | Win7x64 | 3DMark11 - CLICK HERE | 3DMark Vantage - CLICK HERE
It can, but you won't get it.
Use hacked drivers.
@Raja - Are there any plans to look into reducing the CHV boot times please? As I believe that was supposed to be one of the biggest benefits (other than a GUI) of UEFI.
If there were - I'd buy this board in a heartbeat! - thanks.
2700K@5Ghz | Asrock Z68 X4 G3 | 16Gb Vengeance | C300 | ATI 7970 | TX-850W | 650D | Custom W/C
Before we start, check out JJ’s videos covering various aspects of the ASUS Crosshair V Formula and the ASUS 9** motherboard series:
Same drill as our Intel P67/Z68 guide: we're going to list the important overclocking-related BIOS functions and provide concise descriptive text next to each to demystify their meaning and conditions of use.
Upon entering UEFI BIOS, you’ll need to head over to the AI Tweaker section to embark on any overclocking:
AMD’s 9 series chipset gets UEFI BIOS
Load Extreme OC Profile: Contains pre-set BIOS profiles to help overclock the system if you don't wish to experiment with settings yourself. The profiles are generally fine for normal use, though they may set certain voltages a little higher than absolutely necessary simply to ensure that they work with a wide range of processors and memory. If you don't plan on spending copious amounts of time in the BIOS fiddling with various parameters, these profiles will help you overclock the system with minimal fuss.
OC Tuner: Our automated routine that overclocks a system based upon cooling and components. When selected the system will run a series of tests during system boot (do not be alarmed if your system reboots a few times after selecting this setting and saving and exiting BIOS – that’s normal). After the procedure is complete, you may wish to enter the operating system and run your preferred stability tests to confirm stable operation. Do bear in mind though that for compatibility purposes OC Tuner will use memory module JEDEC timing values.
CPU Ratio: Sets the CPU core multiplier ratio, which is multiplied by the “CPU Bus/PEG Frequency” setting to obtain CPU core frequency. The current and target processor speeds are shown in the top-left of the AI Tweaker menu.
AMD Turbo Core Technology: Sets the Turbo multiplier ratio. Turbo Core Technology allows the processor to ramp its core frequency to a higher level during software loading provided that the thermal design power ratings are not breached (by default).
This setting can be used to override the default Turbo Core multiplier ratio (TDP permitting). The multiplier values available for this function depend upon the processor used. If using a manual setting that is higher than the default Turbo ratio, care must be taken to ensure that processor core voltage is adequate to sustain Turbo Core frequencies. *To be updated at a later date*
CPU Bus/PEG Frequency: Sets the reference clock frequency from which the processor, memory, memory controller and the HT bus are derived. Adjusting this value allows granular control over these bus frequencies which can then be offset using multiplier control of each corresponding bus to ensure that the functional limitations of a bus are not breached while overclocking the system.
PCIe Frequency: Directly sets the PCIe bus operating frequency. Manipulation of this setting is not required for most overclocking.
Memory Frequency: Sets the memory bus multiplier ratio and is used in conjunction with the value entered in the CPU Bus/PEG Frequency setting to obtain the memory operating frequency. The range of available multiplier ratios depends upon the processor used, while usage is dependent on the operational limitations of the processor and memory modules.
CPU/NB Frequency: Sets the multiplier ratio of the integrated memory controller (on the processor die) and is used in conjunction with the value entered in the CPU Bus/PEG Frequency setting to obtain the memory controller operating frequency. The range of available multiplier ratios depends upon the processor used, while usage is dependent on the operational limitations of the processor.
HT Link Speed: Sets the multiplier ratio of the HT bus and is used in conjunction with the value entered in the CPU Bus/PEG Frequency setting to obtain the HT link operating frequency. The range of available multiplier ratios depends upon the processor used (and which CPU/NB Frequency ratio is used), while usage is dependent on the operational limitations of the processor and motherboard.
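The bus relationships described above can be sketched as a quick calculator. This is only an illustration of how the multipliers combine with the reference clock; the multiplier values in the example are hypothetical, not recommendations:

```python
def derived_frequencies(ref_clk_mhz, cpu_ratio, mem_ratio, cpu_nb_ratio, ht_ratio):
    """Return the bus frequencies (in MHz) derived from the reference clock."""
    return {
        "cpu_core": ref_clk_mhz * cpu_ratio,          # CPU Ratio x refclock
        "memory_effective": ref_clk_mhz * mem_ratio,  # effective DDR data rate
        "cpu_nb": ref_clk_mhz * cpu_nb_ratio,         # integrated memory controller
        "ht_link": ref_clk_mhz * ht_ratio,            # HyperTransport link
    }

# Stock-like example: 200 MHz refclock, 16x CPU (3.2 GHz), 8x memory
# (DDR3-1600), 10x CPU/NB (2000 MHz), 13x HT (2600 MHz)
print(derived_frequencies(200, 16, 8, 10, 13))
```

Raising the reference clock raises every derived bus at once, which is why the multipliers of the memory, CPU/NB and HT buses usually need lowering to keep each within its limits while overclocking via the reference clock.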
CPU Spread Spectrum: Modulates the processor clock to reduce radiated noise emissions – disable for overclocking as clock modulation reduces logic sampling margins.
PCIe Spread Spectrum: Modulates the PCIe clock to reduce radiated noise emissions – disable for overclocking as clock modulation reduces logic sampling margins.
EPU Power Saving Mode: Sets the load-dependent phase switching conditions; the VRM is made up of multiple phases (each phase has at least two FETs). During light-load conditions FETs can be switched off to save power – setting EPU to Enabled allows this to happen. If EPU is disabled, all phases remain on regardless of system loading.
DRAM Timing Control: Takes us to the DRAM timing sub-menu, where primary, secondary and tertiary memory timings can be set.
These timings will automatically be offset according to memory module SPD and memory frequency. Should you wish to experiment with various timings, the primary settings are the most important for overall memory performance. Most timings are set in DRAM clock cycles, hence a lower value results in a more aggressive setting (unless otherwise stated).
As always, performance increases from memory tuning are marginal and are generally only noticeable during synthetic benchmarks. Either way, voltage adjustments to VDIMM, CPU/NB voltage and to a lesser extent CPU Core Voltage may be necessary to facilitate tighter timings.
DRAM CAS Latency: Column Address Strobe defines the time it takes for data to be ready for burst after a read command is issued. As CAS factors in every read transaction, it is the most important timing in relation to memory read performance.
To calculate the actual time period denoted by the number of clock cycles set for CAS, we can use the following formula:
tCAS in nanoseconds = (CAS * 2000) / Memory Frequency (the effective DDR data rate in MHz)
This same formula can be applied to all memory timings that are set in DRAM clock cycles.
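The formula above can be wrapped into a small helper for comparing timings at different speeds (a quick sketch, assuming "Memory Frequency" means the effective DDR data rate):

```python
def timing_ns(clocks, ddr_mhz):
    """Convert a timing set in DRAM clocks to nanoseconds.
    ddr_mhz is the effective DDR data rate (e.g. 1600 for DDR3-1600);
    one DRAM clock period is 2000 / ddr_mhz nanoseconds."""
    return clocks * 2000 / ddr_mhz

# CAS 9 at DDR3-1866 vs CAS 7 at DDR3-1600: similar absolute latency.
print(round(timing_ns(9, 1866), 2))  # 9.65 ns
print(round(timing_ns(7, 1600), 2))  # 8.75 ns
```

This illustrates why a higher CAS number at a higher memory frequency is not necessarily slower in absolute terms.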
DRAM RAS TO CAS Latency: Also known as tRCD. Defines the time it takes to complete a row access after an activate command is issued to a rank of memory. This timing is of secondary importance behind CAS as memory is divided into rows and columns (each row contains 1024 column addresses). Once a row has been accessed, multiple CAS requests can be sent to the row to read or write data. While a row is "open" it is referred to as an open page. Up to eight pages can be open at any one time on a rank (a rank is one side of a memory module) of memory.
DRAM RAS# PRE Time: Also known as tRP. Defines the number of DRAM clock cycles it takes to precharge a row after a page close command is issued in preparation for the next row access to the same physical bank. As multiple pages can be open on a rank before a page close command is issued the impact of tRP towards memory performance is not as prevalent as CAS or tRCD - although the impact does increase if multiple page open and close requests are sent to the same memory IC and to a lesser extent rank (there are 8 physical ICs per rank and only one page can be open per IC at a time, making up the total of 8 open pages per rank simultaneously).
DRAM RAS Active Time: Also known as tRAS. This setting defines the number of DRAM cycles that elapse before a precharge command can be issued. The minimum clock cycles tRAS should be set to is the sum of CAS+tRCD+tRTP.
DRAM READ to PRE Time: Also known as tRTP. Specifies the spacing between the issuing of a read command and tRP (precharge) when a read is followed by a page close request. The minimum possible spacing is limited by DDR3 burst length which is 4 DRAM clocks. Most 2GB memory modules will operate fine with a setting of 4~6 clocks up to speeds of DDR3-1866 (depending upon the number of DIMMs used in tandem). High performance 4GB DIMMs (DDR3-2000+) can handle a setting of 5 clocks provided you are running 8GB of memory in total and that the processor memory controller is capable. If running more than 8GB expect to relax tRTP as memory frequency is increased.
DRAM RAS to RAS Delay: Also known as tRRD (activate to activate delay). Specifies the number of DRAM clock cycles between consecutive Activate (ACT) commands to different banks of memory on the same physical rank. The minimum spacing allowed at the chipset level is 4 DRAM clocks. A setting of 5 clocks and upwards may be necessary to achieve stability at speeds over DDR3-1866.
DRAM Write to Read Delay: Also known as tWTR. Sets the number of DRAM clocks to wait before issuing a read command after a write command. The minimum spacing is 4 clocks. As with tRTP this value may need to be increased according to memory density and memory frequency.
DRAM CAS Write Latency: Also known as CWL. Sets the column write latency timing for write operations to DRAM. For absolute stability the minimum value should be set equal to read CAS, as the timing constraints of accessing a column are the same, although some modules can handle a setting of read CAS -1 or read CAS -2 depending upon memory frequency. This timing is just as important as read CAS because data has to be written to DIMMs in order to be read.
DRAM Write Recovery Time: Defines the number of clock cycles that must elapse between a memory write operation and a Precharge command. Most DRAM configurations will operate with a setting of 10 clocks up to DDR3-1866. After that, relaxing to 12+ clocks may be necessary at DDR3-2000+.
DRAM Ref Cycle Time: Also known as tRFC. Specifies the number of DRAM clocks that must elapse before a command can be issued to the DIMMs after a DRAM cell refresh.
DRAM Row Cycle Time: Also known as tRC. Stipulates the number of DRAM clocks that must elapse before another Activate Command (row select) to the same bank. The minimum spacing is tRAS+tRP. Setting a higher value may aid stability somewhat at the chance of a very small performance hit.
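The two minimums stated above (tRAS no lower than CAS+tRCD+tRTP, and tRC no lower than tRAS+tRP) can be sanity-checked before rebooting. A small illustrative helper; the example timings are hypothetical:

```python
def check_row_timings(cas, trcd, trtp, tras, trp, trc):
    """Return a list of violations of the guide's stated timing minimums."""
    problems = []
    if tras < cas + trcd + trtp:
        problems.append(f"tRAS {tras} < CAS+tRCD+tRTP = {cas + trcd + trtp}")
    if trc < tras + trp:
        problems.append(f"tRC {trc} < tRAS+tRP = {tras + trp}")
    return problems

# Example: 8-8-8-24 with tRTP 4 and tRC 32 passes both checks.
print(check_row_timings(cas=8, trcd=8, trtp=4, tras=24, trp=8, trc=32))  # []
```

Running timings below these sums rarely gains performance and usually costs stability, so checking the arithmetic first saves failed POST attempts.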
DRAM READ to WRITE Delay: Sets the read to write delay timing where the write follows a read on the same rank. A setting of 4 clocks should suffice for most configurations, although some DIMMs may need a higher setting to aid stability as memory frequency is increased past DDR3-1866 at the expense of performance.
DRAM WRITE to READ Delay (DD): Sets the delay period between a write command that is followed by a read command; where the read command requires the access of data from a different DIMM. A value of 1 clock is possible on high performance memory. For higher density modules this value may need relaxing to 2~4 clocks as memory frequency is increased.
DRAM WRITE to WRITE Delay: Sets the delay between two consecutive write commands. The BIOS does not stipulate if this is a different rank or different DIMM timing. A setting of 4 clocks works with most configurations, but may need relaxing to 5~7 clocks if 16GB memory configurations are used or if 8GB configurations are used at speeds in excess of DDR3-2000.
DRAM READ to READ Delay: Sets the delay between two consecutive read commands. The BIOS does not stipulate if this is a different rank or different DIMM timing. A setting of 4 clocks works with most configurations, but may need relaxing to 5~7 clocks if 16GB memory configurations are used or if 8GB configurations are used at speeds in excess of DDR3-2000.
DRAM Refresh Rate: Also known as tREFI. Sets the delay period before a DRAM refresh command is issued to all ranks. A higher number is more aggressive as it sets a longer delay period between refresh commands. 4GB configurations should operate fine with a setting of 7.8us, 8 and 16GB configurations may need a setting of 3.9us if overclocking past DDR3-2000 and DDR3-1600 respectively.
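Since tREFI is entered in microseconds while the DRAM operates in clock cycles, it can help to see how many clocks elapse between refresh commands at a given speed. A quick conversion sketch, assuming the DRAM command clock is half the effective DDR data rate:

```python
def trefi_clocks(trefi_us, ddr_mhz):
    """Convert a tREFI interval in microseconds to DRAM command clocks.
    The DRAM command clock runs at half the DDR data rate."""
    clock_mhz = ddr_mhz / 2
    return round(trefi_us * clock_mhz)

print(trefi_clocks(7.8, 1600))  # 6240 clocks between refresh commands
print(trefi_clocks(3.9, 2000))  # 3900 clocks
```

Halving tREFI to 3.9us doubles how often the modules are refreshed, which costs a little bandwidth but gives the cells less time to leak charge – hence the recommendation for dense or heavily overclocked configurations.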
DRAM Command Rate: Specifies the number of DRAM clock cycles that elapse between issuing commands to the DIMMs after a chip select. The impact of Command Rate on performance can vary. For example, if most of the data requested by the CPU is in the same row, the impact of Command Rate becomes negligible. If however the banks in a rank have no open pages, and multiple banks need to be opened on that rank or across ranks, the impact of Command Rate increases.
Most DRAM module densities will operate fine with a 1N Command Rate. Memory modules containing older DRAM IC types may however need a 2N Command Rate.
DRAM Driving Control: Takes us to the Drive Strength sub-menu (shown later in the guide).
GPU.DIMM Post: Opens a sub-page which shows how many GPUs are inserted into the PCIe slots and how many DIMMs are being used. If there are any peripheral issues or system bus speeds have been increased too far, one of the GPUs or memory modules may not show in the GPU.DIMM Post screen – a quick glance here can save a lot of head-scratching. On this platform, however, checking the Performance tab of Task Manager to see if all DIMMs are mapped is also advised when pushing DRAM past DDR3-2000.
Digi+ VRM/POWER Control: Takes us to the power control sub-menu (shown later in the guide).
Drive strengths shown are for Corsair Dominator GT DDR3-2200 8-8-8-24 Rev 2.1 (Elpida Hyper)
These settings only need adjusting at high memory operating frequencies and/or for benchmarking purposes when chasing every last MHz. Not all memory modules will respond to drive strength adjustments as there are sometimes issues related to timing mismatches that cannot be corrected by changing drive strengths.
Signal line reflections due to impedance mismatches can cause instability at high operating frequencies. Drive strength setting manipulation increases or decreases buffer output current on the associated signal lines and may help increase bus clocking margins when setup correctly.
It is imperative that we change only one setting at a time (for both channels), monitor for changes and then make further adjustments. Do not change any other overclocking related settings or voltages in unison with drive strength adjustments. Voltage and bus adjustments should only be made after assessing the impact of a change to any of the parameters in the drive strength setting menu.
Use HCI Memtest first to assess changes in failure times to gauge the impact on stability before moving on to Prime95 or Linpack. Time spent on Memtest will reduce drive strength change requirements during higher levels of bus IO. Once Memtest is stable, a change of only one~two steps may be needed to obtain full memory module and memory controller stability.
CKE drive strength: Sets the drive strength for clock enable signals. This line doesn’t transition as much as the other lines, however, that is not to say that this setting is not important. GSkill Flare 4GB DIMMs prefer a manual setting of 1.5X, while Corsair 2.1 revision Hyper prefer 1X. An incorrect value for CKE Drive Strength will result in a non-POST.
CS/ODT: Chip Select/On Die Termination drive strength, sets the output buffer impedance for chip select signal lines and termination drive strength. 4GB DIMMs may benefit from a setting of 1.5X over DDR3-2000. For Elpida Hyper try 1X.
ADDR/CMD Drive Strength: Sets the output buffer impedance for Address and Command signal lines. A setting of 1.25X is preferred by GSkill Flare DDR3-2000 4GB modules. For Elpida Hyper based modules try 1X.
MEMCLK Drive Strength: A setting of 1.5X suits most modules and is the default setting – change only after experimenting with all other drive strength settings first.
DATA Drive Strength: Sets the buffer drive strength for the DQ lines. A setting of 1.25X is preferred by GSkill Flare 4GB modules. For Elpida Hyper try 1X.
DQS Drive Strength: Sets the buffer drive strength of the DQS lines. At high memory frequency try a setting of 0.75X on GSkill Flare and Elpida Hyper. Other Elpida die favor 1X~1.25X.
Processor ODT: Sets on die termination resistance of the processor transceiver stages – can be left on AUTO under all normal operating conditions.
Last edited by Raja@ASUS; 06-12-2011 at 07:23 AM.
CPU Load Line Calibration: The "Regular" option sets a margin between the user-set voltage and the actual voltage to ensure that the real-time voltage level does not breach (overshoot) the set VID for longer than AMD specifications allow.
Medium and High set a tighter margin between the idle and full-load voltage, so that idle voltage does not need to be ramped excessively to meet full-load voltage requirements when the processor is overclocked.
Ultra-High and Extreme may over-volt past what you've set in BIOS in an attempt to ensure that the voltage does not sag below the applied voltage for a long duration (when the VRM is faced with a heavy load). Auto is currently configured to default to Extreme LLC, so it's a good idea to set Medium or High manually if you do not want any visible over-voltage.
We prefer to use the Medium setting for most overclocking as it seems to complement the transient response of the VRM (Vdroop on the Medium setting is around 0.05V on the current BIOSes).
CPU/NB Load Line Calibration: As above but for CPU/NB Voltage.
CPU Voltage Over-Current Protection: Extends the current trip threshold before the CPU VRM will shut-down. Increase to facilitate overclocking (higher current draw).
CPU/NB Voltage Over-Current Protection: Extends the current trip threshold before the CPU VRM will shut down. Increase to facilitate overclocking (higher current draw).
CPU PWM Phase Control: Sets phase shut-off parameters for power saving. “Standard” and “Optimized” are adequate for most loading conditions. If pushing processors past 4GHz, then Extreme or “Manual” with “Ultra-Fast” is recommended.
VRM Over Temperature Protection: Sets a thermal limit that will shut the CPU VRM off should FET temperatures go too high.
CPU Voltage Frequency: Sets the switching frequency of the power FETs supplying processor Vcore. Lower switching frequencies lead to a higher VRM efficiency (small power saving) and lower VRM operating temperatures. Setting a higher switching frequency aids transient response (the recovery of voltage to the applied level after a load condition) – at the expense of heat.
CPU PWM Mode: Sets the conditions for load balancing across phases. “T.Probe” monitors phase thermal conditions and balances load accordingly. “Extreme” balances the current load across all FETs irrespective of thermal conditions.
Extreme OV: Enables higher voltage selection scales in UEFI BIOS – typically, for use with sub-zero processor cooling.
CPU & CPU/NB Voltage Mode: Manual Mode allows us to set a “static” value for CPU and CPU/NB voltage respectively. Offset mode, allows us to offset the base voltage by either subtracting or adding voltage to the base value.
CPU Manual Voltage: Adjust as necessary to facilitate overclocking/underclocking.
CPU/NB Manual Voltage: Adjust as required when overclocking memory or memory controller frequency. As a general rule, try adjusting drive strengths before adjusting CPU/NB Voltage, as correcting signal integrity issues is always preferred over the brute force approach of increasing voltage.
CPU VDDA Voltage: This rail does not normally need adjustment unless running very high bus clocks – up to 2.6V or so can help.
DRAM Voltage: Sets memory voltage – adjust according to DRAM timing requirements.
VDDR: Most of the time this voltage can be left at default. Voltage adjustments to this rail can help stabilize clocks but can also induce instability. This rail should be adjusted as a last resort only.
DRAM VREF DQ: Sets the DRAM DQ reference voltage which is generally 50% of VDDQ. Changes are only needed when running very high memory clocks or when processors are sub-zero-cooled. Don’t stray far from 50%.
DRAM VREF CA: Sets the DRAM Command and Address line voltage reference, again base is 50% of (VDD). For the most part VREF CA can be adjusted in tandem with VREF DQ, although the Command and Address lines are less prone to issues due to fewer transitions than the DQ lines.
DRAM VREF CA on CPU: Adjusting this reference voltage can help increase stability during stress tests - if rounding errors are reported above or below the expected numerical value (Prime95, Super Pi). If the rounding error shows a value higher than expected was returned from DRAM, then increase the Vref to 50.5% and see if it helps. The same principle can be used to lower Vref if the value returned is lower than the expected value.
The following pictures show how reference voltages interact with logic sampling:
Adjustment is recommended only after CPU voltage, DRAM voltage, CPU/NB voltage, memory timings and drive strength settings have been optimised.
DRAM Voltage Switching Frequency: Sets the on/off switching frequency of the DRAM VRM. A setting of 2X provides a faster transient response (recovery from load conditions) at the expense of power consumption and heat. For overclocking past DDR3-2000, or if using 4GB DIMMs at speeds higher than DDR3-1600, a setting of 2X is advised.
DRAM Over-Current Protection: Sets a trip threshold to shut down the VRM if excessive current is drawn from the VDIMM VRM. Set to disabled if overclocking with 4GB DIMMs at speeds over DDR3-2000.
NB Voltage: Sets the voltage for the external Northbridge (on the motherboard). For most overclocking this voltage can be left on AUTO. Increase only as a last resort when no other setting helps improve 3D benchmark/game stability.
NB HT Voltage: Sets Northbridge HT IO voltage. For maximum HT speeds (board and processor dependent), a setting of 1.275~1.30V is advised as a starting point.
NB 1.8V Voltage: Change only as a last ditch attempt to stabilise high HT bus clocks.
VDD PCIE: Sets PCIe IO voltage. Can be left at stock for most configurations. If running multiple graphics cards that are heavily overclocked then increasing voltage to this rail slightly may help stability.
SB Voltage: Sets Southbridge voltage – we leave this voltage at stock, and have not yet seen a need to increase voltage to this rail (sub-zero benchmarking at high bus clocks may require a slight voltage bump to this rail if any corresponding IO voltages on the board are ramped excessively).
NB Voltage Switching Frequency: Sets the on/off switching frequency of the Northbridge VRM. A setting of 2X provides a faster transient response (recovery from load conditions) at the expense of power consumption and heat.
NB 1.8V Voltage Switching Frequency: Sets the FET on/off switching frequency of the Northbridge VRM. A setting of 2X provides a faster transient response (recovery from load conditions) at the expense of power consumption and heat.
VDD PCIe Switching Frequency: Sets the FET on/off switching frequency of the PCIe VDD VRM. A setting of 2X provides a faster transient response (recovery from load conditions) at the expense of power consumption and heat.
Components used to help test for this guide:
AMD 1100T Thuban CPU
Corsair HX1200 PSU
Corsair H70 CPU Cooler
Corsair Dominator GT DDR3-2200 8-8-8-24 4GB rev 2.1 Memory Kit
GSkill Flare 8-9-8-24 DDR3-2000 8GB Memory Kit
Big thanks to Corsair and GSkill!
Okay, I just got my Asus Crosshair V Formula board in yesterday and have a question: am I supposed to be using the 8-pin connector, the 4-pin, or both? I was having issues after installing the motherboard – mostly freezing, with the occasional blue screen or black-screen hang while trying to POST – and I initially plugged in both the 8-pin and 4-pin power connectors. Did I mess up my motherboard, or did I end up getting a defective one?
That being said, I seriously doubt you could hurt anything by connecting it regardless of the GPU config.
I'd reseat and double-check everything first, then make sure it runs stable at stock speeds before I'd assume the board was bad...
It can happen, but it's usually some silly user error in the end.
AMD FX-8350 (1237 PGN) | Asus Crosshair V Formula (bios 1703) | G.Skill 2133 CL9 @ 2230 9-11-10 | Sapphire HD 6870 | Samsung 830 128Gb SSD / 2 WD 1Tb Black SATA3 storage | Corsair TX750 PSU
Watercooled ST 120.3 & TC 120.1 / MCP35X XSPC Top / Apogee HD Block | WIN7 64 Bit HP | Corsair 800D Obsidian Case
First Computer: Commodore Vic 20 (circa 1981).
Thanks for all the info in this thread! Much appreciated and is very helpful.
I have tested the board in 8-pin only and 4+8-pin configs. If you have a 4-pin as well it can't hurt to use both; however, under normal operating conditions (air/water) the 8-pin should be sufficient with currently available hardware.
I do recall someone saying it only feeds IMC power; I think that may be a tad inaccurate... I could, however, be wrong.
For every beginning there must be an end
Will be around to help with AMD specific hardware till shortly after BD launches, after that I'm Ghost
lots of good info here, thanks raja
LEO!!!! amd phenom II x6 1100T | gigabyte 990fxa-ud3 . . 2x2gb g.skill 2133c8 | 128gb g.skill falcon ssd sapphire ati 5850 | x-fi xtrememusic. . . samsung f4 2tb | samsung dvdrw . . corsair tx850w | windows 7 64-bit. ddc3.25 xspc restop | ek ltx | mc-tdx | BIP . . lycosa-g9-z2300 | 26" 1920x1200 lcd .
I have been tilting towards the Giga UD3, but I must say, you have tilted me the other way. Of course I have to see a bit more before deciding, but as it looks right now, the CV Formula is the board I want!
Gigabyte 890gpa-ud3h v2.1
HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 18.104.22.168.41 1T 110ns 1.605v
C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
Dell U2412m, G110, G9x, Razer Scarab
I have the CHV and it periodically loses my 60GB OCZ Agility 3 SSD during a soft reset. I have to power the computer off and on so the board finds the SSD again. Anyone have any ideas?
As quoted by LowRun......"So, we are one week past AMD's worst case scenario for BD's availability but they don't feel like communicating about the delay, I suppose AMD must be removed from the reliable sources list for AMD's products launch dates"