
Thread: Official HD 2900 Discussion Thread

  1. #926
    Xtreme Enthusiast
    Join Date
    Jan 2006
    Posts
    569
    Spent all last night building two PCs and today will be spent overclocking/stress testing; sorry, no watercooling anymore, and tomorrow is Mother's Day... Will have benchies on Monday, just like everybody else, hehe - patience, my young padawans!
    Intel Core i7 920 @ 3.8GHz (183x21)
    Gigabyte EX58-DS4 BIOS F5
    3GB PATRIOT PC3-10666 DDR3
    Sapphire Radeon HD4870 512MB BLACK
    2x500GB SEAGATE SATA-II 7200.11
    OCZ GameXstream 750W PSU
    Antec Three Hundred Chassis

  2. #927
    Xtreme Addict
    Join Date
    Aug 2006
    Location
    The Netherlands, Friesland
    Posts
    2,244
    Quote Originally Posted by w0mbat View Post
    Windows Vista DirectX 10 SDK

    PIPEGS

    2900XT 159 fps
    88GTS 34 fps
    88GTX 63 fps

    CubemapGS, Car, Instancing

    2900XT 23 fps
    88GTS 9 fps
    88GTX 11 fps

    Cubemap, Car, Instancing

    2900XT 18 fps
    88GTS 10 fps
    88GTX 11 fps
    ROFLMAO, I think this confirms that the G80 is more DX9-based and the R600 more DX10-based.
    I'm a bit surprised that the G80 gets beaten by so much.
    Last edited by ownage; 05-12-2007 at 09:13 AM.
    >i5-3570K
    >Asrock Z77E-ITX Wifi
    >Asus GTX 670 Mini
    >Cooltek Coolcube Black
    >CM Silent Pro M700
    >Crucial M4 128Gb Msata
    >Cooler Master Seidon 120M
    Hell yes, it's a mini-ITX gaming rig!

  3. #928
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    So it's more than twice as fast as the GTX in DX10?
    Cool =)
    If this is correct then I don't think the slower performance in DX9 matters, because it's fast enough to handle those games.
    Maybe that's why the 8800 hit the street so fast: they just didn't care about DX10 because they wanted to sell as many cards as possible before the shift to DX10.

    Just what I'm thinking; I might be wrong.
    Last edited by Ubermann; 05-12-2007 at 09:18 AM.
    Everything extra is bad!

  4. #929
    Xtreme Addict
    Join Date
    Nov 2005
    Location
    UK
    Posts
    1,074
    Quote Originally Posted by Baron View Post
    On Scan.co.uk they have X2900 in their graphics card section.

    http://www.scan.co.uk/Products/Produ...=160&OrderBy=4




    Though there are none up yet.
    Seems to have gone.

    i7| EX58-EXTREME | SSD M225 | Radbox | 5870CF + 9600GT

  5. #930
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by Ubermann View Post
    So it's more than twice as fast as the GTX in DX10?
    Cool =)
    If this is correct then I don't think the slower performance in DX9 matters, because it's fast enough to handle those games.
    Maybe that's why the 8800 hit the street so fast: they just didn't care about DX10 because they wanted to sell as many cards as possible before the shift to DX10.

    Just what I'm thinking; I might be wrong.
    Let's just see how games fare in DX10. It's funny: when there are strange scores in the R600 reviews everyone says DRIVER, DRIVER! And here nVidia is having problems, and everyone goes: Told you so! G80 sucks in DX10! j/k

    Just wait till the 15th; the Lost Planet DX10 demo will be here, and it's no doubt already in some of the R600 reviews.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  6. #931
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Let's just see how games fare in DX10. It's funny: when there are strange scores in the R600 reviews everyone says DRIVER, DRIVER! And here nVidia is having problems, and everyone goes: Told you so! G80 sucks in DX10! j/k
    Maybe because the R600 has 320 stream processors
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  7. #932
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by alexio View Post
    Maybe because the R600 has 320 stream processors
    Which doesn't mean anything at all; afaik they can't be compared directly to nVidia's, but at least it should show some benefit in DX9, and it clearly does not.

    I mean, I really really really want super performance in DX10.....but I don't think it will happen. Pessimistic maybe, but at least then I won't feel too bad if it turns out to be not so good in DX10 as well.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  8. #933
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Which doesn't mean anything at all; afaik they can't be compared directly to nVidia's, but at least it should show some benefit in DX9, and it clearly does not.
    Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine how much tweaking per game / per graphics setting can be achieved. I for one am convinced that just changing the name of a game's executable to 3DMark06.exe (for example) changes the performance of the card in that particular game.

    I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back it up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster then ATI would have chosen the simpler 128*1 design.
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  9. #934
    Xtreme Addict
    Join Date
    May 2003
    Location
    Hopatcong, NJ
    Posts
    1,078
    What's important to me is the present. DX10 benches mean nothing when the games are not even out. By the time Crysis and other DX10 games roll out, the R650 on 65nm will be out, which will be cheaper to manufacture and consume less power... so I don't exactly see the point in jumping the gun and spending on an R600 which allegedly has awesome DX10 performance. DX9-wise, it's not much better at all than the X1900XTX. That's a disappointment.

    'Gaming' AMD FX-6300 @ 4.5GHz | Asus M5A97 | 16GB DDR3 2133MHz | GTX760 2GB + Antec Kuhler620 mod | Crucial m4 64GB + WD Blue 2x1TB Str
    'HTPC' AMD A8-3820 @ 3.5GHz | Biostar TA75A+ | 4GB DDR3 | Momentus XT 500GB | Radeon 7950 3GB
    'Twitch' AMD 720BE @ 3.5GHz | Gigabyte GA-78LMT-S2P | 4GB DDR3 | Avermedia Game Broadcaster

    Desktop Audio: Optical Out > Matrix mini DAC > Virtue Audio ONE.2 > Tannoy Reveal Monitors + Energy Encore 8 Sub
    HTPC: Optoma HD131XE Projector + Yamaha RX-V463 + 3.2 Speaker Setup

  10. #935
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by alexio View Post
    Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine how much tweaking per game / per graphics setting can be achieved. I for one am convinced that just changing the name of a game's executable to 3DMark06.exe (for example) changes the performance of the card in that particular game.

    I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back it up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster then ATI would have chosen the simpler 128*1 design.
    Hmm, although I'm not really that technical, shouldn't we see some degree of that stream processing power even with a fixed config? And I wouldn't say ATI knows more about unified architectures than nVidia; I mean, the thing they had is the Xenos GPU in the Xbox 360. So yes, this is their second gen, but the R600 was already under development when that happened, so I wouldn't say too loudly that they have more knowledge. Whatever nVidia did, they did it well. If all those stream processors are unified in DX10, then it should boost performance even more than the fixed ratios, shouldn't it?

    Quote Originally Posted by Miwo View Post
    What's important to me is the present. DX10 benches mean nothing when the games are not even out. By the time Crysis and other DX10 games roll out, the R650 on 65nm will be out, which will be cheaper to manufacture and consume less power... so I don't exactly see the point in jumping the gun and spending on an R600 which allegedly has awesome DX10 performance. DX9-wise, it's not much better at all than the X1900XTX. That's a disappointment.
    Well, I wouldn't say that; people have been waiting very long, so I don't think they want to wait for the R650. I'm not going to, that's for sure. Anyway, the 2900XT is much, much better than the X1900XTX in a lot of benches; sure, in some the difference is small, but the GTS, for instance, seriously beats the X1900XTX in just about every game.

    And I think Crysis is nearer than everyone thinks; I put my money on late June for a playable demo. The R650 will be September-ish, which is another 4.5(!) months from now.
    Last edited by Tim; 05-12-2007 at 10:25 AM.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  11. #936
    Muslim Overclocker
    Join Date
    May 2005
    Location
    Canada
    Posts
    2,786
    I think the problem is that the R600 is very well geared for vector calculations, hence the 4+1 capability of a single shader unit. What happens when you are working with a scalar? Well, if you don't do any optimizations, you will waste compute power if you assign one scalar instruction per shader. Meaning, with better BIOS/drivers (I don't know which is more at work here), you would assign mutually exclusive operations to the same shader, and in a perfect world you wouldn't move on to another shader unit until they are all fully utilized, maximizing performance. What about when you have a vector and 4 shaders available, or 2 shaders half in use? How well can they handle that? These predictions and optimizations will not be easy to get right. That is why, with better drivers, more shaders will be more fully utilized and instructions done in fewer cycles.

    I also have a feeling this has to do with the way DX9/OpenGL optimize for scalar designs.
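
    A minimal sketch of the packing problem described above, assuming a 5-slot VLIW shader unit and a toy instruction stream (the op names and the greedy first-fit strategy are illustrative only, not ATI's actual compiler logic):

    Code:
    # Toy model: pack independent scalar/vector ops into 5-wide VLIW bundles.
    # Each op is (name, lanes_needed); dependencies are ignored for brevity.
    def pack_vliw(ops, slots=5):
        """Greedy first-fit packing of ops into bundles of `slots` lanes."""
        bundles = []
        for name, width in ops:
            for bundle in bundles:
                if bundle["free"] >= width:
                    bundle["ops"].append(name)
                    bundle["free"] -= width
                    break
            else:
                bundles.append({"ops": [name], "free": slots - width})
        return bundles

    # A vec4 op fills 4 lanes; a lone scalar wastes 4 of 5 lanes unless co-issued.
    stream = [("mul_vec4", 4), ("add_scalar", 1), ("rsq_scalar", 1), ("dot_vec4", 4)]
    for i, b in enumerate(pack_vliw(stream)):
        print(f"cycle {i}: {b['ops']} ({b['free']} lanes idle)")

    With naive one-op-per-unit issue this stream would take 4 cycles; packed, it takes 2 with no idle lanes, which is exactly the driver/compiler win being described.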
    Last edited by ahmad; 05-12-2007 at 10:29 AM.

    My watercooling experience

    Water
    Scythe Gentle Typhoons 120mm 1850RPM
    Thermochill PA120.3 Radiator
    Enzotech Sapphire Rev.A CPU Block
    Laing DDC 3.2
    XSPC Dual Pump Reservoir
    Primochill Pro LRT Red 1/2"
    Bitspower fittings + water temp sensor

    Rig
    E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB


    I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.



  12. #937
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    nVidia isn't 128 * 1, it's 128 * 3 (not sure about the 3; fairly sure it is not * 1). Also, nVidia's 128 run faster.

    The last rumors I heard said ATi was a pure 320 * 5 instead of 320 * 4+1.

    This is so much useless fun!

    In some other thread around here you can see that ATi gets crushed by a factor of 3 in the G80-optimized DX10 variance shadow maps benchmark (which is out already). App optimization will remain important, I fear :x

  13. #938
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    nVidia is 128 1+1 or 2+1, can't remember (I think it's 1+1). They run at 1.35GHz or so.
    AMD is 64 4+1 running at ~750MHz.

    320 is a PR gimmick.
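
    Rough back-of-envelope math for the two configs as quoted above, counting a MAD as 2 FLOPs per lane per clock and ignoring the G80's disputed extra MUL (so treat these as illustrative peak numbers only, not measured throughput):

    Code:
    # Peak shader ALU throughput under the configs quoted in this thread.
    FLOPS_PER_MAD = 2                          # multiply-add = 2 FLOPs

    r600 = 64 * 5 * FLOPS_PER_MAD * 0.75e9     # 64 units x 5 lanes @ ~750MHz
    g80  = 128 * 1 * FLOPS_PER_MAD * 1.35e9    # 128 scalar SPs @ ~1.35GHz

    print(f"R600 peak: {r600 / 1e9:.0f} GFLOPS")   # ~480
    print(f"G80  peak: {g80 / 1e9:.0f} GFLOPS")    # ~346

    On paper the R600 comes out ahead, but only if the compiler keeps all 5 lanes of every unit fed, which is the packing problem discussed a few posts up.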
    Crunching for Comrades and the Common good of the People.

  14. #939
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by w0mbat View Post
    Driver: 8.37-4, CPU: E6700, from Europe's biggest PC magazine, c't

    Oblivion, HDR on

    1280 X 1024 noAA/noAF

    2900 XT 49 fps
    88 GTS 48 fps
    88 GTX 48 fps

    1280 X 1024 AA 4X / noAF

    2900 XT 39 fps
    88 GTS 46 fps
    88 GTX 48 fps

    1280 x 1024 AA 8X / noAF

    HD 2900 XT 17 fps
    88 GTS 28 fps
    88 GTX 39 fps


    PREY

    1600 x 1200 AA 8X / AF 16X

    HD 2900 XT 43 fps
    88 GTS 37 fps
    88 GTX 50 fps


    Windows Vista DirectX 10 SDK

    PIPEGS

    2900XT 159 fps
    88GTS 34 fps
    88GTX 63 fps

    CubemapGS, Car, Instancing

    2900XT 23 fps
    88GTS 9 fps
    88GTX 11 fps

    Cubemap, Car, Instancing

    2900XT 18 fps
    88GTS 10 fps
    88GTX 11 fps
    Someone's using an old 8800 driver... I can tell you that at the office we've seen very, very good DX10 numbers out of the G80.

    Quote Originally Posted by Shintai View Post
    nVidia is 128 1+1 or 2+1. cant remember(think its 1+1). They run at those 1.35Ghz or so.
    AMD is 64 4+1 that runs at ~750Mhz.

    320 is a PR gimmick.
    Exactly. Technically, going by AMD/ATi's math, nVidia has 256 shaders going at close to double the speed of ATi's. I love how people keep pointing out "but the R600 has 320 shaders" when in reality it only has 64. The full package won't be usable in a lot of situations, as only the most complex shaders will be able to take advantage of them.
    Last edited by DilTech; 05-12-2007 at 10:43 AM.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  15. #940
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Hmm, although I'm not really that technical, shouldn't we see even with a fixed config some degree of that stream processing power?
    Okay, a little explanation then

    For this example I use the NV40 (6800 series architecture). This chip had 16 pixel pipelines and 6 vertex pipelines. In some games a 12/10 config might have been faster, but the ratio was fixed, so this could not be changed by the driver. To use an easy number, let's say the R600 has 64 pipelines that can perform vertex operations as well as pixel operations. DX9 can only use a static configuration, so each of these 64 pipelines is told by the driver to act as either a vertex pipe or a pixel pipe.

    A possible config is 32/32 pixel/vertex. However, some game may benefit from more pixel processing power and not need so much vertex processing power. The ratio can then be changed by the driver to 48/16, yielding much better performance in that particular game.

    To make this change the driver has to "know" what setting to use for each individual game (and maybe even each game setting); otherwise it uses the standard fixed ratio that I expect the driver to fall back on when it doesn't recognize a game. This way R600 performance can be tweaked per game or per game setting. It takes a lot of testing to tweak the R600 for all the DX9 games that are currently out. Nvidia had the time to do this really well with the G80; ATI had a lot less time.
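
    A toy illustration of the per-game static split described above, with invented game names and ratios (the real driver logic is of course not public; only the mechanism matters):

    Code:
    # Hypothetical driver-side table: executable name -> (pixel, vertex) split.
    # All entries are made up for illustration.
    STATIC_SPLITS = {
        "oblivion.exe": (48, 16),   # pixel-heavy game
        "3dmark06.exe": (40, 24),
    }
    DEFAULT_SPLIT = (32, 32)        # fallback for unrecognized executables

    def choose_split(exe_name):
        """Return the fixed pixel/vertex pipeline split for this DX9 app."""
        return STATIC_SPLITS.get(exe_name.lower(), DEFAULT_SPLIT)

    print(choose_split("Oblivion.exe"))   # (48, 16)
    print(choose_split("unknown.exe"))    # (32, 32) -- the untweaked default

    This is also why renaming an executable, as in the 3DMark06.exe example earlier, can change performance: the lookup keys on the name.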

    And I wouldn't say ATI knows more about unified architectures than nVidia; I mean, the thing they had is the Xenos GPU in the Xbox 360. So yes, this is their second gen, but the R600 was already under development when that happened, so I wouldn't say too loudly that they have more knowledge.
    Maybe so, but your guess is as good as mine. We have no idea when the R600 and G80 designs were finalized, and it's critical to have those facts to make a good judgement. My guess is based on the fact that ATI has had more time to change the design of the R600 whilst working in parallel with game devs working on DX10 titles.
    Whatever nVidia did, they did it well. If all those stream processors are unified in DX10, then it should boost performance even more than the fixed ratios, shouldn't it?
    Yes, in the exact same game with the same graphical settings a game should run at higher framerates with DX10. However, the difference may be very small due to DX9 vertex/pixel ratio tweaking.
    Last edited by alexio; 05-12-2007 at 10:56 AM.
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  16. #941
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Thank you very much for taking the time to explain it. I did understand that more or less, but... am I right in saying that whatever makes the card choose between vertex/pixel shaders on the fly in a DX10 game is the most crucial bit... is it called a scheduler? If that sucks, then it could very well mean that the R600 could beat the G80, right? Or vice versa.

    Thanks again. I try to read up on stuff like that, but most explanations still go way over my head, unfortunately. I'm more of a gamer.
    Last edited by Tim; 05-12-2007 at 10:59 AM.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  17. #942
    Xtreme Enthusiast
    Join Date
    May 2006
    Location
    over the rainbow
    Posts
    964
    Quote Originally Posted by DilTech View Post
    Someone's using an old 8800 driver.... I can tell you that at the office we've seen very very good DX10 numbers out of the G80.
    [...]
    No, it was the newest driver.
    AMD Phenom II X6 1055T@3.5GHz@Scythe Mugen 2 <-> ASRock 970 Extreme4 <-> 8GB DDR3-1333 <-> Sapphire HD7870@1100/1300 <-> Samsung F3 <-> Win8.1 x64 <-> Acer Slim Line S243HL <-> BQT E9-CM 480W

  18. #943
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by w0mbat View Post
    No, it was the newest driver.
    Hey w0mbat, you have some inside info on VR-Zone's article, which has a page called DX10 demos. Do you have any idea which ones we can expect, and can you maybe unveil which is better in those DX10 demos, R600 or G80?

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  19. #944

  20. #945
    Xtreme Enthusiast
    Join Date
    May 2006
    Location
    UK
    Posts
    706
    Good news on the power connectors front: as kinc says, you can use two 6-pin PCIe connectors if you don't have the 8-pin available, and I think we can safely assume kinc knows, if you know what I mean.

    Here's the link:

    http://www.nordichardware.com/forum/...=8433&forum=28





  21. #946
    Xtreme Enthusiast
    Join Date
    May 2006
    Location
    over the rainbow
    Posts
    964
    and we'll see some game benches tomorrow
    AMD Phenom II X6 1055T@3.5GHz@Scythe Mugen 2 <-> ASRock 970 Extreme4 <-> 8GB DDR3-1333 <-> Sapphire HD7870@1100/1300 <-> Samsung F3 <-> Win8.1 x64 <-> Acer Slim Line S243HL <-> BQT E9-CM 480W

  22. #947
    Xtreme Addict
    Join Date
    Nov 2005
    Posts
    1,084
    Quote Originally Posted by DilTech View Post
    Someone's using an old 8800 driver.... I can tell you that at the office we've seen very very good DX10 numbers out of the G80.


    Show those fabulous DX10 numbers, because no one has those numbers....

    The drivers used on the 8800 were the latest.

  23. #948
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Thank you very much for taking the time to explain it. I did understand that more or less, but... am I right in saying that whatever makes the card choose between vertex/pixel shaders on the fly in a DX10 game is the most crucial bit... is it called a scheduler? If that sucks, then it could very well mean that the R600 could beat the G80, right? Or vice versa.
    Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor. Of course it's more efficient to have the scheduler run at the same clock as the stream processors; I'm not sure how big the difference is, though. If the scheduler can assign two threads to one processor then the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.

    I have no information at all regarding the scheduler of the R600, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that from specs alone it's hard to pick the better one. Only benches will tell the truth, after both companies have had time to optimize for DX10 (and ATI for DX9).
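
    A crude model of the half-clock scheduler point above, assuming the scheduler issues on every other shader clock and can hand out either one or two threads per issue (the numbers are invented; this is just the latency argument in code form):

    Code:
    # Idle-cycle model: scheduler ticks at half the shader clock.
    # With 2 threads per issue the SP has work on both shader clocks;
    # with only 1, every other shader clock goes idle.
    def busy_cycles(shader_clocks, threads_per_issue):
        busy = 0
        queued = 0
        for clock in range(shader_clocks):
            if clock % 2 == 0:               # scheduler tick (half rate)
                queued += threads_per_issue
            if queued > 0:                   # SP consumes one thread per clock
                queued -= 1
                busy += 1
        return busy

    for tpi in (1, 2):
        print(f"{tpi} thread(s)/issue: {busy_cycles(1000, tpi)}/1000 cycles busy")

    Under these toy assumptions one thread per issue leaves half the cycles idle, while two per issue hides the half-rate scheduler completely, matching the "penalty is rather small" point above.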
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  24. #949
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by alexio View Post
    Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor. Of course it's more efficient to have the scheduler run at the same clock as the stream processors; I'm not sure how big the difference is, though. If the scheduler can assign two threads to one processor then the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.

    I have no information at all regarding the scheduler of the R600, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that from specs alone it's hard to pick the better one. Only benches will tell the truth, after both companies have had time to optimize for DX10 (and ATI for DX9).
    Dankjewel for the uitleg. (Thanks for the explanation)

    Much appreciated...I guess we'll just have to wait and see.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  25. #950
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Dankjewel for the uitleg. (Thanks for the explanation)

    Much appreciated...I guess we'll just have to wait and see.
    Graag gedaan = no problem
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.
