Don't the pixel pipelines handle shader processes?
Quote:
Originally Posted by Lightman
What HSF mounting holes does the X1600 have? typical ATI or the 7800/X1800 holes?
I was interested in an X1600XT but I plan on sticking with my AGP setup until summer for AMD 65nm + M2 and R600.
So I snagged an X800XT AIW for $270 :)
Wish they made an X1600XT AGP for me to pick up for $100 less, but performance will be better on the older R420 anyway.
ATi is misleading. :p: btw found this card for $159!
anyone seen this mysterious "external pci-e to agp bridge connector" that's supposed to make x1600xt's agp??
I think some company made something like that...it wasn't targeted at the X1600 though...I don't remember who made it
Quote:
Originally Posted by lawrywild
nah, I mean on Connect3D's (and another manufacturer I can't remember) site, it states "AGP 8x configurations also supported with AGP-PCI-E external bridge chip"
Quote:
Originally Posted by jjcom
check it down at the bottom:
http://www.connect3d.com/products/pcie_x1600xt.htm
Oh, that. There is a chip that can be used to make PCI-E dies work with AGP. Rialto chip? I don't quite remember the name.
Albatron made an actual adaptor
Guys, I'm going to try and find the diagrams I was talking about tonight. Then maybe someone could clarify the situation for me. ;)
Whatever the pipeline configuration, seems like the X1600pro for around 120-130 USD is a pretty good buy.
nah, Albatron made an AGP to PCI-E adaptor, so that you could use an AGP card in a PCI-E mobo... :rolleyes:
Quote:
Originally Posted by sabrewolf732
I'm pretty sure it's not rialto because it says external..
It is Rialto ;) External just means it's an actual chip that's soldered on.
Perkam
Oops :slap:
Quote:
Originally Posted by lawrywild
I don't understand how the X1700 is going to be 24 pixel pipelines, 8 ROPs, 8 texture units, and supposedly a 256-bit bus, if the R580 is 16pp??
Could anyone post the source of what the X1700 is going to be? Please?
Thanks!
standard ATI, adjacent to the GPU on the PCB
Quote:
Originally Posted by vapb400
I heard the R580 is going to be 16x3 and the X1700 8x2, I think, but it's all speculation and rumor.
http://www.svc.com/avc-at3.html
http://www.svc.com/avc-at4-r1.html
http://www.svc.com/avc-at2.html
which one fits it :stick:
http://www.svc.com/avc-at4-r1.html
I think this one will...at least near as I can tell.
cool, btw:
http://www.ewiz.com/detail.php?p=AT-...76193f5f440659
$20 cheaper than newegg :banana: :toast: $320 10K+ 05 score. With my budget I'll be able to get an X1600Pro from the egg, but I will have no CPU :( Why aren't there 939 Semprons yet!
What are the clocks on that Sapphire? The specs aren't on Sapphire's site?
....and newegg is completely blank, no products are showing up? WTF?
newegg is sold out
Remember s754 Semprons came along after 939 was in full swing. I'm sure 939 Semprons will come out as M2 is released in the second half of 2006.
Quote:
Originally Posted by sabrewolf732
Btw...Nice link !!!! :eek: $160 for an X1600XT...Very Nice...I'd say go for it :up:
Perkam
The X1300 Non-Pros will own the day. With proper setups, the X1300 Non-Pros at $90 apiece, Xfired, can be OCed beyond Pro stock speeds...
No... really?? I was SO sure the X300s were gonna make a comeback...Dang I'm so out of touch with this stuff... :p:
Quote:
Originally Posted by bassv2
Perkam
I think 939 Semprons are for large OEMs only (HP, Compaq etc.) so they don't have to have a bunch of different motherboards for their low/high end models. They can all have the basic 939 board.
HEHE That 1600XT from ewiz is very very very very tempting.
I would if I had the dough; sadly I'm at a $50 deficit if I go with it and the system I'm planning. Maybe I'll go with 256MB RAM and swing it :stick: Unless someone can find me RAM really cheap
Wait for the X1300 volt mod... :cool:
Quote:
Originally Posted by perkam
lol 2x 1300Pros < X1600XT :p:
Quote:
Originally Posted by bassv2
Leaves you ample cash to buy this and complete the combo: http://www.newegg.com/Product/Produc...82E16835118117 :slobber: .
Quote:
Originally Posted by vapb400
LOL $160 X1600XT + $26 VF700 = $186 ... still $20 cheaper than GS :p:
Perkam
2x 1300 Non-Pros volt modded > X1600XT OCed.
Quote:
Originally Posted by sabrewolf732
price-wise and performance-wise
X1600XT is still cheaper and can also be volt modded :fact:
Quote:
Originally Posted by bassv2
X1300 Non-Pros Xfired are cheaper by $10 :) The X1300, being crappier, has more headroom for OC.
let time tell my friend.... the 6200 was supposed to be crap ;)
hehe I've already got a VF700 just sitting here! :fact: :fact: :fact:
Quote:
Originally Posted by perkam
Hopefully will place my order this week. I need to do a little more xmas shopping before I treat myself :)
Can anybody confirm the Sapphire clocks? Is it the 600/1400 of the PowerColor or the 590/1380 reference clock? Or is that not the reference clock anymore?
really? An X1600XT is $160. Where are you getting X1300s for $75?
Quote:
Originally Posted by bassv2
Sapphire does not have X1600XTs with anything other than 1.4ns GDDR3.
Quote:
Originally Posted by vapb400
Perkam
if I told you, then you'd start loading up on 'em, and the price would go up. :(
Quote:
Originally Posted by sabrewolf732
i'd still pick an X1600pro over the X1300 series... might as well run a 9700/9800 if you want to use an x1300
I totally agree. After playing with a couple X1300 Pros, I have to say that the performance will not satisfy most gamers. They are fun to play with and interesting from a technology standpoint but that's about it.
Quote:
Originally Posted by STEvil
On a positive note, they are a step up from the X300 / X550 / X600s.
http://www.theinquirer.net/?article=28422
XFire X1600XT at stock beats stock X1800XL at 3dmark05 :)
8730 :)
71% increase, not bad. So if you get both up to clocks good for 6500 points each, you should be at around 11k. Not too shabby. Of course that's not considering other bottlenecks. I'm sure with water/volt mod you can hit 12k :fact:
Not bad at all. But quite a few stock XLs can break 9k without too much fuss ;)
Quote:
Originally Posted by Shadowmage
X1600XT is also at stock, remember ;)
SOMEONE HURRY UP AND GET ONE! :D
... I mean two ;)
huh? I don't think so...
Quote:
Originally Posted by bassv2
and they are in no way cheaper!
a 1600Pro here costs 125€, a 1300Pro costs 120€ and a 1300 Non-Pro costs 100€
there's the Asus 1300 Non-Pro which is supposedly going to cost only 75€, but it's out of stock and nobody knows when they'll finally arrive.
so those would be a better deal than getting a 1600XT...
but I'm not too sure... the 1300 Non-Pros from Asus probably come with really crappy memory and won't OC a lot.
a 1600XT scores 9200 in 2k3 and 5200 in 2k5
can two 1300 Non-Pros in CF, OCed, beat that?
edit: althes got 4500 in 2k5 with 2 1300Pros and a 1.8GHz A64; the other scores I posted were with a 2.4GHz A64... so I guess 2 1300 Non-Pros score about as well as a 1600XT.
BUT the XT isn't OCed, AND this is only in FM benchmarks where crossfire scales very well... in games they won't be as fast.
so 1600XT > 1300 Non-Pro crossfire
but I still think that 1600Pro crossfire > 7800GT
let's see :D
according to the shop my cards arrive in 2 days :banana:
Just hold on boys saaya will be rocking and rolling with xfire shortly
ATI's X1k series aren't a disappointment.
They now take the Best Bang for the Buck title away from the 6600GTs.
And the late arrival of the X1600 created much hype...
well if ATI hadn't lowered the price on the 1600s they would have been a big failure I think... and there's a big gap between 1600XT and 1800XL, a price and performance gap... I hope the 1700 will close the gap soon; ATI has always been good with performance-mainstream cards: 9500Pro, 9800Pro/X700, X800XL...
Quote:
Originally Posted by bassv2
but it looks like it's some months away, so I'll have some fun with the X1600 till then :D
if there were widely available 6600 non-GT cards with SLI and good memory, they would beat crossfire 1600s performance- and price/performance-wise I think.
there are 6600 non-GT cards that are SLI capable at newegg for $100!
they have really lousy memory though I think...
and tbh I think the X1000 series is a disappointment, not a big one though.
btw, I'm almost always disappointed in new hardware :P
the 1000s helped ATI catch up with nvidia, but not beat them.
I'm still confused how a 16 pipe chip can have 2x as many transistors as the previous generation 16 pipe chip :D
I was sure it was 32 pipes and would perform at least 50% faster than the X850, and even faster with AA and AF, but it's only around 30% faster, in some games only 10% in 1024x768, though not in high resolutions...
they are good cards though, don't get me wrong, but I expected more since in the past ATI has made big jumps with their new cards.
Two X1600XTs get 8,730 in 3dmark05 stock:
http://www.theinquirer.net/?article=28422 (Btw, that's an X1800XT that's supposed to be mentioned, not XL --> Stupid Inq)
Quote:
Originally Posted by TheInqy
That's 71% efficiency for the X1600XTs in crossfire relative to the X1800XTs w/ 50% efficiency in crossfire.
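The crossfire gain quoted above can be sanity-checked in a couple of lines. A sketch: the dual-card 8730 comes from the Inquirer link, while the single-card 3DMark05 score of ~5100 is an assumption based on the figures posted earlier in the thread.

```python
# Fractional gain from adding a second card in crossfire.
def crossfire_gain(single_score, dual_score):
    return dual_score / single_score - 1

single_x1600xt = 5100   # assumed single-card 3DMark05 score (from thread figures)
dual_x1600xt = 8730     # dual-card score quoted from the Inquirer link

gain = crossfire_gain(single_x1600xt, dual_x1600xt)
print(f"crossfire gain: {gain:.0%}")  # close to the 71% quoted above
```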
Perkam
ok well I just finished my WC loop and the 1600 is now idling at 41C :) much better than 55 imho.
Scores to come later tonight as I'm leak testing and priming right now
:woot: Too many things Too fast...must buy !!! :p:
Perkam
A 6800 can't hold a candle to an X1600XT, let alone a 6600GT.
Quote:
Originally Posted by saaya
6600gt->6800->x1600xt->6800gs. That's how the performance seems to be.
from what I've seen it looks like 1800 crossfire scales 60%+ in 2k3 but only 40% in 2k5, while X850 crossfire scales 60%+ in 2k3 and 60% in 2k5... could a 2.6GHz A64 already be limiting the 1800s in crossfire? :confused:
Quote:
Originally Posted by perkam
I said 6600 SLI, not a single 6600 or 6600GT.
Quote:
Originally Posted by sabrewolf732
6600 SLI is faster and a better bang per buck than a 1600XT is what I said.
as I said, there are 6600 non-GT SLI capable cards at newegg for $100
a 1600XT beats a 6800GS? I don't know... at stock maybe... but those GS cards OC to insane speeds I heard.
and you guys have to keep in mind:
there's barely any game that scales even close to as well as 2k3 and 2k5 with SLI/crossfire!
some games only gain 10%!
hmmm... Sapphire X1600XTs with 256MB GDDR3 for $196 here in Canada (ncix.com).
Too bad they are Sapphire though; Sapphire seems to have issues :(
Now that ATI's pricing structure has come down for the X1600 it looks a pretty good bargain.
I wonder though whether the fast memory can make up for the 128-bit bus, especially in crossfire mode, compared to the, admittedly more expensive, 256-bit 6800GS?
Will be interesting to see. My feeling is it will be faster in 3dmark benchmarks but I am not so sure for games. Interesting competition between the two.
Regards
Andy
How would an X1600XT perform compared with the Connect3D X800GTO @ 16 pipes, you think?
x800gto will be faster IMO
Quote:
Originally Posted by stryg
x1600xt might oc better and end up faster, plus you have HDR+AA and sm3.0...
Which you can't use...the card is too slow for anything, let alone HDR...the X1800XT is struggling with AA+HDR, and you think a $200 card can do it?
Quote:
Originally Posted by lawrywild
And it doesn't overclock better, since the 6800GS runs way over 500MHz for the core
Sorry to bring this up again, guys. I'm still trying to get this straight though. :cool:
After re-reading all the X1600 reviews I could find (including the links in this thread), it seems to me that saying the R530 has a 4x3 configuration is incorrect.
Since the R530 has only 4 TMUs (obviously not enough to keep up with 12 pixel pipes), its theoretical maximum multi-textured fill rate should be the same as the R515's. And of course the 4 ROPs won't keep up either.
Just because performance will be bottlenecked and the 12 pipes not fully put to use does not mean they are not there.
- Corrections or different interpretations? (As if anyone needed an invitation. :D )
Now that I think about it, maybe the R530 could be described in traditional GPU terms as: 12 x 1/3, 5vp :)
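DrJay's TMU argument above reduces to simple arithmetic: peak multi-textured fill rate is core clock times TMU count, so 4 TMUs give the RV530 roughly the same texturing ceiling as the RV515 despite its 12 fragment pipes. A sketch; the clocks here are assumed reference values.

```python
# Peak multi-textured fillrate in Mtexels/s: core clock (MHz) x TMU count.
def peak_texel_fillrate(core_mhz, tmus):
    return core_mhz * tmus

rv530_x1600xt = peak_texel_fillrate(590, 4)   # assumed 590MHz reference clock
rv515_x1300pro = peak_texel_fillrate(600, 4)  # assumed 600MHz clock

# Despite 12 fragment pipes, the RV530's texturing ceiling sits in the
# same class as the 4-pipe RV515: the 4 TMUs are the bottleneck.
print(rv530_x1600xt, rv515_x1300pro)  # 2360 2400
```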
7 pages of thread, and still no scores :brick:
They're coming Protiv ;) Both from Saaya and Dillusion :up:
Perkam
it does? who says that? :confused:
Quote:
Originally Posted by STEvil
the 16 pipe GTO would def beat it!
Quote:
Originally Posted by stryg
here, I was wondering the same, so I compared the xbitlabs results from different reviews. the black bar on the X850XT score is the 1600XT score:
http://img105.imageshack.us/img105/3...otal7vc.th.gif
http://img105.imageshack.us/img105/3...otal7cl.th.gif
http://img105.imageshack.us/img105/7...pure1ek.th.gif
http://img105.imageshack.us/img105/7...andy2kk.th.gif
http://img105.imageshack.us/img105/4...pure7bn.th.gif
http://img105.imageshack.us/img105/4...andy6ja.th.gif
http://img105.imageshack.us/img105/9...pure5qf.th.gif
http://img105.imageshack.us/img105/6...pure2cy.th.gif
http://img105.imageshack.us/img105/6...andy5au.th.gif
http://img105.imageshack.us/img105/7...andy7xk.th.gif
http://img105.imageshack.us/img105/4...pure0jo.th.gif
http://img415.imageshack.us/img415/6...andy3vi.th.gif
can you explain it in more detail? didn't get it :D
Quote:
Originally Posted by DrJay
cards were sent out today from the shop, should be here tomorrow :banana:
Quote:
Originally Posted by Protiv
What?? The X1800XT only has 16 pixel pipelines!! Why would the X1700 have 24?? :stick:
Quote:
Originally Posted by Shadowmage
Because the R580 will have 48....
Isn't it 48 shaders, but only 16 pipes (4 shaders per pipe)?
Quote:
Originally Posted by Always
Hi
I have tested a GECUBE Radeon X1600Pro 256MB with 2.5ns memory; default clocks are 500MHz GPU and 400MHz memory (800 DDR).
I overclocked this card with air cooling only and :brick: This card is not good for o/c. Highest clocks on the GPU were 500@600MHz and memory 400@450MHz :mad: More MHz = artifacts on screen. Scores for this card are not good; RivaTuner tells me: 128bit RV530 4x1. Tested with Omega Catalyst 5.12, ATI Catalyst 5.13 beta and a P4 2.66@3.92GHz.
Quote:
Originally Posted by L@TrO
emmmmmmm.....
3 shaders per pipe (3x16=48); same for the x1600, but here we have 3x4=12
the x1700 will have 24 shaders (more than the x1800) but 'only' 8 ROPs and 8 TMUs (half of the x1800), so performance will depend on the tasks (games) :D
PS. again, x1700 is 3x8=24 ;)
SAAYA - I hope tomorrow you will give us some screens from marks and pictures of the cards :D :D
SAAYA will deliver the goods :toast:
Quote:
Originally Posted by Lightman
:fact:
Quote:
Originally Posted by Tim
Quote:
Originally Posted by DEVIL K-ce
apply my vgpu mod for this card!!
and how is this possible, an x1600pro earlier in Poland than in the UK?? :stick: something is wrong ;)
PS: the 4x1 from RivaTuner refers to the ROPs (same as on the x1300)
hope so as well :D
Quote:
Originally Posted by Lightman
night shift in one hour; I get home at 6am, then I will sleep until the cards arrive and will post scores then :D
btw, I'm confused about the configuration of the VPUs...
what's the first number and what's the second number exactly?
what's the 1300's configuration? 2x2? 4x1? 1x4? :hm:
the 1600 is 4x3, right? or 3x4?
and the 1700 will be 8x3? or 3x8?
r520? and r580?
x1300 is 32bit x 4 crossbar
Quote:
Originally Posted by saaya
Quote:
Originally Posted by saaya
afaik, x1300 = 4x1, x1600 = 4x3, x1700 speculated to be 8x3, r520 = 16x1, r580 speculated to be 16x3.
that's RIGHT sabrewolf732 :woot: :clap: :clap: :clap: :woot:
and with xfire on r580 you will end up.... with 96 programmable shader pipes; 32 ROPs; 32 TMUs; 16 (or 20) programmable vertex units; ... :slobber: :slobber: :D :D :banana: :banana:
so here it is, my OC on stock voltage with water (albeit warm water :D)
http://dillusion.net/pics/x1605.jpg
if you can't see, that's 658/819(1638). I would like to know how to raise the core voltage through ATITool; I don't see the option anywhere.
Nvidia just got pwned. The days of the 9800 are BACK!
Quote:
Originally Posted by Lightman
The X1k series were just dummies to see how well the new architecture works out. Performs well with few pipelines, and now it's getting so many pipelines WHEW
not bad, though I expected 6k+. You got 5.11?
Quote:
Originally Posted by Dillusion
5.13
Quote:
Originally Posted by sabrewolf732
hmmmm now let's speculate a bit about r580 and x1700 then :D
Quote:
Originally Posted by sabrewolf732
x1300pro = 4x1, x1600xt = 4x3, right?
and an x1800xt is 16x1 and a 1300pro is 4x1, right?
we know that (more or less):
1800xt/r580 = 1300pro/1600XT
and can now guess that (more or less):
r580 = ? x 1800xt performance wise
but since the 1600 vs 1300 comparison is complicated, as they have the same amount of max threads and a similar design in some aspects but different in others, like 2 vs 5 texture units and 12GB/s vs 22GB/s bandwidth, let's not focus on this now, as it's complicated to compare the two and get a reliable result from it.
we also know other things which make it way easier to speculate about r580 and 1700 performance :D
1300 vs 1800
core clock
1300pro is at 600
1800xt is at 625
memory bandwidth
1300 has 12gb/s
1800xt has 48gb/s
vertex processors
1300 has 2
1800 has 8
texture units
1300 has 4
1800 has 16
render back ends
1300 has 4
1800 has 16
zcompare
1300 has 4
1800 has 16
max threads
1300 has 128
1800 has 512
so the core clock is almost identical, and the bandwidth, quads or pixel processors, and all other aspects of the 1800 are exactly 4 times a 1300pro.
so a 1800xt is 400% of a 1300 hardware- and bandwidth-wise.
now let's look at the performance:
2003
5853 vs 16359: 1800xt = 280% of 1300pro
2005
2786 vs 8929: 1800xt = 320% of 1300pro
riddick 10x7
33.2 vs 98.1: 1800xt = 295% of 1300pro
so a 1800 performs around 3x as well as a 1300 when it comes to raw performance.
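Averaging the three benchmark ratios quoted above checks the "around 3x" claim (scores taken straight from the post; a quick sketch):

```python
# (1300pro score, 1800xt score) pairs as posted above.
pairs = {
    "3DMark03":     (5853, 16359),
    "3DMark05":     (2786, 8929),
    "Riddick 10x7": (33.2, 98.1),
}

# Per-benchmark speedup of the 1800xt over the 1300pro.
ratios = {name: hi / lo for name, (lo, hi) in pairs.items()}
avg = sum(ratios.values()) / len(ratios)

print(ratios)          # each sits between ~2.8x and ~3.2x
print(f"{avg:.2f}x")   # close to 3x overall
```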
we know that (more or less):
1. 1800 = 4x 1300pro hardware wise
2. 1800 = 3x 1300pro performance wise
3. 1700xt = 2x 1600xt hardware wise
4. r580 = 4x 1600xt hardware wise / r580 = 2x 1700xt hardware wise
and can now guess that (more or less):
1. 1700xt = 1.5x 1600xt performance wise
2. r580 = 3x 1600xt performance wise / r580 = 2x 1700xt performance wise
this only applies IF the 1700xt and r580 have the same relative bandwidth per quad/pixel pipeline and clock speeds as the 1600xt, and the new way of organizing the pixel and vertex processors etc. in the VPU scales the same from increasing the quads as it does from the 1300 to the 1800. the first is unlikely, but more about that later; the latter is very likely, and even if it scales differently it should scale similarly, so our guesstimations wouldn't be that far off.
1600xt vs 1700xt (speculation)
2003
9112 vs 13668
2005
5209 vs 7813
riddick 10x7
63.4 vs 95.1
1600xt vs r580 (speculation)
2003
9112 vs 27336
2005
5209 vs 15626
riddick 10x7
63.4 vs 190.2
to perform like this the 1700xt would need the following specs:
8x3 vpu design
24 pixel processors
10 vertex processors
8 texture units
8 render back ends
16 z compare units
256 max threads
590mhz core
44gb/s bandwidth =~1400mhz memory clocks
to perform like this the r580 would need the following specs:
16x3 vpu design
48 pixel processors
20 vertex processors
16 texture units
16 render back ends
32 z compare units
512 max threads
625 mhz core
88gb/s bandwidth =~2800mhz memory clocks
i doubt a mid-end card will have 1400MHz memory on it, but it's possible without making the card too expensive to be competitive. 2800MHz memory, though, won't be possible even with GDDR4... maybe OCed, but I highly doubt there will be 2800MHz memory. let's say they manage to get memory at around 2000MHz, which is realistic if you ask me. that means they only get around 60GB/s bandwidth. I haven't seen any results of how the x1000s perform with less bandwidth and how much it kills performance, but I don't think it will have a big impact. let's not forget that it will need 2x as much CPU power as well, so I'd say the 1700xt should be good for 12k in 2k3 and 7k in 2k5, which is a little pessimistic, but I don't wanna create any false hopes, and hardware unfortunately :D does not scale linearly, so it's realistic I think.
r580 should be good for 23k in 2k3 and 13k in 2k5
then again, the 1600, 1300 and 1800 results I used are from a 2.4GHz A64; by the time the r580/1700xt are out they will be tested with A64 X2s and Intel dual cores, which should offer more CPU power, plus ATI might tweak the design some more and up the clock speeds, so r580 might do 25k in 2k3 and 15k in 2k5... maybe that's even what ATI set as a goal for themselves... who knows.
this is all speculation based on the assumption that r580 is 4x a 1600xt hardware wise and 1700xt is 2x a 1600xt hardware wise, which is what everybody seems to say/believe.
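The whole estimate above boils down to one rule of thumb: 4x the hardware gave about 3x the performance from 1300Pro to 1800XT, i.e. performance scales at roughly 75% of the hardware ratio. A sketch that reproduces the speculated scores from the measured X1600XT numbers (matching the post's 13668 / 7813 / 27336 / 15626 up to rounding):

```python
# Rule of thumb from the 1300pro -> 1800xt comparison: performance grows at
# roughly 75% of the hardware ratio (4x hardware gave ~3x performance).
def predicted_score(base_score, hw_ratio, efficiency=0.75):
    return base_score * hw_ratio * efficiency

# Measured X1600XT scores quoted earlier in the thread.
x1600xt = {"3DMark03": 9112, "3DMark05": 5209}

x1700xt = {k: predicted_score(v, 2) for k, v in x1600xt.items()}  # 2x hardware
r580    = {k: predicted_score(v, 4) for k, v in x1600xt.items()}  # 4x hardware

print(x1700xt)  # {'3DMark03': 13668.0, '3DMark05': 7813.5}
print(r580)     # {'3DMark03': 27336.0, '3DMark05': 15627.0}
```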
very nice analysis. According to you, r580 is gonna pwn. :woot:
lol, you're kidding right?
Quote:
Originally Posted by bassv2
nvidia didn't get pwned; ATI got pwned when nvidia released their 7800GTX 512MB
now let's see what ATI's next cards perform like and how nvidia's answer to those will perform before we jump to any conclusions ;)
and the 1k series were just dummies to see how the architecture works?
I don't think so. :D
sabrewolf732, what cpu speeds for your scores?
I think you mean dillusion ;) And the x1800xt is much closer to the 7800gtx with the new driver series. Also, seems to be about even in crossfire as well.
Quote:
Originally Posted by saaya
of course, the more VPU power you add, aka SLI/crossfire, the more the CPU becomes the limiting factor, and the difference between two cards that perform differently grows smaller and smaller ;)
Quote:
Originally Posted by sabrewolf732
and yes, the 1800xt is on par with the 7800gtx, but the 7800gtx 512mb beats it silly :D it's barely available :rolleyes: but it's faster than the 1800xt. check out the review marcus made, comparing the 7800gtx 512mb and 1800xt voltmodded and with dry ice and even ln2 I think.
the 7800gtx had a cold bug somehow and couldn't run below -20°C and he didn't have time to do a vmem mod, and still the 7800gtx beat the xt!
it got 12k and wasn't maxed out, and the 1800xt only 11k and it was maxed out vmod- and cooling-wise...
so what cpu speed did you run that 1600 with?
dillusion, what was the cpu speed?
that's a pretty slow x1800xt for that kind of cooling :stick:
Quote:
Originally Posted by saaya
Me. Maybe I'm just unlucky and have to deal with a lot of crappy sapphire stuff? :(
Quote:
Originally Posted by Saaya
You guys might try comparing fillrate scores between the x1300 and x1600 series... x1800 too if you can. There is a program called rendertree or something (forget exactly, don't have it on hand) that tests some stuff like fillrate very well... might be worth looking into.
actually, the x1800xt beats the gtx pretty easily and comes very close to the 512. Look at new benchies.
Quote:
Originally Posted by saaya
my bad, the 1800xt was on air??? :eek:
Quote:
Originally Posted by STEvil
hope so :S
Quote:
Originally Posted by STEvil
hmmm why should it be interesting?
Quote:
Originally Posted by STEvil
everyone's spouting this 4x3, 16x3 stuff like they forgot architectures still work in quads, not triads.
that's what I wondered about as well... so it's wrong?
Quote:
Originally Posted by Cybercat
gotta read the beyond3d stuff again :D
what it is: the four in 4x3 is the four fragment pipes in each quad, and there are three of them (three quads).
Quote:
Originally Posted by saaya
I think everyone took the claims from ATI that each pipe is three times as efficient, in tandem with the confusion about TMUs versus pipelines, to mean that each fragment is somehow really three fragments, or some messed up explanation like that. It's a big mess, and not a lot of people seem to really get it straight. You can't have 16 fragments in a single quad, since obviously a quad implies only four.
Let me try to clear this up right now. It's 12 shader fragments, or three quads, four texture mapping units, four ROPs. The texture units in the RV530 are sort of enhanced from those in the R520. Each texture mapping unit can serve three fragments at once. So it only takes four to handle all shader units in the RV530. Beyond3D talks about this as well.
For the R580, there are 48 shader fragments, or 12 quads, 16 texture mapping units, and 16 ROPs. Again, each texture mapping unit can address up to three fragments at once, so 16 covers all 48 shader units.
For the rumored RV560's configuration, there are 24 shader fragments, which means there must be 8 TMUs to cover them all. The number of ROPs doesn't matter, since there's traditionally a crossbar between them all, but presumably they also work in quads, just like the shader units.
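Cybercat's arithmetic above can be written out directly: fragment pipes come in quads of 4, and each enhanced TMU serves 3 fragments, so TMU count is fragments divided by 3. A sketch using the configurations named in the thread:

```python
# Derive quad and TMU counts from a fragment-pipe count, per the
# quad/TMU arithmetic described above.
def derive(fragments):
    assert fragments % 4 == 0, "fragment pipes come in quads of 4"
    assert fragments % 3 == 0, "each TMU serves exactly 3 fragments here"
    return {"quads": fragments // 4, "tmus": fragments // 3}

print(derive(12))  # RV530 (X1600): {'quads': 3, 'tmus': 4}
print(derive(48))  # R580:          {'quads': 12, 'tmus': 16}
print(derive(24))  # rumored RV560: {'quads': 6, 'tmus': 8}
```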
Your core clock should start improving with higher voltages...which should make 6k runs much easier.
Quote:
Originally Posted by Dillusion
Perkam
then the voodoo 2 never existed, because it was a 1x2 architecture!! omgz!!!
Quote:
Originally Posted by Cybercat
:stick: :slap:
;)
They work however the manu builds them.
very funny, comparing a voodoo 2 to current architectures. While we're at it, let's take the original Radeon and Radeon 7000 series, with their 2x3 architecture!
Quote:
Originally Posted by STEvil
Good call! :toast:
Quote:
Originally Posted by Cybercat
I still think it is more accurate to say 12 x 1/3 (only 4 texture units). Since when do the texture units get top billing :D, with a set trio of pixel pipes assigned to each?
That fits with the diagrams I was talking about earlier in this thread... they show 3 pipeline quads. Just because they are fragment pipes doesn't mean there aren't 12.
Quote:
Originally Posted by Cybercat
270x9 = 2430MHz
Quote:
Originally Posted by althes
anyone know how to raise the core voltage?
Anyone know which software to use for voltage adjustments?
hey saaya where you at?
Quote:
Originally Posted by Dillusion
yeap!
look into this thread
http://www.xtremesystems.org/forums/...ad.php?t=83562
enjoy!
and break some 6k :D