I may try to mod with a pot.
I am going to post pics without heatsink tomorrow.
You ain't kidding, it's just sad that the mem can get 1.8ghz and barely 630 on the core. What sort of card is this?Quote:
Originally Posted by saaya
If these cards were able to use voltage thru software they would really be a handful.
I have a couple of 47k resistors I am tempted to solder on and see what happens.
:D
I did the pencil mod; a single card can reach 658/900, but in xfire it's limited to 630/873. Not nice.
658 is still really bad althes, my cards can run that without any mods if i open the window and let some fresh air in....
are you sure the mod is working?
looks like it's not, if you ask me...
I think the problem is no power plug..
edit
what is power consumption of 256mb GDDR3 vs. 512mb/256mb GDDR2?
Depends on what speed you run them at.
edit:spelling
my badQuote:
Originally Posted by Welz
the difference is quite large, a 1800xl consumes 60W, a 1800xt 110W under full loadQuote:
Originally Posted by STEvil
the cards have both the memory and core clocked at different speeds, and at least vgpu is lower for the xl, if not vmem as well. the speeds dont have a big impact on heat dissipation/power consumption, its the voltages that make the big difference afaik.
id guess that the power consumption of the xl vpu is 40W and the xt vpu is 60W
that means the memory of the xl consumes 20W, which sounds about right imo, and the memory of the xt consumes around 50W, which also sounds about right considering its double the memory and clocked at higher speeds.
so 256mb gddr3 consumes around 20W, maybe 25W when clocked to high speeds. no idea about 512mb gddr2 :shrug:
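The arithmetic behind that estimate, as a quick sanity check (all figures are the forum estimates above, not measurements):

```python
# Memory power = total board power minus guessed VPU power.
# All numbers are the posted estimates, not measured values.
boards = {
    "1800xl": {"total": 60, "vpu": 40},   # W under full load
    "1800xt": {"total": 110, "vpu": 60},
}

for name, b in boards.items():
    mem_w = b["total"] - b["vpu"]
    print(f"{name}: ~{mem_w} W left for the memory")
```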
maybe stevil is right... one card shouldnt be a problem for the mainboard to power, but two cards... and it makes sense, when one card gets oced it ocs higher than when both cards are oced...
and all our cards seem to max out at 660 core when we use them alone, althes, you even vmodded them and still couldnt get higher than 660... maybe it has to do with the power consumption of the card being more than the pciE slot can supply?
but i dont think so... the cards dont run that hot... at least mine dont...
and theres an easy way to find out, if we lower the memory clock we should be able to reach 670 core.
if not then its def the limit of the core id say... or a limit of atitool maybe...
edit:
a 1600xt has a power consumption of 40W max, theres no way that ocing them without vmodding gets you to 60W, thats the max the pciE slot can power i think.
I thought a higher freq. = more work being done = more power consumption.Quote:
Originally Posted by saaya
Take a gpu, for instance. An increase in clock freq., even W/O a voltage increase, can increase temps. At the very least you'd be losing more to heat production at higher speeds.....wouldn't you? :shrug: A cpu / gpu / ram chip should draw more current the faster you run it...to a point.
Oh well, someone will correct what I've screwed up here.....but I always thought voltage increases were only indirectly responsible for temps...as they allow for higher current and freqs.
EDIT: On a similar topic; Contrary to popular belief, temp. control on CPUs / GPUs / ICs will not keep them safe regardless of voltages. Too high a voltage can destroy the P/N junctions within a transistor.
its a popular belief that temp protections save peoples hardware from dying from too much vcore/vgpu? :confused:
that sounds weird... :D
if you increase vcore/vgpu/vdd you get a bump in temps no matter what clockspeeds the ic is working at, and even in idle afaik.
increasing the speed the ic works with also increases the power consumption and heat output, yes, but not as much as increasing vcore/vgpu/vdd
at least in my experience...
check your cpu temps at a low speed and then at stock or an oced speed under load, using stock vcore. then increase vcore and check the temps under load at the low and at the high clockspeed.
increasing vcore by 10% results in a much bigger power boost/heat boost than increasing the clockspeed by 10%!
the latter usually only increases the cpu temp by 1-2°C under load, if at all...
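A rough illustration of why the voltage bump dominates: to first order, CMOS dynamic power scales as C·V²·f, so +10% voltage costs about +21% power while +10% clock costs about +10%. This is a simplified model (it ignores static leakage); the 60 W baseline is a made-up example figure:

```python
def dynamic_power(base_w, v_scale=1.0, f_scale=1.0):
    """Simplified CMOS dynamic power model: P ~ C * V^2 * f."""
    return base_w * v_scale**2 * f_scale

base = 60.0  # hypothetical GPU load power in watts
print(dynamic_power(base, v_scale=1.10))  # +10% vcore -> ~72.6 W (+21%)
print(dynamic_power(base, f_scale=1.10))  # +10% clock -> ~66.0 W (+10%)
```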
That is because if you increase Vgpu, you are also increasing the current and so, power consumption. Voltage is just a difference of potential.
Saaya, RE: the first item you quoted me on.... Many times in these and other forums, I've read that the overvolt given to a processor doesn't matter so much as long as its temp is kept down.
I may have been getting a little OT....but I thought not too much...
consumption or dissipated as heat? Some are more efficient than others so they could be consuming more amps (therefore more wattage) but still put out less heat.Quote:
Originally Posted by saaya
Anyways, max PCI-E 16x power supply is 75w IIRC, but we both know thats a load of bull :fact:
Wow, these results are different...
Amazing how well they go in 3dmark05 compared to 3dmark03.
In 2003 I get 12147 with an AGP 6800nu, vmodded and unlocked/overclocked too, but only on an athlon XP http://service.futuremark.com/compare?2k3=4059557
In 05 however, I managed to push 5201 marks out of the 6800nu agp
http://service.futuremark.com/compare?3dm05=949255
This shows how much better the new ATI series can do shader heavy stuff, but how mediocre they are in less intensive stuff
for cpus thats true in my experience... for videocards and memory etc things are different :)Quote:
Originally Posted by DrJay
keeping them cool alone doesnt mean they are safe no matter what voltages you bump through them :D
dont get what you mean, consumed power = dissipated heat... or where else should the consumed power go? ^^Quote:
Originally Posted by STEvil
99% of the power consumed in ics is dissipated as heat afaik.
and the numbers are from xbitlabs who modified a mainboard to measure the power consumed through the pciE slot, which turned out to be very accurate.
so 40W is power consumption under load.
and well... 75W for the pciE slot isnt bs i think... its just not 75W but a certain amount of W on each rail... these 1600s seem to suck really hard on the 12v rail though... maybe the board cant keep the 12v rail in the pciE slots high enough?
hmmm and hence vgpu drops or gets unstable?
imma measure the volts under load and idle on the cards.
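For reference, as far as I recall the PCI Express CEM spec does split the x16 slot's 75 W budget per rail rather than treating it as one pool. The current limits below are from memory and worth double-checking against the spec:

```python
# Approximate per-rail limits for a x16 graphics slot (from memory of the
# PCI-E CEM spec -- treat as assumptions, not gospel).
rail_limits_w = {
    "12V":  12.0 * 5.5,  # up to 5.5 A -> 66.0 W
    "3.3V": 3.3 * 3.0,   # up to 3.0 A ->  9.9 W
}
total = sum(rail_limits_w.values())
print(f"total budget: ~{total:.1f} W, but each rail is capped separately")
```

If those limits are right, a card that leans hard on the 12 V pins can hit the 66 W ceiling well before the nominal 75 W.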
Morgoth, i wouldnt say they are powerful in intense stuff, they are just powerful in different intense stuff, aka shaders.
the question that pops up though is how good is shader power if the card cant handle the geometry and texture load?
the 1600 shader power reminds me of the geforce fx5200, dx9 wow!, but it couldnt even run simple pixel shaders efficiently...
same with the 1600s, sm3.0, wow, shader power, wow, but the cards are way too weak to run future games that heavily use pixel shaders and hdr...
even in crossfire...
so it makes more sense to buy a x850xt, which is faster, and get a REAL sm3.0 card later when you really need sm3.0 and hdr capabilities.
but im not too sure, i wish i had a day of defeat copy so i could check hdr in that game, its the only game i know with hdr. sure, far cry has a hdr mod as well, but its buggy and more of a nice thing to play with than something you want to run the entire game.
Through to ground. T-bred A vs. T-bred B (1 layer on top of the core is the difference).Quote:
Originally Posted by saaya
dead wrongQuote:
99% of the power consumed in ics is dissipated as heat afaik.
Got a link to this? Personally i'd just set up a system with a PCI vid card and measure idle/load draw, then add the PCI-E card and see what increased and how much under idle/load. Probably more accurate than anything they managed.Quote:
and the numbers are from xbitlabs who modified a mainboard to measure the power consumed through the pciE slot, which turned out to be very accurate.
so 40W is power consumption under load.
Drooping voltage means extra heat generated at the connector or somewhere in the motherboard where the restriction is, which means less clean power and lower clocks. I still say 75w is BS.. , ~30rms at best...Quote:
and well... 75W for the pciE slot isnt bs i think... its just not 75W but a certain amount of W on each rail... these 1600s seem to suck really hard on the 12v rail though... maybe the board cant keep the 12v rail in the pciE slots high enough?
You will need to measure at the PCI-E slot or somewhere on the PCB of the card where it comes out of the PCI-E slot. Should have my x1600xt in a couple weeks......Quote:
hmmm and hence vgpu drops or gets unstable?
imma measure the volts under load and idle on the cards.
HL2 Lost Coast too. I've got all 3 and all i've got to say really is that HDR seems to be a riveboy gimmick so far :slapass: (as in we already get the effect when moving from a dark room to a light one as our eyes adjust to the brightness of our monitor.. hence why gaming in a dark room then walking outside into the daylight sucks :toast: )Quote:
Morgoth, i wouldnt say they are powerful in intense stuff, they are just powerful in different intense stuff, aka shaders.
the question that pops up though is how good is shader power if the card cant handle the geometry and texture load?
the 1600 shader power reminds me of the geforce fx5200, dx9 wow!, but it couldnt even run simple pixel shaders efficiently...
same with the 1600s, sm3.0, wow, shader power, wow, but the cards are way too weak to run future games that heavily use pixel shaders and hdr...
even in crossfire...
so it makes more sense to buy a x850xt, which is faster, and get a REAL sm3.0 card later when you really need sm3.0 and hdr capabilities.
but im not too sure, i wish i had a day of defeat copy so i could check hdr in that game, its the only game i know with hdr. sure, far cry has a hdr mod as well, but its buggy and more of a nice thing to play with than something you want to run the entire game.
dont get what you mean...Quote:
Originally Posted by STEvil
then where does the power go?Quote:
Originally Posted by STEvil
i think those guys know what they are doing...Quote:
Originally Posted by STEvil
and nope, because then youd measure the efficiency ratio of your psu as well, which varies from psu to psu and ambient temp and whether you have a 50hz or 60hz outlet or 120v or 220v etc...
no big impact, but this guy knows what hes doing and wanted to get as close as possible to the real numbers :D
http://www.xbitlabs.com/articles/vid...-x1000_14.html
no idea what you're talking about... explain :DQuote:
Originally Posted by STEvil
yepp, thats what i was going to do, plus check vgpu under load and idle on the back of the card, vmem as well i guess while im at it. and in a couple of weeks? by then the 1700 should be out :DQuote:
Originally Posted by STEvil
exactly... and well, lost coast and the far cry stuff are more like mods/patches/demos, dod source is the only thing id call a hdr game :DQuote:
Originally Posted by STEvil
and even thats arguable ^^
im using a fortron 350W (400W) psu, so dont be surprised by the dipping 12v rail with both cards under load ^^
this is an old psu with low watt rating, but its a quality psu, fortron btp series.
psu rails:
12v 12.10 idle 12.00 load
5v 5.22 idle 5.23 load
3.3v 3.38 idle 3.38 load
rails measured on the mainboard:
12v 12.10 idle 11.96 load
5v 5.22 idle 5.23 load
3.3v 3.38 idle 3.38 load
rails measured on the pciE slot: (2nd from right when having the mainboard lay flat and looking at the back of the vidcard. this and the following pins are 12v, then a couple of ground rails and then the 3.3v rail)
12v 12.06 idle 11.93 load
3.3v 3.36 idle 3.36 load
videocard volts (measured on the cap legs on the back of the card, 2 caps close to the videocard fan plug are vgpu, the other cap on the edge of the card is vmem)
vgpu 1.43 idle 1.45 load
vmem 2.10 idle 2.12 load
11.93 sounds low, the board is eating .04v from the 12v rail...
i will bump the 12v rail and see if it helps to get a higher oc... but i doubt it
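Working through the deltas in the 12 V measurements posted above:

```python
# Deltas from the posted 12 V measurements.
psu_idle, psu_load = 12.10, 12.00     # at the PSU
slot_idle, slot_load = 12.06, 11.93   # at the PCI-E slot

board_drop_idle = psu_idle - slot_idle  # ~0.04 V lost in the board at idle
board_drop_load = psu_load - slot_load  # ~0.07 V lost in the board at load
slot_droop = slot_idle - slot_load      # ~0.13 V idle-to-load droop at the slot
print(round(board_drop_idle, 2), round(board_drop_load, 2), round(slot_droop, 2))
```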
Through to ground. If there is no circuit then there is no place for the power to go at all and no reason for it to be where it is in the first place.Quote:
Originally Posted by saaya
Really skeptical after having read that.. it seems like they are saying the x1800xt (for example) draws all of its power through the PCI-E slot, but pulling 112.2w (as measured by them) is just not going to happen. Can't imagine what their numbers would have been with it overclocked...Quote:
i think those guys know what they are doing...
This is why you get a mean power first, then you measure. Of course there will be a 2-5% offset due to efficiency, but we dont need 100% accuracy. If we wanted 100% accuracy the only way to go about this would be to connect power meters inline with the ATX header and any power adaptors for the video cards (also any molexes that plug into the motherboard).Quote:
and nope, because then youd measure the efficiency ratio of your psu as well, which varies from psu to psu and ambient temp and whether you have a 50hz or 60hz outlet or 120v or 220v etc...
I dont think his numbers are right if they are measuring the power consumption of the PCI-E slot (as mentioned above).Quote:
no big impact, but this guy knows what hes doing and wanted to get as close as possible to the real numbers :D
http://www.xbitlabs.com/articles/vid...-x1000_14.html
when volts/amps encounter resistance heat is generated.Quote:
no idea what your talking about... explain :D
I intend to retest HDR with my x1600xt vs. the minimal HDR effects my 9700pro gives.. maybe i'm missing something, but its just useless and compounds an effect we already perceive. Same for motion blur in some games (namely need for speed or Day of Defeat: Source when you are near a grenade explosion).Quote:
yepp, thats what i was going to do, plus check vgpu under load and idle on the back of the card, vmem as well i guess while im at it. and in a couple of weeks? by then the 1700 should be out :D
exactly... and well, lost coast and the far cry stuff are more like mods/patches/demos, dod source is the only thing id call a hdr game :D
and even thats arguable ^^
.04v droop isnt much. The droop will probably happen after the connector. Also, the power regulators might be insufficient, much like the problem many 9600 series cards had when trying to OC both core and mem.Quote:
11.93 sounds low, the board is eating .04v from the 12v rail...
i will bump the 12v rail and see if it helps to get a higher oc... but i doubt it
I will do the mods today see where I get
correct me if im wrong but dont all sites and people measure the power consumption of the hardware, and not how much power flowed through it?Quote:
Originally Posted by STEvil
only the differential between what flowed in and what flowed out gets measured afaik. so the 40W is what was brought to the card and didnt leave it, and the only way it can leave the card would be to ionize the air around it, which would be 0.0000w id guess :D
or through heat.
where do they say the card draws all the power through the board?Quote:
Originally Posted by STEvil
as i said, they wanted to get as close to 100% accuracy as possible, why would you go for something less accurate if its possible to get better results without a big effort?Quote:
Originally Posted by STEvil
if you have a question about how they did it just email them, they always replied to my emails so far...
dude, right in the very first paragraph of the page of the article i linked you to:Quote:
Originally Posted by STEvil
READ dude, READ!Quote:
To measure how much power the graphics accelerator consumes through the external connector, we used an adapter equipped with special shunt and connectors.
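A shunt measurement like the one xbitlabs describes boils down to Ohm's law: put a small known resistance in series with the slot's supply line, measure the voltage across it, and derive current and power. A sketch with made-up example values (the shunt resistance and readings below are hypothetical, not xbitlabs' actual numbers):

```python
# Shunt method sketch: all values here are hypothetical examples.
r_shunt = 0.010              # ohms, small series resistance in the 12 V feed
v_shunt = 0.033              # volts measured across the shunt under load
v_card  = 11.93              # volts reaching the card side

current_a = v_shunt / r_shunt     # ~3.3 A through the 12 V line
power_w   = v_card * current_a    # ~39.4 W drawn through the slot
print(f"{power_w:.1f} W")
```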
O rly? :PQuote:
Originally Posted by STEvil
still dont get what you mean with
please explainQuote:
Drooping voltage means extra heat generated at the connector or somewhere in the motherboard where the restriction is, which means less clean power and lower clocks. I still say 75w is BS.. , ~30rms at best...
yeah, i tested the new far cry patch which was released 3 days ago... even at the lowest settings the blending is so strong that some parts of the level become so bright you can barely see anything, and the door buttons and laptop monitors etc that glow get so bright you cant see anything or read what the buttons say :stick:Quote:
Originally Posted by STEvil
and yeah, this could be achieved with sm2.0 hardware without any problems afaik...
expect x700 performance when you get your 1600 and bench it, otherwise you will be disappointed :D
its a bit faster than a x700 at stock, quite a bit in some situations, but its not a x800 level card...
its .13 droop which shows my psu cant keep up on the 12v rail.Quote:
Originally Posted by STEvil
correct me if im wrong, but droop means the voltage drops because the draw is so big that the circuit becomes less efficient, hence the voltage drops.
the .4v is just the circuit resistance, which is imo pretty large.
for vdimm its usually .2 and on this board theres a molex plug 1cm above the first pciE slot... so its kinda weird the traces have such a high resistance...
im discussing this with some guys on 3dcenter.org and there are 2 guys who think the cards are rather fillrate/tmu limited, which makes sense...
damien, do your cards still scale well when ocing the memory from... lets say 1.7 to 1.8ghz?
how many points in 2k3 and 2k5?
nice usage :DQuote:
Originally Posted by saaya
The gain isn't really that much.
So I can see they are fill rate limited.
If the crappy cores would oc worth a damn these cards might actually be worth it.
I tried the mods and boom.
Cards are being sent back.
Seems in xfire the boards may be pulling too much power from the slots so the board cant give it to them.
Also my 12v is being sucked dry and the ocz520 can only give so much power.
I have a x800pro that I am going to test on this board and see how it benches.
hehe i saw the year of the owl flash at ytmnd and have been waiting for a chance to use it somehow ever since ^^Quote:
Originally Posted by vapb400
the cards blew up?
somebody on 3dcenter.org said there was a 630 lock on the cards... maybe a bios lock? wouldnt be the first time ati did this...
and extra bandwidth doesnt help?
hmmmm
this all explains why these cards are so useless in cf... fillrate doesnt scale with cf and sli afaik, so the fillrate limit gets even worse with cards in cf... the only thing that gets a boost is shader power i think, and thats something these cards are good in, so... 1600s in cf doesnt make any sense... makes me wonder why ati went for so much shader power and so little tmu power... maybe tmus are expensive, ie cost a lot of transistors?
but rv530 is almost half the size of r520!
so whats taking so much die space?
the shader units?
hmmmmm maybe its the threading engine!
thats a part they cant remove from the design without changing the whole concept of the architecture, efficient use of the resources...
but still, they should have capped some shader power and invested those transistors in tmu power i guess...
too bad really...
and whats scaring me is that r580 is rumored to have only 16 tmus, the same amount of tmus r520 has!
if thats true then it will suffer from the same problems as x1600s, a fillrate bottleneck! :S
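To put the fillrate argument in rough numbers: peak texture fillrate is just core clock times TMU count. The TMU counts below (4 on the x1600's rv530, 16 on the x850xt) are from memory and worth verifying; this is an idealized one-texel-per-TMU-per-clock figure, not a benchmark:

```python
def tex_fillrate_mtexels(core_mhz, tmus):
    """Idealized peak texture fillrate: one texel per TMU per clock."""
    return core_mhz * tmus

# TMU counts assumed, not confirmed by this thread.
x1600xt = tex_fillrate_mtexels(590, 4)   # ~2360 Mtexels/s at stock
x850xt  = tex_fillrate_mtexels(520, 16)  # ~8320 Mtexels/s
print(x1600xt, x850xt)
```

If those counts are right, the x850xt has roughly 3.5x the raw texturing throughput despite the lower core clock, which fits the fillrate-limited picture.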
http://img468.imageshack.us/img468/1...neseowl3hn.jpg
that one is better :banana:
i had a similar one as a desktop background for some time, together with the "derka derka" one and the "o vreimant?" one ^^