The performance per clock formula for SPi?



K404
12-03-2007, 08:44 AM
Please can someone share it? I would PM, but not sure who to ask.

Cheers!

K

massman
12-03-2007, 09:31 AM
Do you mean: Time 1M X cpu clock frequency ?

K404
12-03-2007, 09:45 AM
Do you mean: Time 1M X cpu clock frequency ?

I didn't know it was that simple :rolleyes: Cheers :)

1Day
12-03-2007, 11:13 AM
So a higher number is better? Or what?

BeardyMan
12-03-2007, 11:23 AM
So a higher number is better? Or what?

an example

according to Vapor,

9.00secs at 5500mhz = 49500
8.91secs at 5500mhz = 49005

the lower the number, the better the efficiency.

cons
- who are we to decide what can be done?
- the setups need to match according to them: same multiplier, FSB etc etc
- the so-called number changes with new hardware.

I don't trust it, I don't like it, and I will not use it to determine whether a score is in line or not. What others do is their case and their POV, and I will respect that.

massman
12-03-2007, 11:32 AM
That number gives an overall performance ratio which is quite exact for 95% of the scores. However, since this is XtremeSystems - we're running our systems as optimised as possible - we all know that there are more variables than just the cpu clock speed.

Note that lowering the performance number is more difficult when having a higher clocked CPU. So you have to compare performance ratios with equally clocked systems when determining what system is optimised the best.

Gautam
12-03-2007, 11:35 AM
I actually have wanted to explain this in some depth. I only ask that you keep an open mind and follow me carefully.

First some disclaimers:

-A performance product by itself is not enough to make a judgment on whether a SuperPi time is dubious or not

-Contrary to what some people seem to believe, there is no "absolute perfect" performance product number, nor has anyone said there was. They improve with advances in technology, tweaks, etc

-A performance product can certainly be used as a reference to compare the efficiencies of two times, however there are many things that need to be kept in mind before doing so

-Strictly speaking, this is not a formula. It is simply a relationship between CPU speed and SuperPi time that will always hold. How you use it is up to you.

As massman stated, the relationship is simply CPU speed * SuperPi time. The theory behind this is that SuperPi times scale inversely proportionally to CPU speed. There is no performance per clock formula, since it isn't a linear relationship. (A linear relationship would be one where performance increases proportionally to speed, i.e., if 3000MHz scored 20 seconds, 6000MHz would score 10 seconds. This is not the case with SuperPi.)

We can extend this further to say that CPU speed * SuperPi time = constant as long as you hold everything constant besides FSB. Using this relationship, you can actually predict what CPU speed you need to hit a certain time. (This is one of the tricks of the trade if you're doing the 32M low clock challenge) I invite everyone to try this. Pick any array of settings. Run SuperPi and multiply the time by your CPU speed and note down the result. Now, fire Clockgen or SetFSB up, bump the FSB up by several ticks, close it and run SuperPi again. Multiply this new SuperPi time by your new CPU speed, and you should see that the result is almost identical to the number you noted down previously.

This result can be used to guess what time you'd get at a certain CPU speed.

For example, if you tried running 9*400 1:1 4-4-4 and scored 14.109, and then reran at 9*450 1:1 4-4-4 and scored 12.547, you could make an educated guess that at 9*500 1:1 4-4-4, you would score about 11.29.

12.547 * (9*450) = 50815
50815/(9*500) = 11.292

You can also try and figure out what CPU speed it'll take you to hit 10.000 secs.

Simply,

50815/10.000 = 5081.5MHz, or 564.6*9 1:1 4-4-4

Now, as you'd intuitively feel, if you shaved your timings down or added tweaks, it'll take you a lower CPU speed to hit 10.000.

Let's say you tighten the memory down to 4-4-3 and can now hit 10.000 at 5060MHz.

Then 5060MHz * 10.000 sec = 50600, or a lowered performance product. This is why lower is generally better.

Notice how I am only comparing 9x 1:1 4-4-4 all across the board. This is the only way you can use it to get precision in this manner.
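The arithmetic above is easy to script. Here's a minimal Python sketch of the relationship (the helper names are mine; the numbers are from the 9x450 example above):

```python
def performance_product(time_s, mhz):
    # PP = SuperPi time (seconds) x CPU clock (MHz);
    # roughly constant for a fixed multiplier, divider and timings.
    return time_s * mhz

def predict_time(pp, mhz):
    # Predicted SuperPi time at a new clock, same setup.
    return pp / mhz

def clock_for_time(pp, target_s):
    # Clock (MHz) needed to hit a target time, same setup.
    return pp / target_s

pp = performance_product(12.547, 9 * 450)   # ~50815 for the 9x450 run
print(round(predict_time(pp, 9 * 500), 3))  # predicted time at 9x500, ~11.292 s
print(round(clock_for_time(pp, 10.0), 1))   # ~5081.5 MHz for a 10.000 s run
```

Bumping only the FSB and re-running, as Gautam suggests, is the way to confirm the product really stays constant on your own setup.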

Soak that up...gotta go to class now, but I'll have more to add.

andre X_X
12-03-2007, 11:54 AM
well explained :up:

G H Z
12-03-2007, 11:54 AM
I don't trust it

Statistics doesn't mean anything to you? It gives you a simple & accurate way to compare scores and their efficiencies. Of course there is a difference between 45nm and 65nm, that's not news. That's why you don't compare 45nm with 65nm.

KTE
12-03-2007, 12:07 PM
Hmm... it's very basic science. CPU MHz x Time Taken = clock cycles taken to complete the benchmark.

That's where all this stems from: processor engineering. If you know someone in the field, ask them; they'll know this quite easily.

Because a CPU works in hertz, which is cycles per second, you are computing how many clock cycles a CPU takes to complete a fixed benchmark (1M).

So for a benchmark that takes 40 seconds at 2GHz you multiply 40*2E9 and get 8E10 cycles. That's how many cycles the processor took to complete the benchmark.
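KTE's cycle count is a one-liner in code (a trivial sketch of the same arithmetic, nothing more):

```python
time_s = 40            # benchmark run time in seconds
freq_hz = 2e9          # 2 GHz = 2e9 cycles per second
cycles = time_s * freq_hz
print(cycles)          # 8e10 cycles to complete the benchmark
```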

Its main purpose in the industry is to compare different processors and their efficiency.

Enoc
12-03-2007, 12:16 PM
from Vapor
Well, SPi doesn't scale linearly, it's inversely proportional. You can't linearly generalize: 100MHz increase is equal to a given time decrease. However, you can say this: MHz * time ~= X, where X is constant for any given setup.

For a high speed, VERY well tweaked 975X + Conroe system, X is equal to about 50500. Remember this.


this is what i said... guess you missed it K404 over at... ;)

the formula is based on results of 4,200MHz and up, taking as its parameter the performance of the 975X chipset; the formula was never based on runs at 3,600MHz and up ;)

here at XS there are 5-8 that break <50,000... from the lowest, eva2000 at 3599MHz, to the highest, which is Learn, Gautam and others at 3608MHz.
and Chris's run of 13.781 at 3603 yields a PP of 49,652, which from Vapor's formula or statistic is, guess what? impossible.. but no one made a big deal out of it... :rolleyes: just let's leave it like that...

for the formula to be right he should use the performance parameter of each board... which we know is X38 > P35 > 965rev2 (+-) = 975X in clock-per-clock performance, and then you have to add OS and tweaks... some use fresh installs without taking anything out and others use stripped OSes; when it comes to tweaks, everybody has their own version, registry... and for example copywazza, there are more than 10 variations... so how can you generalize a formula when there are a lot of variables that you don't take into account?

well tweaked is not the best description of what it takes into account...

you can't judge all the other motherboards' PP just based on the 975 chipset; you've got to revise the formula each time there is a new chipset that has better clock-per-clock performance, not only when there is a new transition of technology (65nm to 45nm).

my thoughts...

massman
12-03-2007, 12:50 PM
here at XS there are 5-8 that break <50,000... from the lowest, eva2000 at 3599MHz, to the highest, which is Learn, Gautam and others at 3608MHz.
and Chris's run of 13.781 at 3603 yields a PP of 49,652, which from Vapor's formula or statistic is, guess what? impossible.. but no one made a big deal out of it... :rolleyes: just let's leave it like that...



Note that lowering the performance number is more difficult when having a higher clocked CPU. So you have to compare performance ratios with equally clocked systems when determining what system is optimised the best.

Breaking the 50K performance ratio at 3600MHz is not impossible, just very good tweaking; Breaking 50K performance ratio at 4500MHz is close to impossible and only few have done it.

BeardyMan
12-03-2007, 12:55 PM
Breaking the 50K performance ratio at 3600MHz is not impossible, just very good tweaking; Breaking 50K performance ratio at 4500MHz is close to impossible and only few have done it.

then it's not impossible or close to. :D

DDR3 can trash the whole principle as well ;)

massman
12-03-2007, 01:13 PM
Compare DDR3 and DDR2 performance ratios, please, and report back.

KTE
12-03-2007, 01:41 PM
Breaking the 50K performance ratio at 3600MHz is not impossible, just very good tweaking; Breaking 50K performance ratio at 4500MHz is close to impossible and only few have done it.

When you compare processor efficiency with one another you have to keep everything else fixed and not a variable -> or you cannot ascribe it as "processor efficiency".

Processor efficiency simply means if Person A has CPU X with the EXACT same OS/hardware/settings (every single one of them -> they share everything) as Person B, but while Person B uses the same hardware/OS he changes to his own CPU Y, which is the same model/frequency/settings as CPU X of Person A.

When he runs the exact same bench and method, every single tidbit the same, he ends up with a faster time for CPU Y than CPU X.

That can only be ascribed to processor efficiency. The rest cannot be and usually never is, because processors have limits to their peak theoretical performance. The only way you can increase the known efficiency is if the efficiency most people were getting was not 100% of the physically possible, and so you edged a little closer to 100% efficiency and beat others. Otherwise it's scientifically impossible, like you coming out of a black hole right now. ;)

Whether someone beats a time and is called a cheat and so on is usually up to the masses to lobby against. If they like and know the person well, it'll be accepted and praised; if not, it won't be, and the person will be slandered instead. None of that can prove or disprove whether someone cheated or which results are correct. The only way to do that is if you asked the processor manufacturer to calculate for you the maximum theoretical potential of CPU X in a benchmark at settings Y. Then you can compare to that and know whose time is more efficient.

Did you see my Celeron D 1M time after CDT? That's physically impossible for the CPU alone to get, but it wasn't the CPU which made that time quicker, so it becomes possible. My P4 3GHz 1MB L2, which is usually at least 14 seconds ahead in 1M, is around 9 seconds slower than that CDT Celeron time. :yepp:

massman
12-03-2007, 01:54 PM
When comparing two systems which ran superpi 1M, you have two options:

1) You're interested in which system is the faster.

=> You can only compare the efficiency when keeping all variables constant. The difference between both systems will be very, very small.

2) You're interested in who has the best optimised system in combination with the faster hardware setup.

=> The efficiency "formula" gives a good image of what the faster setup is. Based on small differences, you can't make any decision on whether one system is slower or not. Though, you CAN make a conclusion based on 53k vs 47k efficiency. The first one will be a (very) badly tweaked system, the second one is impossible and cheated/faked.

Once again, you need to compare with systems that have the same (or close to) clock frequency. Then you're going to be able to see what effect DDR2/3 has, what the effect of certain OS tweaks is.

The performance efficiency does NOT have to be determined by the manufacturer at all, as the benchmark community has access to an insane amount of results to test the formula. It works, that's for sure, but it's not a way to determine if someone is a cheater when dealing with very small error margins. Like you said, KTE, calling someone a cheater has more to do with human nature's tendency to dislike anything/anyone that is better.

G H Z
12-03-2007, 03:18 PM
When you compare processor efficiency with one another you have to keep everything else fixed and not a variable -> or you cannot ascribe it as "processor efficiency".

Processor efficiency simply means if Person A has CPU X with the EXACT same OS/hardware/settings (every single one of them -> they share everything) as Person B, but while Person B uses the same hardware/OS he changes to his own CPU Y, which is the same model/frequency/settings as CPU X of Person A.

When he runs the exact same bench and method, every single tidbit the same, he ends up with a faster time for CPU Y than CPU X.

That can only be ascribed to processor efficiency. The rest cannot be and usually never is, because processors have limits to their peak theoretical performance.

Just one thing to note KTE, pp does not strictly measure processor efficiency - it's quite simply a measurement of a given system's 1M or 32M performance only. Memory performance, software, chipset, drivers, tweaks and benching method are all in the mix. The real value (and accuracy) of pp is seen as the number of results increases. Given enough, pp paints a very clear picture of below average, typical or very efficient system performance at the hands of an experienced bencher. It's actually a great way to gauge performance as you're benching; take your memory setup and sub-timings for instance.

Thus when you have results that fall outside of all the previous results by a large margin, they are quite naturally going to be questioned.

Gautam
12-03-2007, 11:01 PM
Okay, now to deal with the topic of comparing performance products across different configurations.

As I stated above, for a given configuration, you'll have a constant pp as you scale FSB up.

For now once again I'm going to always assume 1:1 and a constant set of timings. Holding that, another observation can be made. Lowering the multiplier will also lower the performance product. As the memory bandwidth grows in proportion to the CPU speed (which is essentially what the multiplier is a ratio of), so does the efficiency. 8x400 1:1 4-4-4 will have a lower performance product than 9x400 1:1 4-4-4. Based on that, another very intuitive extension can be made. 8x450 1:1 4-4-4 will have the same performance product as 8x400 1:1 4-4-4. Therefore, 8x450 1:1 4-4-4 will have a lower performance product than 9x400 1:1 4-4-4 holding everything else constant.

This is why you can use the pp to gauge "efficiency" at a constant CPU speed. 8x450 1:1 4-4-4 is a more efficient configuration than 9x400 1:1 4-4-4.

This segues into an issue that's been brought up often. Why, if people are doing ~50000 at 3.6GHz, is there anything odd about it also being done at 5.5GHz? Let's say that at 450x8 1:1 4-4-4, you score 13.906, resulting in a performance product of 50062.

Quite feasible.

However it should be quite apparent that such a scenario would not be possible at 5.5GHz. As I've stated, the performance product remains constant for a given multiplier and memory configuration.

This means that in order to score the same 50062 pp at 5.5GHz, everything must be held constant with the exception of the front side bus. We must stay at 8x, 1:1 4-4-4. This would result in a front side bus of 687.5MHz. If one can do 687.5*8 1:1 4-4-4, then 50062 would theoretically be possible at 5.5GHz. However, quite obviously that is, for all practical purposes, beyond the realm of possibility.

This is why, at higher CPU speeds, performance products generally increase. The multiplier must be increased, so there is proportionally less memory bandwidth driving the CPU speed. Efficiency decreases.
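As a sanity check on the 5.5GHz scenario above, here is the same back-of-the-envelope calculation in Python (the variable names are mine):

```python
pp = 50062            # performance product from the 450x8 1:1 4-4-4 run above
multiplier = 8
target_mhz = 5500     # 5.5 GHz target

time_needed = pp / target_mhz         # SuperPi time required to keep the same PP
fsb_needed = target_mhz / multiplier  # FSB required at the same 8x multiplier

print(round(time_needed, 3))  # ~9.102 s
print(fsb_needed)             # 687.5 MHz FSB -- not practically reachable
```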

Once again, there is no formula. The "optimal" pp as Vapor stated it then is purely empirical and only based on the results people put out. It is also a subjective measure, and is an opinion so to speak. The pp itself is concrete. It's up to you to decide what a "good" pp is. Vapor made a very educated decision based on analyzing over 200 hwbot results all relatively close to 5 GHz. That is all. Naturally as chipsets get better, and faster RAM comes out, we see lower performance products. That has little to do with the concept of SuperPi scaling.

G H Z and massman made good points. I would just like to clear things up for everyone so that they have a better understanding in case they wish to debate. Right now there are some misunderstandings.

dinos22
12-03-2007, 11:35 PM
Breaking the 50K performance ratio at 3600MHz is not impossible, just very good tweaking; Breaking 50K performance ratio at 4500MHz is close to impossible and only few have done it.

how about 45K at 5GHz :D ;)

the problem is that you cannot compare relative performance with these numbers, and when you guys start quoting them in threads being debated, many don't really understand what you are talking about and treat them as relative values - bad news

it's a healthy discussion and hopefully the thread can be linked in future when these things are debated, BUT people discussing it here need to qualify what the numbers apply to specifically, as sometimes we dig these things up next year and so on and it stops making sense - as i showed with my first line above, i was quoting an untweaked Yorkfield and DDR3 system

T_M
12-03-2007, 11:54 PM
The real value (and accuracy) of pp is seen as the number of results increases. Given enough, pp paints a very clear picture of below average, typical or very efficient system performance at the hands of an experienced bencher. It's actually a great way to gauge performance as you're benching; take your memory setup and sub-timings for instance.

Thus when you have results that fall outside of all the previous results by a large margin, they are quite naturally going to be questioned.

This couldn't have been said better, and is the very point of performance product use.
My understanding is that pp is most useful in identifying the red points as below, which are simply far out of the norm, which means either something completely new or completely wrong. Nobody is saying that pp will give you exact numbers, but when 5000 results all fall statistically in the same distribution and then 1 or 2 are way out, those ones should be looked at more thoroughly.

http://img.photobucket.com/albums/v310/T_M/untitled-1.jpg
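T_M's point about flagging far-out results can be illustrated with a toy z-score filter. This is entirely my own sketch with made-up numbers, not the actual hwbot data or anyone's real method:

```python
import statistics

def flag_outliers(pps, z_cut=2.0):
    # Flag PP values more than z_cut standard deviations from the mean.
    mu = statistics.mean(pps)
    sigma = statistics.stdev(pps)
    return [pp for pp in pps if abs(pp - mu) > z_cut * sigma]

# toy data: a cluster of plausible PPs plus one far-out result
scores = [50200, 50450, 50100, 50600, 50350, 50500, 50250, 46000]
print(flag_outliers(scores))  # the 46000 run is the one worth a closer look
```

With thousands of results instead of eight, a robust statistic (median and MAD) would be less distorted by the outliers themselves, but the idea is the same.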

KTE
12-04-2007, 12:14 AM
I've read explanations of this before and understand what is being used quite well, thanks to the explainers. What I was saying is what GHZ/Gautam mentioned in their posts (non-verbatim): "our pp data is gathered by many users testing" -> thus an estimate at best.
There are some v.good points that I agree with, because they are logical. But there's still a few that don't make mathematical or scientific sense to me because they are conclusions based on "assumption" -> postulates. With so much user testing, much of this will be correct and the accuracy of "pp values" will increase with a larger database, but it could very easily be wrong in some cases which are not reproduced or achievable by many, so again "it's all relative and not absolute". These cases are especially of importance where you have "the fastest times being compared".

One such assumption is "x time is the fastest possible at y configuration". When you haven't worked out the theoretical maximum of a given architecture, that's impossible to conclude. You can have hints at best, and even I can guess. I understand processor architecture and also that inside a processor you have hundreds of possible bottlenecks. A little more efficiency in the way the prefetch algorithm brings data into the L1 cache can make your time slightly faster (all things kept constant), let alone anything else. The converse is true too.

So what's the current understanding of how efficient 3535.1...3oh6....12.59.953..7x505 ...1010mhz...8-7-6-18....1/2...P35 (http://www.xtremesystems.org/forums/showpost.php?p=2493374&postcount=1214) time is in terms of percentages?

100%?

Can anyone else replicate that time?

Meaning, does anyone of the competitors believe it can be beat?

What about beat by 0.4s at the same config? Do you believe this is possible?

Because if you deem that time 100% efficient (based on pp values) for that config, then anything faster you will never believe.
If you deem it less than 100% of possible efficiency, then it can be beat in your books.

As a side note, can anyone tell me how you know for sure who ran which time at which clock speeds and settings?

T_M
12-04-2007, 12:29 AM
Is there a pp for 32M?

KTE
12-04-2007, 12:36 AM
Yes. As I mentioned, this is a basic computer architecture calculation. Anything which works out the cycles a processor takes to complete a job needs two values (time taken in seconds to complete, and processor speed).

Seconds x MHz = PP

Where:

MHz = Multi x FSB

Whoever used this for SuperPi just took the well known mathematical formula from computing. The PP is nothing but the number of cycles taken by the processor to complete the task.

1Day
12-04-2007, 02:29 AM
Wow - thanks to all who posted here it has been really interesting and most informative. :up:

Gautam and others thanks for taking the time to make clear what you mean and what your understanding of this perf number is.:clap: :clap:

K404
12-04-2007, 06:04 AM
KTE: The formula you state is right, but used completely out of context here.

SPi does not rely solely on CPU MHz. If it was, there would be 1 single time for a given clockspeed.

KTE
12-04-2007, 06:39 AM
KTE: The formula you state is right, but used completely out of context here.

Did you read what GHZ and Gautam mentioned? That's the formula they use. It's not out of context at all, and I know this from Penrose, who was one of my professors for 2 years. They used it to analyze CPU efficiency when comparing one CPU to another at the uni. If you understand what I'm saying, then it'll be quite clear: it works out the processor cycles taken to complete a fixed benchmark.

That will always stay the same for a given processor as long as you keep every single other thing constant. It can be used in reverse to find out what clock speed someone ran the benchmark at as long as you know the time taken.

That's what these guys are working out but giving it a different name (PP) and meaning to (etc).

But it has flaws like you said: the processor is not the only thing in the equation here. You have dozens of variables which can affect the SPi time.

SPi does not rely solely on CPU MHz. If it was, there would be 1 single time for a given clockspeed.

That's exactly what I've been saying on why the formula is not accurate enough to tell a cheater from a non-cheater. ;)

massman
12-04-2007, 06:58 AM
But it has flaws like you said: the processor is not the only thing in the equation here. You have dozens of variables which can affect the SPi time.
That's exactly what I've been saying on why the formula is not accurate enough to tell a cheater from a non-cheater. ;)

It gives an overall image, you can easily spot cheaters if the PP is too low.

massman
12-04-2007, 06:58 AM
how about 45K at 5GHz :D ;)

With the new yorkies, close to 45k is quite good :up:

KTE
12-04-2007, 10:05 AM
Yep, it's an estimate. It can only work for scores very high up, but not always.

Meaning: any person can use it to work out a combination that gives a specific PP, run the bench at faster clocks, then clock down with clockgen/setfsb before showing the result, so the run looks more efficient than it really was; their actual PB might have been a much slower time. Badly loopholed. If I switch on my Intel system soon I'll show you what I mean. :shakes:

Gautam
12-04-2007, 05:29 PM
Whether someone beats a time and he's called a cheat and so on, is usually up to the masses to lobby against. If they like and know the person well, it'll be accepted and praised and if not, it won't be and the person slandered instead.

Indeed. No arbitrary number is needed to see the issue at hand...one look at the graph T_M posted is enough to see where the issue is. But I'd rather we steer away from that.

Btw, for OPB, while the Gregory-Leibniz series does converge to Pi/4, SuperPi does not use it to calculate Pi. It uses the Gauss-Legendre (http://en.wikipedia.org/wiki/Gauss-Legendre_algorithm) algorithm, which is much more efficient in practice.
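For the curious, the Gauss-Legendre iteration Gautam links is short enough to sketch in a few lines of Python. This is a float-precision toy to show the algorithm, not the arbitrary-precision arithmetic SuperPi actually performs:

```python
import math

def gauss_legendre_pi(iterations=4):
    # Gauss-Legendre (arithmetic-geometric mean) iteration.
    # It converges quadratically, so a handful of steps already
    # exhausts double-precision floats.
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # agrees with math.pi to well under 1e-9
```

Each iteration roughly doubles the number of correct digits, which is why it beats a term-by-term series like Gregory-Leibniz by such a wide margin.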

tiborrr
12-04-2007, 05:40 PM
With the new yorkies, close to 45k is quite good :up:
It's hard to compare a well tweaked run done at performance level (e.g. FSB:DRAM strap) 5 and one done at PL 7... Some memory dividers are just bugged and cannot work as low as others.

Imho the performance ratio is only a reliable indicator of a 'healthy' run when you know the performance level it was run at.

dduckquack
12-04-2007, 07:31 PM
predictive at most, nevertheless helpful. thanks for the explanation gautam

massman
12-04-2007, 11:29 PM
It's hard to compare a well tweaked run done at performance level (e.g. FSB:DRAM strap) 5 and one done at PL 7... Some memory dividers are just bugged and cannot work as low as others.

Imho the performance ratio is only a reliable indicator of a 'healthy' run when you know the performance level it was run at.

That doesn't change the fact that 45K is very good. Try it, you will have trouble beating it :)

CapFTP
12-05-2007, 01:23 AM
tuned....i started being interested in this topic after I ran this pi and it seems it's quite a good efficiency....

what do you think about it ?

http://news.tecnocomputer.it/stuff/File/Capftp/QX9650/8.656.JPG

this means 8.656x5220=45184 ?

i ask because if you take a look at HWbot it seems quite good relative to its neighbourhood

http://www.hwbot.org/hallOfFame.do?type=result&applicationId=3

Rol-Co
12-05-2007, 01:54 AM
but can be better.... :D

http://rol-co.nl/got/york4ghz.jpg

Efficiency Asus Maximus Formula.

CapFTP
12-05-2007, 03:07 AM
yes , fine....

could it be a trend for highest clocks ?

Rol-Co
12-05-2007, 03:19 AM
just don't compare to 5ghz+; at higher clocks you lose some efficiency, it always was that way...

dinos22
12-05-2007, 03:39 AM
just don't compare to 5ghz+; at higher clocks you lose some efficiency, it always was that way...

r'ly?

not tweaked at all

ram nowhere near maxed out

http://img80.imageshack.us/img80/1119/5ghz1msuperpi9031sok0.png

this would easily be below 45K with maxmem and faster RAM or even 6-5-5-x timings :)

massman
12-05-2007, 06:54 AM
Performance Product of the Yorkfield cpu's

Dino, would you please provide me a run with a PP < 45k ?

Rol-Co
12-05-2007, 08:59 AM
Dinos22
i actually meant when you're maxed out on cpu.... on 1 core; look at the top scores on hwbot, nobody came close to 45k round.

nice eff for ddr3 btw... do you think x38 is faster with ddr3 instead of p35?

dinos22
12-05-2007, 06:26 PM
Performance Product of the Yorkfield cpu's

Dino, would you please provide me a run with a PP < 45k ?

i haven't got a yorkfield any more

just waiting for E8500 chips to show up so we can start getting into dual core superpi action

but as i've shown going below 45K will not be a problem even at 5GHz+

44K is not possible for me but that doesn't mean it is not possible for others (or i should say not possible with me with the current tweaks or lack of them)

with the CDT tweak working the way OPB demonstrated in 1M, 44K flat looks possible. The only problem is OPB isn't explaining it properly, or isn't showing all the info even to closed forum sections, as i have not seen a single person repeat 1M performance at his levels :(

i haven't made any major ground on normal copy wazza compared to the CDT tweak instructions either :confused:

dinos22
12-05-2007, 06:29 PM
Dinos22
i actually meant when you're maxed out on cpu.... on 1 core; look at the top scores on hwbot, nobody came close to 45k round.

nice eff for ddr3 btw... do you think x38 is faster with ddr3 instead of p35?

you are right about efficiency dropping off and DDR3 helps in any case

however i feel with the RAM timings hipro had on his run he should be running more efficiently (maybe it's the mobo :p: :D)

for those that don't know he had DDR3 running at 1800MHz 6-6-5-x and super tight subtimings with 58xxMHz

[content removed due to thread cleaning] - STEvil

massman
12-06-2007, 02:09 AM
but as i've shown going below 45K will not be a problem even at 5GHz+

I'm not convinced :), sub 45k is a problem ;)

Gautam
12-06-2007, 06:12 PM
Nah, I think its best we not discuss it all anymore, period. I guess I was more exasperated than anything. This thread should deal simply with performance product calculations. I would prefer seeing it cleaned myself, but that's not up to me.

STEvil
12-06-2007, 06:30 PM
I agree, cleaning would be nice.. so i'm going to lock it for a few minutes and clean it up some.

edit

Thread cleaned, where you guys go from here is your decision.

Rol-Co
12-06-2007, 07:21 PM
I'm not convinced :), sub 45k is a problem ;)

Closest i could get ....Asus Maximus+ddr2 :(
http://82.173.172.10/Rol-Co/4ghz-pi-air-eff.JPG
45016

used eram/realtime/memset only
win xp sp2
With a bit luck though i must say, most of the runs are 11.265.. :D

T_M
12-06-2007, 08:01 PM
I'm still floating around in the mid 46k's, but that was rushed testing on an unoptimised system

dinos22
12-06-2007, 08:20 PM
Closest i could get ....Asus Maximus+ddr2 :(
http://82.173.172.10/Rol-Co/4ghz-pi-air-eff.JPG
45016

used eram/realtime/memset only
win xp sp2
With a bit luck though i must say, most of the runs are 11.265.. :D

eram doesn't work that well for me for 1M

why not try maxmem instead

KTE
12-06-2007, 08:55 PM
Thanks for cleaning the thread. Makes very good sense. :)

Let's wait for some Wolfdale testing.

So what's the "maximum PP" or "efficiency" as some of you call it for a 450x8 1:1 4-4-4-4 2T on C2D?

dinos22
12-06-2007, 09:16 PM
it cannot be predicted man
fastest times around 3.6GHz are around 13.8xx, which is below 50K

KTE
12-06-2007, 09:40 PM
Thanks dinos. ;)

T_M
12-06-2007, 09:44 PM
yeah i thought it was in the 49k's

Rol-Co
12-07-2007, 02:14 AM
eram doesn't work that well for me for 1M

why not try maxmem instead

maxmem doesn't make any difference, it never did over here. :(

Johnny Bravo
12-07-2007, 02:34 AM
My findings on the matter

http://www.ocxtreme.org/forumenus/showthread.php?t=332


(sorry Dinos no skim reading stops this time :p: )

KTE
12-07-2007, 03:00 AM
My findings on the matter

http://www.ocxtreme.org/forumenus/showthread.php?t=332

Very nice comparison, appreciated. :)

NVM

tiborrr
12-07-2007, 04:16 AM
Since we're talking about the PPC ratio - what is a good PPC ratio for E2xx0 chips? Pi 1M that is.

Eldonko
12-07-2007, 09:49 AM
Very nice comparison, appreciated. :)

NVM

agreed, JB did a great job w/ that thread. :toast:

tiborrr
12-09-2007, 02:12 PM
Since we're talking about the PPC ratio - what is a good PPC ratio for E2xx0 chips? Pi 1M that is.

Anyone?

KTE
12-09-2007, 04:25 PM
Anyone?
If I knew I'd tell ya, but I don't, because calculating pi fastest doesn't interest me. :)

I think 58k is around decent for 1M and around 2,885k for 32M with those chips.