
Thread: A warning about Sandra 2007

  1. #1
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349

    A warning about Sandra 2007

    I don't know about the other tests, but be forewarned that the results may not be as accurate as you think.

    For example, I just benched my craptop CPU (Mobile Athlon64 3000+) for FLOP count, and under the new program, it says that I am getting 3189/5202 (core FPU/iSSE2 MFLOPS).

    In Sandra 2005 SR3, I get 2815/3626 (core FPU/iSSE2 MFLOPS), and I believe that the older version is actually a more accurate representation than the newer one.

    Therefore, be careful with the results coming from the new version, because it is NOT an apples-to-apples comparison with the older ones (at least from the little bit that I've seen thus far, and it only takes one bad result to make you question the rest).

    (There's no freakin' way to account for the ~40% difference, even if I COULD overclock a laptop 40%.)
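    Just to put numbers on that ~40%, working from the figures above:

    (5202 - 3626) / 3626 ≈ 43.5% on the iSSE2 MFLOPS
    (3189 - 2815) / 2815 ≈ 13.3% on the core FPU MFLOPS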
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  2. #2
    Xtreme Enthusiast
    Join Date
    Nov 2004
    Location
    Denmark
    Posts
    817
    It might be using a different way to test the components? I remember using the wrong version of Sandra compared to what others were using, and my results were way too low... As long as there is clear agreement on which version to use, I don't see a problem

    Best Regards
    Silverstone RAVEN RV02|
    Core i5 2500K@4.4GHz, 1,300V|
    Corsair A70|ASUS P67 Sabertooth|Creative X-Fi Titanium Fatal1ty|
    Corsair Dominator DDR1600 4x4096MB@DDR3-1600@1.65V|Sapphire HD7970 3GB 1075/1475MHz|
    Corsair Force F120 120GB SSD SATA-II, WD Caviar Black 2x1TB SATA-II 32mb, Hitachi 320GB SATA-II 16mb|Silverstone DA750 750w PSU|

  3. #3
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by DTU_XaVier
    It might be using a different way to test the components? I remember using the wrong version of Sandra compared to what others were using, and my results were way too low... As long as there is clear agreement on which version to use, I don't see a problem

    Best Regards
    Well, that's fine and all, but as of 2005 SR3, the results from versions up to that point (probably dating as far back as 2001) are in agreement with the FLOP counts published on the Top500 list.

    Granted, I admit that they are different methods of measuring, but comparison against the LINPACK/LAPACK numbers used for the Top500 puts the x86 results in line, and they make sense.

    What it DOESN'T explain is how the program, from the same company, can have at minimum a +/- 17.5% difference in the results.

    What they're reporting as core FPU is pretty close to being the same as with iSSE2. How can that be?

    The results don't make sense to me at all. Not in absolute terms. Not in relative terms. Not in comparison with LINPACK/LAPACK results (per Top500).
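    For illustration only (my own rough sketch, nothing pulled from Sandra's actual code), here's why a packed iSSE2 number should normally sit well above the scalar core FPU number on the same chip: each 128-bit SSE2 instruction operates on two doubles at once, so the packed loop does twice the floating-point work per instruction.

        #include <emmintrin.h>   /* SSE2 intrinsics (illustrative only) */

        /* Scalar dot product: one multiply + one add per element = 2 FLOPs each. */
        double dot_scalar(const double *a, const double *b, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += a[i] * b[i];
            return sum;
        }

        /* Same dot product with packed SSE2: each 128-bit instruction handles
           two doubles, so every loop iteration does 4 FLOPs instead of 2.
           Assumes n is even; unaligned loads keep the sketch simple. */
        double dot_sse2(const double *a, const double *b, int n)
        {
            __m128d acc = _mm_setzero_pd();
            for (int i = 0; i < n; i += 2)
                acc = _mm_add_pd(acc, _mm_mul_pd(_mm_loadu_pd(&a[i]),
                                                 _mm_loadu_pd(&b[i])));
            double lanes[2];
            _mm_storeu_pd(lanes, acc);
            return lanes[0] + lanes[1];
        }

    So when the scalar and packed figures come out nearly equal, that by itself suggests something about the measurement is off.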
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  4. #4
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Update:

    This is a post that I wrote (pulled from the hwbot forum) about some of the significant flaws in the latest SiSoft Sandra versions, from 2007 and up.

    Quote Originally Posted by mtzki
    You can use 2007 if you like. Many people have trouble at getting the 2005 version to start.

    The boints may be removed from the ranking. Sandra isn't very suitable for hwbot mainly due to the many different versions which give different scores.
    According to the official statements, the hwbot Sandra score is supposed to be a floating-point (FP) benchmark.

    If you look at the results that are being posted and understand what they actually are, they are MIPS figures (millions of instructions per second), NOT MFLOPS (millions of floating-point operations per second).

    Therefore:

    a) it's no wonder people get dramatically varying results from one version to another, because the metric being tested for isn't accurately measured,

    and b) it's not testing what it's stated to be testing.
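
    To make the distinction concrete, here is a minimal sketch (mine, not anything Sandra actually runs) of how an MFLOPS figure is produced: time a loop whose floating-point operation count you know, then divide the count by the elapsed time. A MIPS figure counts instructions of any kind, usually from integer code, so the two numbers are not interchangeable.

        #include <stdio.h>
        #include <time.h>

        #define N    1000000     /* elements */
        #define REPS 100         /* repeat so the run is long enough to time */

        int main(void)
        {
            static double x[N];
            double sum = 0.0;

            for (int i = 0; i < N; i++)
                x[i] = (double)i * 0.5;

            clock_t t0 = clock();
            for (int r = 0; r < REPS; r++)
                for (int i = 0; i < N; i++)
                    sum += x[i] * 1.000001;   /* 1 mul + 1 add = 2 FLOPs */
            clock_t t1 = clock();

            double secs   = (double)(t1 - t0) / CLOCKS_PER_SEC;
            double mflops = 2.0 * N * REPS / secs / 1e6;  /* known FLOP count / time */
            printf("%.0f MFLOPS (checksum %g)\n", mflops, sum);
            return 0;
        }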

    I just ran a test on my quad Opteron 870 machine (2.0 GHz, dual-core, 8 cores total) using Sandra 2001se.

    On the old version, I got 24864 MIPS, and 32129 MFLOPS.

    On Sandra 2005 SR1, I get 77895 MIPS, and 32121 MFLOPS.

    At least the ACTUAL result that this benchmark CLAIMS to measure is consistent, despite the system being 5 years NEWER than the oldest benchmark version I was able to test with, and within a year of the latest.

    (I do not have Sandra 2007 results available at this moment).

    MIPS has very little meaning. MFLOPS is the metric that the entire scientific and high-performance computing (HPC) community goes by, including the Top500 ranking.

    That also implies the following:

    1) that people should change what they're posting/reporting to something that actually has meaning,

    2) that the hwbot committee would have to parse the results to ensure compliance (which I seriously doubt would happen), or

    3) that this is a completely, utterly useless benchmark, with zero value whatsoever, other than letting people post screencaps of their Sandra results so that an actual database of performance can be built/developed.

    *edit*
    I think that people would have more luck running Sandra 2005 SR1, which I believe is still available at least from www.guru3D.com

    *edit*
    Sandra 2007 SR1 (2007.8.10.105)
    59298 MIPS
    43545 MFLOPS

    Sandra 2007 XL (2007.1.11.17)
    59288 MIPS
    43604 MFLOPS

    I've also mentioned before (elsewhere as well) that I don't trust any of the 2007 results, because suddenly, just by changing ONE version of the software, my system is now 30% faster???

    It doesn't make any sense that a program that's 5 years old can be within ~0.025% of a program released in 2005, but all of a sudden my system just "magically" happened to be 30% faster?

    I'm sorry, but that just doesn't fly/jive with me.

    The results are way outside of acceptable tolerances. On top of that, they aren't in line with any of the other standard performance metrics, despite the differences in system architecture, OS, compilers, usage, etc.

    *edit*
    Here are the latest benchmark results using my dual Opteron 246 (2.0 GHz, single core)

    Sandra 2001se (2001.3.7.50)
    11421 MIPS
    5542 core FPU MFLOPS
    7982 iSSE2 MFLOPS

    Sandra 2005 SR1 (2005.3.10.50)
    18292 MIPS
    6224 core FPU MFLOPS
    8121 iSSE2 MFLOPS

    Sandra 2007 SP1 (2007.8.10.105)
    14378 MIPS
    8179 core FPU MFLOPS
    11716 iSSE2 MFLOPS

    Sandra 2007 XL (2007.1.11.17)
    14381 MIPS
    8190 core FPU MFLOPS
    11335 iSSE2 MFLOPS

    From 2001se to 2005 SR1 = 1.74% difference.

    From 2005 SR1 to 2007 SP1 = 44.27% difference!!!
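
    (Those percentages track the iSSE2 MFLOPS figures: (8121 - 7982) / 7982 ≈ 1.74%, and (11716 - 8121) / 8121 ≈ 44.27%.)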
    Last edited by alpha754293; 12-12-2006 at 06:41 PM.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  5. #5
    hwbot crew
    Join Date
    Jun 2002
    Location
    Belgium!
    Posts
    880
    What to do with the hwbot sisoft ranking?

    I'd say only allow 2007, but drop hwboints.
    HTPC (win xp): Turion MT-30 @ 2Ghz | NF4 | XFX 7900GT | 26" TFT
    Development (mac osx): Macbook Pro | Core Duo 1.86Ghz | 1.5GB DDR2


  6. #6
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Calgary, AB
    Posts
    2,219
    Quote Originally Posted by RichBa5tard
    What to do with the hwbot sisoft ranking?

    I'd say only allow 2007, but drop hwboints.
    I think Sandra can be fine for the hwbot ranking IF you designate one particular version and drop all results that use any other version. The results in there now are a mess (e.g. CPU Floating Point): using '05 gives a lot more points than '07 for the same settings, and some people use one version while others use the other.
    MB Reviewer for HWC
    Team OCX Bench Team

  7. #7
    Xtreme Enthusiast
    Join Date
    Aug 2004
    Location
    Sydney, Oz.
    Posts
    850
    lol, I'm disappointed. After the title warning I thought maybe Sandra 2007 blew your CPU up. :p

  8. #8
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    If you guys head over to the hwbot forum, you'll see I wrote about it there, and at least one person there also recognizes that the benchmark isn't perfect and/or flawless.

    Having said that, I'm also currently working (slowly) on preparing LINPACK for Windows as a pre-packaged executable binary (so that you don't have to compile it yourself).

    I do have some preliminary numbers up already, but they haven't been checked (at all). It was just for me to get acquainted with running and compiling the benchmark itself.
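
    For reference, the convention LINPACK/HPL (and therefore the Top500) uses is to credit the solution of an n x n system with a fixed operation count of (2/3)n^3 + 2n^2 and divide that by the wall-clock time:

        MFLOPS = ((2/3) n^3 + 2 n^2) / (t x 10^6)
        {\rm MFLOPS} = {\tfrac{2}{3}n^3 + 2n^2 \over 10^6\, t}

    So the reported rate depends only on the problem size and the time it took, not on which code path the solver actually used.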
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}
