Thread: The CDT and copywaza lab

  1. #1
    I am Xtreme
    Join Date
    Jan 2005
    Posts
    4,714
    Quote Originally Posted by saaya View Post
    hey massman... that's weird though, why does it need that much space to calculate pi? It ultimately makes it a hard drive benchmark, considering this bench is so old... back then the cache was so slow that the HDD must have played a big role in getting good results. Now, with huge caches and memory, the problem isn't the HDD that much, I guess.

    Again, weird why it needs so much space...
    And 632MB... well, we are talking about systems with 2GB of memory, so why would Pi write something to the HDD and not to memory? The memory should have plenty of space for any temp files that bench comes up with...

    Copying what from D: to C: makes the Pi run slower?
    - No, the 632MB is used to clean up the memory's matrix of all the other data. It's not about cleaning the HDD, I think. The HDD transfer is just a way to clean the memory.

    - Maybe, by limiting the amount of memory, the data will be less spread over the memory's matrix and thus faster accessible? I believe you can use maxmem=600, 700, 500 as well; you have to alter the 3 files to 732 and 532, however. Right, Kevin??

    - It's not SuperPi that runs slower, the copying itself is slower.
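    The routine the thread keeps referring to (copy a big file across drives, then run the benchmark) could be scripted roughly like this. This is only a sketch: the paths, file name, and launcher are all assumptions, not anything from the thread.

```python
import shutil
import time

def copy_then_bench(src, dst, launch):
    """Copy src to dst, time the copy, then call the benchmark launcher.

    Hypothetical 'copy waza' helper: the copy (e.g. D:\\ -> C:\\) is the
    'cleaning' step discussed above; launch() would start SuperPi.
    """
    t0 = time.perf_counter()
    shutil.copyfile(src, dst)            # the "cleaning" copy
    copy_time = time.perf_counter() - t0
    launch()                             # e.g. start super_pi_mod.exe here
    return copy_time
```

    Usage would be something like `copy_then_bench(r"D:\waza.rar", r"C:\waza.rar", lambda: subprocess.run([r"C:\superpi\super_pi_mod.exe"]))`, with both paths and the executable name being placeholders.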
    Where courage, motivation and ignorance meet, a persistent idiot awakens.

  2. #2
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by massman View Post
    - No, the 632MB is used to clean up the memory's matrix from all the other data. It's not about cleaning the HDD, I think. The HDD transfer is just a way to clean the memory.
    how does it clean the memory?
    If you overwrite 11101010100101 with 0011101010101001 how does that "clean" the memory?

  3. #3
    Xtreme Mentor
    Join Date
    May 2007
    Posts
    2,792
    Quote Originally Posted by saaya View Post
    how does it clean the memory?
    If you overwrite 11101010100101 with 0011101010101001 how does that "clean" the memory?
    I'm not sure how it cleans memory, but here is one possibility:

    These methods are losing us time somewhere. The main purpose of a cache in any device is to decrease access time: the larger the cache hit ratio and the better the prefetch, the faster the access time and the quicker the application/instruction completes. This may hold some clues for us.

    You have some RAM address ranges reserved for hardware I/O mapping in Windows by default, and others reserved for kernel-level code. On top of this, extra memory is taken up by basic system processes, and those allocations tend to accrue and not be released even when the address range is no longer needed. Thus you end up with a section of memory unusable for no good reason.

    When you force the "cleansing" with something that will either a) defrag the RAM or b) force it empty, such as a bigger process requiring the full RAM, Windows will naturally empty all the unneeded extra address ranges that were reserved by other applications (prioritization) and start filling memory up again, if you've given Windows priority to real-time programs. FWIW, the HDD pagefile is also a cache managed at the Windows kernel level. Then, rather than the RAR files being retained in memory after the copying finishes, as many applications would do, they are released immediately, and that part of memory becomes freely available to everything run subsequently. I "suspect" the type of files (RAR files) matters, but I'll test it more thoroughly soon.

    That's as far as I can see whether a memory increase does play a role, and how. For me, this all has to do with the various caches and prefetch algorithms, since that is their well-known function: to increase speed and decrease latency.
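    The "bigger process requiring the full RAM" mechanism described above can be imitated directly, without any file copy: allocate a large buffer and touch every page, which forces the OS to reclaim standby/cached pages. A minimal, hypothetical sketch (the 632 MB figure is the one used in this thread; whether this reproduces the copy's effect is exactly what's being debated):

```python
PAGE = 4096  # typical x86 page size

def flush_by_pressure(n_bytes):
    """Allocate and touch n_bytes so the OS must reclaim cached pages.

    A rough stand-in for the effect the big file copy is credited with:
    the allocation evicts standby data, and freeing it afterwards leaves
    that memory available to whatever runs next (e.g. SuperPi).
    """
    buf = bytearray(n_bytes)             # allocation alone may be lazy
    touched = 0
    for i in range(0, n_bytes, PAGE):    # write one byte per page
        buf[i] = 1                       # forces the page to be committed
        touched += 1
    del buf                              # release: pages are free again
    return touched

# e.g. flush_by_pressure(632 * 1024 * 1024) before launching the benchmark
```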

  4. #4
    Banned
    Join Date
    Jan 2004
    Location
    Land of Buckeye
    Posts
    2,881
    Quote Originally Posted by KTE View Post
    I'm not sure how it cleans memory, but here is one possibility: [...]

    Based on your statement here, what I would like to add is:
    The type of RAR file doesn't matter, but its size does.

  5. #5
    Banned
    Join Date
    Aug 2007
    Posts
    1,014
    Size is what I'm going to test now:

    around 2GB vs 632MB
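    For that 2GB-vs-632MB comparison, a small harness along these lines would do. The helper names and the scaled-down sizes are placeholders; a real run would generate files at the thread's actual sizes and time the cross-drive copy before each SuperPi attempt.

```python
import shutil
import time

def make_dummy(path, n_bytes, chunk=1 << 20):
    """Write n_bytes of zeros to path without holding it all in RAM."""
    with open(path, "wb") as f:
        left = n_bytes
        while left > 0:
            step = min(chunk, left)
            f.write(b"\0" * step)
            left -= step

def timed_copy(src, dst):
    """Return how long copying src to dst takes, in seconds."""
    t0 = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - t0

# e.g. make_dummy(r"D:\waza_632mb.bin", 632 * 1024 * 1024), then compare
# timed_copy() of the 632MB file against a ~2GB one before each pi run.
```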
