Well, we'll just have to wait and see. :) It depends on the controller's ability to handle the load, but maybe it's just as good or better than the m4.
What's going on with the C300?
I don't know. Haven't seen anything from Vapor in a long time.
Well, it's probably time for a chart update too.
It's been 200TiB since the last update :D
Kingston SSDNow 40GB (X25-V)
423.28GB Host writes
Reallocated sectors : 12
MD5 OK
33.86MiB/s on avg (~49 hours)
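The "MD5 OK" lines in these updates refer to checksumming a test file and comparing it against a stored digest, to confirm the drive still returns exactly the data that was written. A minimal sketch of that kind of check (file path and stored digest are hypothetical, not from the actual test setup):

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large test files don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest recorded when the file was first written, e.g.:
# expected = "..."  # reference digest stored elsewhere
# print("MD5 OK" if md5_of_file("D:/testfile.bin") == expected else "MD5 MISMATCH")
```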
--
Corsair Force 3 120GB
01 89/50 (Raw read error rate)
05 2 (Retired Block count)
B1 86 (Wear range delta)
E6 100 (Life curve status)
E7 37 (SSD Life left)
E9 238892 (Raw writes)
F1 318081 (Host writes)
MD5 OK
102.60MiB/s on avg (~49 hours)
power on hours : 954
B1 (WRD) has gone from 87 to 86, looks like there is some activity on wear leveling :)
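Since both raw NAND writes (E9) and host writes (F1) are reported for the SandForce drives, the effective write amplification can be derived directly from the two attributes. With compressible test data the ratio drops below 1.0, as with the figures in the update above:

```python
def write_amplification(nand_writes, host_writes):
    """SandForce E9 (raw/NAND writes) divided by F1 (host writes).
    Values below 1.0 mean compression is saving flash wear."""
    return nand_writes / host_writes

# Force 3 raw values from this update (both reported in the same unit)
wa = write_amplification(238892, 318081)
print(f"Write amplification: {wa:.3f}")  # roughly 0.75 with compressible data
```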
Mushkin Chronos Deluxe 60 Update, Day 34
3.3.2FW
05 2
Retired Block Count
B1 13
Wear Range Delta
F1 323050
Host Writes
E9 249173
NAND Writes
E6 100
Life Curve
E7 10
Life Left
Average 126.91MB/s
Intel RST drivers, Celeron G530 Biostar TH67+
778 Hours Work (24hrs since the last update)
Time 2 month 10 hours
11 GiB Minimum Free Space
Attachment 121673
I missed the 2 new reallocation events today. In the grand scheme of things two reallocs aren't very consequential, but I think it's noteworthy until it happens every day. The M4 is smoking the competition in that respect.
Speed went back up to normal, about 129MB/s.
Kingston SSDNow 40GB (X25-V)
424.52GB Host writes
Reallocated sectors : 12
MD5 OK
33.66MiB/s on avg (~60 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 85 (Wear range delta)
E6 100 (Life curve status)
E7 37 (SSD Life left)
E9 241890 (Raw writes)
F1 322068 (Host writes)
MD5 OK
102.56MiB/s on avg (~60 hours)
power on hours : 965
B1 (WRD) is down again, from 86 to 85, slowly decreasing.
M225->Vertex Turbo 64GB Update:
588.11 TiB (646.64 TB) total
1579.88 hours
11822 Raw Wear
117.46 MB/s avg for the last 15.76 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 10.
(1=Bank 6/Block 2406; 2=Bank 3/Block 3925; 3=Bank 0/Block 1766; 4=Bank 0/Block 829; 5=Bank 4/Block 3191; 6=Bank 7/Block 937; 7=Bank 7/Block 1980; 8=Bank 7/Block 442; 9=Bank 7/Block 700; 10=Bank 2/Block 1066)
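The raw bank/block list above can be turned into structured records, which makes it easy to see whether failures cluster on particular flash banks. A small parsing sketch (the input string mirrors the format used in these updates):

```python
import re
from collections import Counter

def parse_failures(s):
    """Parse entries like '1=Bank 6/Block 2406' into (index, bank, block) tuples."""
    return [(int(i), int(bank), int(block))
            for i, bank, block in re.findall(r"(\d+)=Bank (\d+)/Block (\d+)", s)]

log = ("1=Bank 6/Block 2406; 2=Bank 3/Block 3925; 3=Bank 0/Block 1766; "
       "4=Bank 0/Block 829; 5=Bank 4/Block 3191; 6=Bank 7/Block 937; "
       "7=Bank 7/Block 1980; 8=Bank 7/Block 442; 9=Bank 7/Block 700; "
       "10=Bank 2/Block 1066")
failures = parse_failures(log)
per_bank = Counter(bank for _, bank, _ in failures)
print(per_bank.most_common())  # Bank 7 accounts for 4 of the 10 failed blocks
```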
Attachment 121683
Todays update:
m4
758.0663 TiB
2714 hours
Avg speed 87.82 MiB/s.
AD gone from 179 to 174.
P/E 13152.
Value 01 (raw read error rate) has changed from 23 to 30.
MD5 OK.
Reallocated sectors : 00
Attachment 121700Attachment 121701
Kingston V+100:
179.6779 TiB
710 hours
Avg speed 77.27 MiB/s
AD gone from 20 to 17.
P/E?
MD5 OK.
Reallocated sectors : 00
Attachment 121702Attachment 121703
Intel X25-M G1 80GB
11.1354 TiB Host writes
18674hours
Reallocated sectors : 00
MWI=92 to 86
MD5 OK
41.31MiB/s on avg
Attachment 121698Attachment 121699
The Mtron is a strange animal. I've tried both my internal SATA ports on 2 different motherboards, USB and eSATA on the same (M4A89TD Pro/Sabertooth 990FX), and USB and eSATA on my work laptop, with no luck. I can get it to show in Disk Management, but when I click on it the program hangs. I've also tried SATA 3Gb/s and 6Gb/s with no luck.
The mtron is either just really picky, or really broken.
I'm going to try it in my folding rig during the weekend and see if that makes any difference. I can change between SATA 3 Gb/s and 1.5 Gb/s on that motherboard.
Curious!
Strange that it did not work with USB either.
I bet it works with the folding rig, though.
Let's hope so. If not, I might need to buy an old PC just to make it work.
That would be the only reason to buy a Pentium IV 478 system.
Talk about downgrading :rofl:
My Socket 478 Intel PERL865 had SATA 1.5Gbps and RAID 0+1. My P4 2.6C was probably better in IPC than Bulldozer... [cheap shot at Bulldozer]
That motherboard was great. I always regretted not getting one of the other brand boards (because you couldn't really overclock it), but it had optical audio outputs, Intel GbE, and worked nicely with my original WD Raptor 36GB drive... 36GB is still a lot to me.
It's always fun to have a system that you remember for years after :) I've been going the AMD way since 2003 and I'm still there. Maybe I need that SB rig sooner than I thought.
What happened to Vapor? What happened to the C300? Thanks!
Kingston SSDNow 40GB (X25-V)
426.12GB Host writes
Reallocated sectors : 12
MD5 OK
33.48MiB/s on avg (~74 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 84 (Wear range delta)
E6 100 (Life curve status)
E7 35 (SSD Life left)
E9 245727 (Raw writes)
F1 327164 (Host writes)
MD5 OK
102.56MiB/s on avg (~74 hours)
power on hours : 979
B1 (WRD) is down again, from 85 to 84.
Mushkin Chronos Deluxe 60 Update, Day 35
3.3.2FW
05 4
Retired Block Count
B1 11
Wear Range Delta
F1 334738
Host Writes
E9 258195
NAND Writes
E6 100
Life Curve
E7 10
Life Left
Average 127.72MB/s Avg
Intel RST drivers, Celeron G530 Biostar TH67+
804 Hours Work (26hrs since the last update)
Time 1 month 3 days 12 hours
12 GiB Minimum Free Space
Attachment 121710
If the G1 is only going to write as fast as an X25-V, it will take something like 2 years to kill it.
2 years is a long time to test :shocked:
I need to develop some routine for SE. Too bad Intel closed off the Toolbox for the G1, because then it would have been really easy.
That's kind of a tough problem to solve. If you assume that the flash in the G1 lasts twice as long as the 34nm flavor, and you have twice as much of it as an X25-V, then factor in the sustained write speed (very similar to the X25-V)...
And you have a problem.
One thing you could (potentially) do to keep speed higher for longer is over-provision the drive, but that's just going to make the drive last even longer...
Stopping to secure erase the drive every few days takes a lot more time, and more direct supervision.
B.A.T., you have your work cut out for you...
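The "2 years to kill it" estimate above can be sketched as a back-of-envelope calculation: total NAND write budget divided by the sustained write rate. All figures below are illustrative assumptions (the ~30,000 P/E figure reflects these tests repeatedly showing flash surviving well past its rating), not measured specs:

```python
def days_to_wear_out(capacity_gb, pe_cycles, write_mib_s, write_amp=1.0):
    """Rough wear-out time: NAND write budget / sustained daily write volume."""
    total_writes_gb = capacity_gb * pe_cycles / write_amp  # GB the flash can absorb
    gb_per_day = write_mib_s * 3600 * 24 / 1024            # loose MiB/s -> GiB/day
    return total_writes_gb / gb_per_day

# Hypothetical X25-M G1 figures: 80 GB of 50nm MLC, assume ~30,000 effective
# P/E cycles, sustained ~40 MiB/s (similar to the X25-V, as noted above).
print(f"{days_to_wear_out(80, 30000, 40) / 365:.1f} years")  # 1.9 years
```

The units are deliberately loose (GB vs GiB); for an order-of-magnitude lifetime estimate that slack doesn't change the conclusion.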
Of course, the M4 might last for another couple years too. Who knows? I would hate to bet against it at this point. And so far every drive has been a little surprising, but none more so than the M4. It's too bad the drives don't explode when they're done. That would be way more exciting.
I'll just have to find a way. SE every 2 days takes 5 min, so it might be possible. It will be less work when the m4 is finished, but that might take weeks. It's still very impressive, even if the ECC has started to work.
All of the drives have been impressive. Maybe not the F40-A. But the M4 and both Intels are certainly impressive. The M4 is just getting "broken in".
Kingston SSDNow 40GB (X25-V)
427.32GB Host writes
Reallocated sectors : 12
MD5 OK
33.39MiB/s on avg (~84 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 83 (Wear range delta)
E6 100 (Life curve status)
E7 35 (SSD Life left)
E9 248616 (Raw writes)
F1 331003 (Host writes)
MD5 OK
102.55MiB/s on avg (~84 hours)
power on hours : 990
B1 (WRD) is down again, from 84 to 83.
There is something you could try: simply filling the drive with large-block sequential writes and then deleting the file.
It won't be the same as TRIM of course, but that was the "old" way of doing it. It can be done from ASU, but the option is not available in the "open" beta.
Okay, my first SSD was the G2 so I've never tried those things before. What is the easiest approach?
I've copied Acronis True Image backup files to a striped RAID array before, as it's just one huge file (like 20 GB). Then you just delete it. I'm not sure how well it worked, as I was trying to prevent performance degradation and it wasn't terribly slow yet. Plus, that was with OP'ed G2 X25-Vs. The G2s were more resilient to begin with, so you may want to experiment with something like HD Tach to take a performance baseline, then copy some huge .ISO/.tib/.whatever files and determine efficacy.
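The fill-and-delete approach described above can be scripted: write one big sequential file until the disk is nearly full, then delete it so the controller gets its free blocks back. A cautious sketch, assuming incompressible data is wanted (paths and sizes are placeholders; don't point it at a drive you care about):

```python
import os
import shutil

def fill_and_delete(path, chunk_mib=64, keep_free_mib=256):
    """Fill free space with one large sequential file, then delete it.
    This mimics the 'old' pre-TRIM way of returning free blocks to the drive."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)  # incompressible data
    try:
        with open(path, "wb") as f:
            while shutil.disk_usage(os.path.dirname(path) or ".").free > keep_free_mib * 1024 * 1024:
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # force the writes out to the drive
    finally:
        if os.path.exists(path):
            os.remove(path)  # deleting the file frees the LBAs again

# Example (hypothetical drive letter):
# fill_and_delete("D:/fillfile.bin")
```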
I can try something like that, it can actually just be a large zip file too. I'll test this tonight.
How long before the 320 is really dead with the slowing writes and all that ?
Today's update, without screenshots. The forum just won't cooperate tonight. I'll put them in later in an edit.
m4
764.2669 TiB
2735 hours
Avg speed 88.25 MiB/s.
AD gone from 174 to 171.
P/E 13257.
Value 01 (raw read error rate) has changed from 30 to 43.
MD5 OK.
Reallocated sectors : 00
Kingston V+100:
185.1462 TiB
732 hours
Avg speed 75.81 MiB/s
AD gone from 17 to 8.
P/E?
MD5 OK.
Reallocated sectors : 00
Intel X25-M G1 80GB
13.7370 TiB Host writes
18696 hours
Reallocated sectors : 00
MWI=86 to 79
MD5 OK
52.08 MiB/s on avg
Filling the remaining space with a large file and deleting it worked like a charm. I'll just do that at the same time I update the screenshots.
MTRON
Today I tried it on my work laptop without any luck. I get the same problems I had yesterday.
Kingston SSDNow 40GB (X25-V)
428.58GB Host writes
Reallocated sectors : 12
MD5 OK
33.31MiB/s on avg (~95 hours)
--
Corsair Force 3 120GB
01 85/50 (Raw read error rate)
05 2 (Retired Block count)
B1 82 (Wear range delta)
E6 100 (Life curve status)
E7 34 (SSD Life left)
E9 251665 (Raw writes)
F1 335055 (Host writes)
MD5 OK
102.54MiB/s on avg (~95 hours)
power on hours : 1001
B1 (WRD) is down again, it's now at 82.
I can't seem to upload any files either, so I'm going to post them when I can. I'm 450 miles away at the moment, but I've got screen captures, so I'll handle that tomorrow... if XS cooperates.
Any more news about the Samsung and the 320 drives? Vapor?
Kingston SSDNow 40GB (X25-V)
430.00GB Host writes
Reallocated sectors : 12
MD5 OK
33.24MiB/s on avg (~108 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 81 (Wear range delta)
E6 100 (Life curve status)
E7 33 (SSD Life left)
E9 255090 (Raw writes)
F1 339608 (Host writes)
MD5 OK
102.56MiB/s on avg (~108 hours)
power on hours : 1013
B1 (WRD) is down again, it's now at 81.
Any chance of some insight into why the B1 values are so different between the three SF drives currently being tested? SF drives are supposed to limit the delta between the most and least worn blocks, yet even Christopher's drive seems to be above the parameters that SF specify. :shrug:
Ao1, mine is at 9 now.
M225->Vertex Turbo 64GB Update:
601.75 TiB (661.63 TB) total
1610.39 hours
12071 Raw Wear
117.32 MB/s avg for the last 17.49 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 10.
(1=Bank 6/Block 2406; 2=Bank 3/Block 3925; 3=Bank 0/Block 1766; 4=Bank 0/Block 829; 5=Bank 4/Block 3191; 6=Bank 7/Block 937; 7=Bank 7/Block 1980; 8=Bank 7/Block 442; 9=Bank 7/Block 700; 10=Bank 2/Block 1066)
Kingston SSDNow 40GB (X25-V)
431.31GB Host writes
Reallocated sectors : 12
MD5 OK
33.19MiB/s on avg (~120 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 80 (Wear range delta)
E6 100 (Life curve status)
E7 32 (SSD Life left)
E9 258282 (Raw writes)
F1 343849 (Host writes)
MD5 OK
102.58MiB/s on avg (~120 hours)
power on hours : 1025
B1 (WRD) is down again, from 81 to 80.
Looks like the Vertex Turbo is going to overtake the M4... :D This is a good race! An old timer and the new kid on the block.
Kingston SSDNow 40GB (X25-V)
432.82GB Host writes
Reallocated sectors : 12
MD5 OK
33.15MiB/s on avg (~133 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 79 (Wear range delta)
E6 100 (Life curve status)
E7 31 (SSD Life left)
E9 261885 (Raw writes)
F1 348640 (Host writes)
MD5 OK
102.58MiB/s on avg (~133 hours)
power on hours : 1038
B1 (WRD) is down again, from 80 to 79.
Updated MWI/B1 graph with a trend line added. Wear levelling between the least and most worn blocks is so different it looks like three completely different drives.
@One_Hertz. Anything interesting from the post mortem?
http://imageshack.us/photo/my-images/546/unledaoa.png/
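A trend line like the one in that chart can be reproduced with a simple least-squares fit, which also lets you project how many more host writes remain before the life-left attribute bottoms out. The sample points below are made-up illustrations, not values read off the chart:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a 2-D trend line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical samples: (host writes in TiB, E7 "SSD Life left" value)
tib = [310, 318, 327, 335, 343]
life = [37, 35, 34, 33, 32]
slope, intercept = linear_fit(tib, life)

# Project the host writes at which the attribute would reach 10
print((10 - intercept) / slope)  # ~494 TiB with these made-up numbers
```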
Ao1,
I could try to add more static data to the Mushkin. Isn't the Force 3 using 45GB of static data?
No, I meant the Vertex writes faster (by about >20%) than the M4 and is still going. So it has a chance to overtake the M4. But realistically, I have a feeling that it's gonna get tired and die before it overtakes the M4. But we will see. That's the reason why this race is exciting... :)
Only I can't run mine 24/7 like he can.
Sadness!
I'm very happy with my new 24/7 system. I got the kinks out just in time to leave town for a couple days, and have the confidence to just check in with remote apps. The whole system uses slightly more power than my cable box. Today is day 38, and I've averaged 9.6TB host, 7.5TB NAND writes. Though I've lost several days' time due to disconnects over the weeks, I'm sitting at 96 hours uptime!
It's not because there are any problems. I have a dual-boot system at work... during the day (work time) I have to use XP for now, and at night I let the M225 run ASU on W7 Ent x64. So it only gets ~16 hrs/day, except for weekends where it runs on W7 all weekend.
B.A.T
Friday's numbers. My rig has been down since Friday afternoon and will be up again tomorrow. I moved it from work back home.
I still can't upload pictures. Something is wrong with XS.
m4
768.5687 TiB
2749 hours
Avg speed 88.43 MiB/s.
AD gone from 171 to 169.
P/E 13330.
Value 01 (raw read error rate) has changed from 43 to 46.
MD5 OK.
Reallocated sectors : 00
Kingston V+100:
188.7754 TiB
746 hours
Avg speed 75.40 MiB/s
AD gone from 8 to 4.
P/E?
MD5 OK.
Reallocated sectors : 00
Intel X25-M G1 80GB
16.2689 TiB Host writes
18710 hours
Reallocated sectors : 00
MWI=79 to 75
MD5 OK
51.22 MiB/s on avg
MTRON
I will test it in my folding rig tomorrow.
Kingston SSDNow 40GB (X25-V)
435.55GB Host writes
Reallocated sectors : 12
MD5 OK
33.30MiB/s on avg (~156 hours)
--
Corsair Force 3 120GB
01 95/50 (Raw read error rate)
05 2 (Retired Block count)
B1 77 (Wear range delta)
E6 100 (Life curve status)
E7 29 (SSD Life left)
E9 268470 (Raw writes)
F1 357387 (Host writes)
MD5 OK
103.25MiB/s on avg (~156 hours)
power on hours : 1062
B1 (WRD) is down again, from 79 to 77.
It's been working for almost a week now, a new record on the new firmware.
Since my wife has gone into labor I'll stop the test until I'm back from the hospital. I'll fire it up again when we are all back home. :up:
:toast: Congrats!
:welcome: baby B.A.T.
Here is a chart that compares the amount of writes incurred before the MWI expired. The SF stats are based on the assumption that when the MWI gets to 10 the theoretical P/E cycle capability has expired. Considering Christopher & SynbiosVyse used compressible data (46%?) it didn’t really provide an advantage. Without compression writes would be well below the 320 & M4.
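The compression caveat above can be quantified: NAND wear depends on writes after compression, so a compressible workload inflates a SandForce drive's apparent host-write endurance. A sketch of the normalization (the 46% figure is the assumption mentioned above; the 400 TiB total is purely illustrative):

```python
def nand_writes_from_host(host_tib, compression_ratio):
    """NAND writes actually incurred when the controller compresses host data.
    compression_ratio = fraction of the original size left after compression."""
    return host_tib * compression_ratio

# If a drive absorbed 400 TiB of host writes and the data compressed to ~46%
# of its size, the flash only saw ~184 TiB. That is the fair number to compare
# against drives tested with incompressible data.
print(nand_writes_from_host(400, 0.46))
```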
Congrats B.A.T. :up:
http://imageshack.us/photo/my-images/405/unledpsw.png/
128.3hrs uptime.
In two days I should hit almost 300/400TB
Congrats BAT!!
Congrats BAT!
or as we say in Norwegian
Gratulerer!
Kingston SSDNow 40GB (X25-V)
438.15GB Host writes
Reallocated sectors : 12
MD5 OK
33.22MiB/s on avg (~179 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 76 (Wear range delta)
E6 100 (Life curve status)
E7 27 (SSD Life left)
E9 274800 (Raw writes)
F1 365805 (Host writes)
MD5 OK
103.19MiB/s on avg (~179 hours)
power on hours : 1085
B1 (WRD) is down again, from 77 to 76.
The rig has been up for more than 12 days (yes, it's a Z68 :)), so I'll probably give it a reboot sometime this week.
Mushkin Chronos Deluxe 60 Update, Day 40
3.3.2FW
05 4
Retired Block Count
B1 11
Wear Range Delta
F1 385237
Host Writes
E9 297182
NAND Writes
E6 100
Life Curve
E7 10
Life Left
Average 128.36MB/s
Intel RST drivers, Celeron G530 Biostar TH67+
915 Hours Work
Time 1 month 8 days 3 hours
12 GiB Minimum Free Space
Somehow, I got this picture from a few weeks ago up.
I don't know what the hell is going on here.
johnw
I expect it's the attachments he's complaining about.
Christopher
You should be able to remove the old attachments by removing the link to the post.
(the bottom half lists the linked attachments, uncheck the link found bottom right on each attachment)
http://www.ssdaddict.com/ss/Captureatt.png
Thank you for all the nice greetings :) We got a little boy this morning, and both he and his mother are OK. I'll be back with new updates at the end of the week.
:clap: Very happy for you...I will have a nice drink along with cigar tonight for a toast.
Norway, +1
Well, they said #7 billion would be born somewhere today. Now we know it was Norway. :clap:
Congratulations on the little one! :up:
So, what should we do about the charts? If Vapor is incommunicado, should we create new charts?
I will try to find out if we need a new chartmaster.
--
Kingston SSDNow 40GB (X25-V)
439.83GB Host writes
Reallocated sectors : 12
MD5 OK
33.18MiB/s on avg (~194 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 75 (Wear range delta)
E6 100 (Life curve status)
E7 26 (SSD Life left)
E9 278821 (Raw writes)
F1 371149 (Host writes)
MD5 OK
103.16MiB/s on avg (~194 hours)
power on hours : 1100
B1 (WRD) is down again, from 76 to 75.
Congratulations Mr. Bat!
Kingston SSDNow 40GB (X25-V)
441.04TB Host writes
Reallocated sectors : 12
MD5 OK
33.15MiB/s on avg (~205 hours)
--
Corsair Force 3 120GB
01 84/50 (Raw read error rate)
05 2 (Retired Block count)
B1 74 (Wear range delta)
E6 100 (Life curve status)
E7 25 (SSD Life left)
E9 281747 (Raw writes)
F1 375038 (Host writes)
MD5 OK
103.13MiB/s on avg (~205 hours)
power on hours : 1111
B1 (WRD) is down again, from 75 to 74.
M225->Vertex Turbo 64GB Update:
638.07 TiB (701.57 TB) total
1691.12 hours
12727 Raw Wear
118.75 MB/s avg for the last 88.61 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) from 10 to 11.
(1=Bnk 6/Blk 2406; 2=Bnk 3/Blk 3925; 3=Bnk 0/Blk 1766; 4=Bnk 0/Blk 829; 5=Bnk 4/Blk 3191; 6=Bnk 7/Blk 937; 7=Bnk 7/Blk 1980; 8=Bnk 7/Blk 442; 9=Bnk 7/Blk 700; 10=Bnk 2/Blk 1066; 11=Bnk 7/Blck 85)
Thanks for spotting that error. I've been copying and just adjusting the values, and GB/TB slipped for the X25-V :)
(I've corrected the previous entry)
http://www.ssdaddict.com/ss/2011-11-01-19-52.PNG
Kingston SSDNow 40GB (X25-V)
442.42TB Host writes
Reallocated sectors : 12
MD5 OK
33.13MiB/s on avg (~217 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 73 (Wear range delta)
E6 100 (Life curve status)
E7 25 (SSD Life left)
E9 285083 (Raw writes)
F1 379474 (Host writes)
MD5 OK
103.12MiB/s on avg (~217 hours) 9 days 1 hour, 76.75TiB written so far during this session.
power on hours : 1123
B1 (WRD) is down again, from 74 to 73.
Mushkin Chronos Deluxe 60 Update, Day 32
3.3.2FW
05 4
Retired Block Count
B1 11
Wear Range Delta
F1 399688
Host Writes
E9 308340
NAND Writes
E6 100
Life Curve
E7 10
Life Left
Average 128.14MB/s
Intel RST drivers, Celeron G530 Biostar TH67+
947 Hours Work (24hrs since the last update)
Time 1 month 9 days 11 hours
12 GiB Minimum Free Space
176.8 hours uptime
Great thread. Thanks to all of you who participate in the testing.
Milestone reached
310TB/402TB
Even if it's slightly arbitrary, it's still a milestone.
Is the Samsung officially dead now? No more recovery attempts?
Kingston SSDNow 40GB (X25-V)
443.59TB Host writes
Reallocated sectors : 12
MD5 OK
33.11MiB/s on avg (~227 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 72 (Wear range delta)
E6 100 (Life curve status)
E7 24 (SSD Life left)
E9 287906 (Raw writes)
F1 383230 (Host writes)
MD5 OK
103.10MiB/s on avg (~227 hours) >80TiB Host writes during this session.
power on hours : 1133
B1 (WRD) is down again, from 73 to 72.
Regarding the charts:
We'll give it this week, and if there is no word we'll have to find someone who can do the charts temporarily.
It would be good to have a link in the first post to the Excel charts, refreshed every time the charts are updated. That way anyone can look at and play with the data, and if someone drops out it’s not a matter of starting from scratch.
There is no point doing this work if the data collected is not properly recorded and analysed. There are a lot of variables that don’t seem to be recorded: What compression factor has been used? What is the ratio of static data to free space? Events such as the deletion of static data are not being captured. NAND P/E cycle specs are not clear. Controller type/version. Is the drive being tested an OS drive? What OS? Etc.
I would like to suggest that each person testing is responsible for collating data for their drive in a pre-agreed standard Excel format, and that one person is responsible for collating data from all the drives.
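A pre-agreed format like the one suggested could be as simple as one row per update in a shared schema, so every tester's data merges cleanly. A sketch using CSV (the field names are a proposal, and the sample values below are illustrative, not real figures from any drive):

```python
import csv

# Proposed schema: one row per reported update
FIELDS = ["date", "drive", "host_writes_tib", "nand_writes_tib",
          "power_on_hours", "avg_mib_s", "mwi", "realloc_sectors",
          "compression", "static_data_gib", "md5_ok", "notes"]

def append_update(path, row):
    """Append one endurance update; write the header if the file is new/empty."""
    try:
        with open(path, "r", newline="") as f:
            new_file = f.read(1) == ""
    except FileNotFoundError:
        new_file = True
    with open(path, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            w.writeheader()
        w.writerow(row)

# Sample values only -- not actual test data
append_update("endurance_log.csv", {
    "date": "2011-11-01", "drive": "Corsair Force 3 120GB",
    "host_writes_tib": 370.5, "nand_writes_tib": 278.4,
    "power_on_hours": 1100, "avg_mib_s": 103.16, "mwi": 26,
    "realloc_sectors": 2, "compression": "46%", "static_data_gib": 45,
    "md5_ok": True, "notes": ""})
```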
M225->Vertex Turbo 64GB Update:
649.68 TiB (714.34 TB) total
1716.03 hours
12931 Raw Wear
123.19 MB/s avg for the last 27.31 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 11.
(1=Bnk 6/Blk 2406; 2=Bnk 3/Blk 3925; 3=Bnk 0/Blk 1766; 4=Bnk 0/Blk 829; 5=Bnk 4/Blk 3191; 6=Bnk 7/Blk 937; 7=Bnk 7/Blk 1980; 8=Bnk 7/Blk 442; 9=Bnk 7/Blk 700; 10=Bnk 2/Blk 1066; 11=Bnk 7/Blck 85)
[EDIT] P.S. -- Are they ever going to fix the picture attachment problem or what?
Ao1,
Most of the data you speak of was represented, at least until attachments broke.
WRD looks to be rising again after bottoming out at 9. Currently 12.
Ao1
I agree for the most part :)
It will make it easier for the one in charge of the charts if input is given in a standard/common format, so we'll have to do something about that.
As for what's recorded or not
The Endurance app records every byte written and what compression was/is used, and as long as the log file is present one can easily find
- compression level used for each loop
- number of files and GB written for each loop
What is written and/or deleted outside the Endurance app is unknown. I'm not sure that it really matters much, as long as it's done to keep the drive up to date (as in fw updates and/or secure erasing due to issues) and not for other purposes.
Deletion of static files might be useful to record and is recorded in my case. (only applicable for the SF drive due to fw issues)
The exact P/E spec is known for the drives that have been opened, for the other drives MWI points to the expected P/E count for the drives. (there are a few drives that have not been opened for inspection yet, including mine :))
I see no point in opening my drives yet, I might do that when a data retention test is due for the drives. (BAT is having one now)
So there is very little that's unknown/unaccounted for, but we should of course agree on a standard for reporting, and for reporting events such as cleaning or changing/updating static files.
The drive (F40) used by SynbiosVyse is the one drive that I'm a bit uncertain of, and I expect the log file is gone as the drive stopped working.
This is how I'm keeping track of mine...
http://img855.imageshack.us/img855/9334/excelo.png
Anvil,
is there any way to add a feature to ASU that will store the log on another drive? or a server?
@bluestang
Great work, I keep screenshots for every reporting but using a spreadsheet is what we all should do :up:
@Christopher
Not currently, I copy my "Endurance" log files to a dedicated folder on the boot drive for each drive used in the test.
I have thought about finding some other way to store the file, but it needs to be stored in a subfolder of the Endurance app, so the only thing that could change this is allowing the app to be run from a separate drive.
It might be an idea to "export" the data for storage on a "server", I'll have to think about that.
I could make it post updates to a webserver at some interval, a lot of options...
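Posting updates to a webserver, as suggested, needs nothing more than an HTTP POST from the standard library on the client side. A sketch of what such an export could look like (the URL, payload fields, and server endpoint are all hypothetical; ASU's actual export mechanism is not specified here):

```python
import json
import urllib.request

def post_update(url, payload):
    """POST one endurance update as JSON; returns the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Hypothetical endpoint and payload:
# post_update("http://example.com/endurance/upload",
#             {"drive": "Force 3 120GB", "host_writes_tib": 370.5})
```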