Results 51 to 75 of 88

Thread: xtremeoverlocking - pushing the 3930K/3960X to 4.5 GHz

  1. #51
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    I did OC it to induce infant mortality.

    http://www.xtremesystems.org/forums/...-Noctua-NH-D14

    Keep in mind that's with panaflo fans on the heatsink and not the stock ones.

    3930K @ 4.6GHz 1.33-1.34v Asus P9X79 WS, 16gb DDR-1866 G.Skill Ripjaws.Z, 8400GS, LSI 9265 8i with 4 240GB Intel 520 SSD's.

    I ran it like that fully loaded with WCG for two weeks.
    Last edited by Bun-Bun; 03-26-2012 at 11:53 AM.

    Yin|Gigabyte GA-Z68X-UD5-B3|Swiftech XT -> GTX240 -> DDC+ w/ Petra's|2600K @ 5.0GHz @1.368V |4 x 4 GB G.Skill Eco DDR3-1600-8-8-8-24|Asus DirectCUII GTX670|120 GB Crucial M4|2 x 2 TB Seagate LP(Raid-0)|Plextor 755-SA|Auzentech Prelude 7.1|Seasonic M12-700|Lian-Li PC-6077B (Heavily Modded)

    Squire|Shuttle SD36G5M| R.I.P.

  2. #52
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Bun-Bun: Are you able to try running LINPACK continuously for a while and then measure the peak temperatures? (Intel Burn Test works too). How much RAM is in that system?
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  3. #53
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    I edited the post.

    As the server is already deployed I can't run anything additional on it.

    However, I always do my initial stress testing with LinX with the AVX-enabled routines. The peak core temperature was somewhere in the neighbourhood of 72-75°C with a 20°C ambient.
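    Not from the post, but a quick sanity check on those numbers: at a fixed load and fan speed the core-to-ambient delta is roughly constant, so the reported peak range can be rescaled to other room temperatures. A minimal sketch (function name is made up; the constants are the figures reported above):

```python
# Sanity-check sketch (not from the post): assume the core-to-ambient
# delta stays roughly constant at a fixed load and fan speed.
AMBIENT_MEASURED = 20.0           # deg C, as reported
PEAK_MEASURED = (72.0, 75.0)      # deg C, reported peak core range

def rescale_peak(new_ambient):
    """Shift the measured peak range to a different room temperature."""
    return tuple(round(new_ambient + (p - AMBIENT_MEASURED), 1)
                 for p in PEAK_MEASURED)

print(rescale_peak(25.0))  # a 5 C warmer room -> (77.0, 80.0)
```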


  4. #54
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Bun-Bun:
    I've found that LinX doesn't actually load up the processors as much as Intel Burn Test. I got higher temps on the 990X with IBT than LinX.

    Curious question - does AVX actually USE the FPADD/FPMUL units? From the little bit that I've read, it seems like it just does a lot of data-move operations per instruction, but no actual computations.

  5. #55
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    Are you sure you were comparing LinX AVX to IBT? LinX AVX should be the same thing, and it makes SB chips hotter than anything else I have tried.

    Does the 990X support AVX?


  6. #56
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Bun-Bun:
    No, 990X does not support AVX.

    I was comparing LinX to IBT (sans AVX). Did you try running IBT? I ran it at the max setting and also at the 'very high' setting. Average runtime was about 120 seconds (61 GFLOPS). LinX was like 2-3 C lower than IBT. Maybe they have an AVX-enabled version of IBT by now? *shrug*

  7. #57
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    I have never run IBT. AFAIK it is the same routines with a different front end.


  8. #58
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by Bun-Bun View Post
    I have never run IBT. AFAIK it is the same routines with a different front end.
    It runs differently in my experience.

    I actually tried compiling LINPACK myself, but being sooo NOT a programmer, I could only get it to run on one core at a time. And also being sooo NOT a programmer, I couldn't get it to spawn multiple slave processes that basically do the same calculation, n-core times.

    (That and I've discovered that the size of the problem changes the end result (GFLOPS)).
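    The problem-size dependence has a simple explanation: the LINPACK number is the standard nominal operation count for an order-n solve, 2/3*n^3 + 2*n^2 flops, divided by wall time, so larger problems amortize fixed overhead and report higher GFLOPS. A sketch (helper name is made up; the order/runtime in the example are the figures from post #59):

```python
# Sketch: the standard LINPACK operation count for solving an order-n
# dense system is 2/3*n^3 + 2*n^2 flops; the benchmark divides that
# nominal count by wall time. Helper name is made up.
def linpack_gflops(n, seconds):
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return flops / seconds / 1e9

# Larger n amortizes fixed overhead, so reported GFLOPS climbs with
# problem size even on the same machine.
print(round(linpack_gflops(23112, 113.0), 1))  # -> 72.8
```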

  9. #59
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Bun-Bun:
    Just ran it on my workstation (dual Xeon X5620, no AVX).

    Running for a problem size of 4096 MB (order ~23112). Average runtime was about 113 s.

    IBT running for the same problem size of 4096 MB: average runtime was 105 s.

    [Attachments: linx.jpg, IBT.jpg]

    Respective temps are shown.
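    For reference (an assumption, not stated in the post): the matrix order these tools report follows from the memory you allocate, at 8 bytes per double-precision element, which is why 4096 MB lands in the ~23k range. A sketch:

```python
import math

# Assumption (not stated in the post): LinX/IBT size the test matrix
# from the memory you hand them, at 8 bytes per double element.
def order_from_mb(mb):
    doubles = mb * 1024 * 1024 // 8   # bytes available / bytes per double
    return math.isqrt(doubles)        # largest n with n*n elements fitting

print(order_from_mb(4096))  # -> 23170; the reported ~23112 is slightly
                            # lower, presumably after workspace overhead
```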

  10. #60
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    Must be running different versions of the libraries.


  11. #61
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by alpha754293 View Post
    Yea, I know that the relationship between voltage and clock speed is unpredictable. (sigh...) The fact that it CAN potentially get quite hot (and the fact that the voltages CAN be quite high) is why I had asked about the premature death question. Couldn't put numbers to it before, but the cause for concern still remains the same.

    I ended up finding a 4U rackmount that has 120 mm fans inside. I forget who makes the 80 mm fans that are in my 4U. It's a pity that the watercooling isn't the one that's sending the feedback signal to the PWM controller for the fans.

    I can make the app use as many processors as I want. However, to answer your question, for this particular type/class of analysis, HTT is currently showing a 3-12% DECREASE in performance (vs. without HTT). I haven't tested it with CFD yet (either using the same program or using a different program). Also note that the HTT tests were performed on current generation Xeons.

    I also just realized that if I get a rackmount, I might actually have to flip the air flow direction around so that it actually goes back-to-front. Hmmm....

    Bun-Bun - are you volunteering to do the testing for me?

    BeepBeep2 - No, a little bit smaller still.
    re: faster waterflow - sort of. Up to a certain point.
    re: noise - be clearer next time (about what it is that you're talking about). To make generic statements means that it is subject to the test of universality.
    re: $hit straight - considering that I've been able to highlight some blatant flaws in your points of argument...the rest is self-explanatory.
    re: rackmount - it was a tongue-in-cheek commentary on the absurdity of some of the comments/replies. Constantly beating on "it depends on voltage/speed/what processor you get" is about as useful as "no freakin' clue." So, while yes, the statement is true - how would presenting the null or void hypothesis be useful at all? In fact, I've already mentioned SEVERAL times that "yes, it depends on voltage and speed" (perhaps not as blatantly spelled out for you, but here it is, blatantly spelled out), but I've also already told you that if you have a bunch of people OCing to 4.5 GHz, you're going to end up with some kind of statistical distribution on both voltage, speed, AND temperature.

    The fact that leeghoofd is able to say 4.5 GHz (+/- 0.1 GHz) @ 1.3 V (+/- some value V), running at 77.5 C (+/- 2.5 C) average, just proves that. Granted, that's just his sample data point. I'm going to guess that he's not the only person that's ever OC'd to 4.5 GHz, and so there's going to be more data and you can fully form the 3D statistical surface (some higher voltage, some lower voltage, some faster, some slower, some hotter, some cooler). But it's going to be within some range on some kind of statistical distribution. (I hate stats. Grrr...)

    You do realize that they make ATX compatible rackmounts, right? And that the ATX spec only refers to the size of the motherboard, the mounting hole locations, and such. For all practical intents and purposes of the discussion, I could have easily said "I'm going to mount it on ATX spec plywood". (I won't.) But the point is that whether it's rackmount or tower - it doesn't matter. ATX spec does not govern what form the enclosure comes in. Therefore, the part where you said "use ATX specification part" has nothing to do with whether the people here are familiar with rackmounts or not. Here's a list of ATX compatible rackmount enclosures from Newegg (211 results): http://www.newegg.com/Product/Produc...=1&PageSize=20

    re: "pretty darn close"
    Attachment 124874
    (Not my work, NCAC model).

    I'm trying to find the SAE paper or the AEI article comparing the physical crushing of a tube to the simulation and how close the simulation has gotten. (Not that you would actually care... - I mean...it only protects your LIFE in your car. Yes, I am assuming that you drive. Or been in one.)
    Look, several times you either took what I said out of context or assumed I was generalizing the whole real-world when I was obviously talking about things going into the machine you were building...and if you're going to claim that you highlighted blatant flaws in my arguments, you've had just as many, if not more... That isn't exactly one sided, so you've needed to get your "$hit straight" as well if I have.

    The rest of your post is pretty obvious, and easy to understand. Yes, of course, you can come up with a statistical distribution, I never said you couldn't. Though, it is rather complicated...the motherboard and the VRM components feeding the CPU can have an effect as well. Eventually you can get a rough idea of how the chips do on average, but you never know what yours will do until it is in hand. That's all I was trying to say, and why you shouldn't have asked about running stable at XXX time at XXX load, because the voltage, temperatures, and overall overclockability of your exact chip will come into play.

    As far as ATX compatible rackmounts, I know...I apologize for being unclear. I meant most desktop cases in ATX form factor...like the mid-tower Corsair 500R that was suggested to you. Earlier in another post you asked why I neglected to calculate the 200mm fan too, and I'd already told you several times I thought they were useless.

    With the simulation of the car wreck, that is pretty damned nice...however, I meant more about how the car would handle, accelerate, brake, and perform in general. Anyway, that is irrelevant.

    I see you've become partial to Bun-Bun just because he is a fellow, despite the fact that he's pointed out your "blatant flaws" in arguments as well, so good luck with your build.
    Last edited by BeepBeep2; 03-26-2012 at 01:56 PM.

  12. #62
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by Bun-Bun View Post
    Must be running different versions of the libraries.
    *shrug* dunno.

  13. #63
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by BeepBeep2 View Post
    Look, several times you either took what I said out of context or assumed I was generalizing the whole real-world when I was obviously talking about things going into the machine you were building...and if you're going to claim that you highlighted blatant flaws in my arguments, you've had just as many, if not more... That isn't exactly one sided, so you've needed to get your "$hit straight" as well if I have.

    The rest of your post is pretty obvious, and easy to understand. Yes, of course, you can come up with a statistical distribution, I never said you couldn't. Though, it is rather complicated...the motherboard and the VRM components feeding the CPU can have an effect as well. Eventually you can get a rough idea of how the chips do on average, but you never know what yours will do until it is in hand. That's all I was trying to say, and why you shouldn't have asked about running stable at XXX time at XXX load, because the voltage, temperatures, and overall overclockability of your exact chip will come into play.

    As far as ATX compatible rackmounts, I know...I apologize for being unclear. I meant most desktop cases in ATX form factor...like the mid-tower Corsair 500R that was suggested to you. Earlier in another post you asked why I neglected to calculate the 200mm fan too, and I'd already told you several times I thought they were useless.

    With the simulation of the car wreck, that is pretty damned nice...however, I meant more about how the car would handle, accelerate, brake, and perform in general. Anyway, that is irrelevant.

    I see you've become partial to Bun-Bun just because he is a fellow, despite the fact that he's pointed out your "blatant flaws" in arguments as well, so good luck with your build.
    a) I can ask whatever question I want to. Isn't that kind of the point of ASKING??? and b) you KNOW as well as I know (based on the information that you've provided) that if you're going to have a $hitty board, CPU, VRMs, voltages, temps, and clock speeds; running it close to the limits of the processor PROBABLY isn't going to be a good idea no matter WHAT you do to it. Now, I know you're probably going to come back and say something like "time to get better cooling." But that's like saying "you have a $hit car, time for you to get a better car/engine." While yes, that's obviously a possible solution, within the constraints of the problem, it's not possible. In fact, if anything, this is like if I were to ask you an engineering question, and you come up with this VERY elaborate answer that costs half-a-mil. Conversely, the guy next to you could do it for $40. Does that mean that your solution is not a solution? No. It's still a solution. But it's one that lies outside of the constraints of the problem.

    As previously mentioned and referenced MULTIPLE times, if I am talking about going with a closed kit solution; while yes, an open kit IS a possible solution to the problem as well, it's pretty clear that I'm not going in that direction. In fact, you even admitted it YOURSELF that I'm not likely going in that direction. So....my question to you then --- IF you knew that, then why would you continue to even mention it/bring it up? That would be like if you were to constantly bring up "but look...I've got this really PRETTY half-a-million-dollar solution that's sitting right here, WAITING to be loved and cared for." It's pointless.

    Conversely, the question of my asking if the system can run at x speed at y load for z time is NOT a question that I shouldn't be asking. In fact, it's one that I SHOULD VERY much be asking because THAT IS EXACTLY WHAT I AM GOING TO BE DOING WITH THE SYSTEM. What's the first question that you ask someone when they ask you for build advice? "What are you going to do with it," right? Well...there it is. There's the answer to that question. And more importantly, it's not just an answer to that question, it is an outright statement of requirement.

    So, I'm going to ask you again, (but this time I'm gonna throw numbers that I'm picking outta my arse into it just for the heck of it) "will I be able to run a 3930K OC'd to 4.5 GHz at 1.4 V at 89 C for 30 days straight?" (note that that question makes no reference to airflow or the presence/absence of a side fan/what cooling solution it's using. It's a very simple question.) And I would suspect that the answer probably is or ought to be "no" (running it at that load at that voltage at that temperature for that duration is probably NOT a good idea.)

    Which means, well...I might not be able to do much about the voltage at that speed, but I can reduce the speed and potentially reduce the voltage requirement? Which in turn, drops the temperature? Or I can be asked "can I perform the analysis in bursts? Run it for a short period of time, then stop, then pick up from where I left off?" (Still haven't touched the topics of airflow, sidefan, cooling system).

    You've got w,x,y,z variables that you can adjust/dial. (Well...actually, it's more like x,y,z). But in your replies, you've been saying "well...you could fiddle with a, g, and k"...and while you're absolutely right, I can, but that's not x, y, or z, now, is it?

    Which to me means either you don't understand the question or the problem that's being presented to you, or you're simply and COMPLETELY ignoring the fact that it's a bounded problem. And in my opinion, it wouldn't have been so bad if you FIRST addressed the bounds of the problem (you can do this to x, y, and z, FIRST) BEFORE going on to anything/whatever else (airflow directions, presence/absence of a sidefan, changing to a different cooling solution.) But instead, you jumped straight to the latter, without even really first considering to fully answer the initial former question.

    And then, furthermore, instead, chided me for asking the question in the first place - when that IS a performance requirement. (I've got a run going right now on my old 8-core machine that's taking 168 hours. I had another one that ran for 97 hours, but then I messed up because I forgot to include a limiting ground plane so that the vehicles don't just fall/fly off, so I'll have to add that back in and run it again for another 97 hours.)

    The other way that someone (such as yourself) could have answered the question is "based on a sample space of n=26, of the people who've overclocked their 3930K to 4.5 GHz, here's the mean voltage, here's the mean max temperature while running x, and here's the mean total running time." Then I know that either I will probably get a chip that falls somewhere in between those bounds, OR if I get one that's actually out of bounds, I would be able to quickly assess that it needs to be RMA'd because of whatever fault the chip may have. You know that you COULD have answered the very same question in that fashion too, right?

    It can be as complicated as you want to make it or it can be as simple as you can make it. To say that it depends on the motherboard and VRM...while yes...that's true too, but is that something that I have any control over? No. That would be like if you had said (tongue-in-cheek) "well...it actually depends on the phase of the moon." Great. Nice. Possibly true, but equally useless, because it isn't something that I have any control over. And at the end of those answers, I'm no better off than when I started. (Why bother asking the question in the first place if this is the answer that I'm going to get?) Personally, I prefer the statistical one better. Just personal preference. At least then, at the very least, I can run an ANOVA on it.
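    As an illustration of the statistical summary being asked for here, with entirely made-up vcore numbers: pool the reported 4.5 GHz results and report the sample mean and spread, so a new chip can be judged against the distribution.

```python
import statistics

# Illustration only; these vcore numbers are made up. This is the kind
# of pooled summary the post is asking for: report mean and spread so
# a new chip can be judged against the sample.
vcore_samples = [1.30, 1.33, 1.35, 1.28, 1.32, 1.36, 1.31]  # hypothetical

mean_v = statistics.mean(vcore_samples)
stdev_v = statistics.stdev(vcore_samples)
print(f"n={len(vcore_samples)}, mean={mean_v:.3f} V, stdev={stdev_v:.3f} V")
```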

    re: vehicle dynamics
    Actually, they do have 1D uhhh..."empirical" based vehicle dynamics models as well. CarSim is awesome. Especially when you have it hooked into MATLAB/Simulink and you can just drive around and get a "feel" for how the car responds/reacts.

    Like I said, it would be nice if I can actually solve vehicle dynamics at the FE level and be able to do all of the thermal/fluid/aero/mechanics calculations as a fully coupled system. We're not quite there yet, but I'm pretty sure that SOMEONE's working on it.

    (For example, all of the suspension analysis and whether your LCA will live, is fed back into the vehicle dynamics model as a 1D stiffness equation (for example).)

    Nahhh...it has nothing to do with him being a fellow. Well, maybe it does. (In the sense that he can present information clearly, concisely, and logically.) A very "scientific method" approach, if you will. There's a statement or a hypothesis. We debate the validity of said statement and/or hypothesis. He's run his tests. I just ran mine. And we share and compare notes and results at the end of it. It's very simple.

    If the reason you said you don't know whether my chip will be able to OC to that level and run for that long is that you don't have the data, you can just say "I don't have the data."

    Bun-Bun:
    I only remember my 990X being 2-3 C higher on IBT vs. LinX. Wow...didn't think that the Westmere Xeons were going to be THAT much different. Upwards of 6 C. Damn. And that's with the latest LinX (which is supposed to be AVX compatible, even though my chip doesn't have AVX). And based on this data, I have reason to suspect that if IBT is also AVX-compatible, the difference would still be about the same. Which would still put it at around the 72 C range at 4.6 GHz, and I'm guestimating/ballparking that at 4.5 GHz it might hit as low as 70 C. That's still within my initial guestimate of between 70-85 C. And damn...that Noctua D14 is HUGE. With the Panaflo fans...it's even bigger!!! OMG...that thing's a MONSTER! haha. Awesome!

    (I don't know if IBT has an AVX-compatible version. I just downloaded the newest one they've got.)

    I downloaded the LINPACK FORTRAN 77 source code just before I left, and changed it so that instead of solving a double-precision problem of order 1000, it solves one of order 10000. Compiled it with g77, no optimizations, and then wrote a shell script to launch 8 of them at the same time. I don't think it'll push my system very hard, but we shall see the temps tomorrow.
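    The "launch n independent copies" approach described above can be sketched like this (the `./linpack` binary name is hypothetical - substitute whatever g77 produced):

```python
import multiprocessing
import subprocess

def worker_count(requested=None):
    """Default to one solver process per logical core."""
    return requested or multiprocessing.cpu_count()

def launch_all(binary, n=None):
    # Each copy is an independent single-core solve, so there is no
    # inter-process communication: just start n of them and wait.
    procs = [subprocess.Popen([binary]) for _ in range(worker_count(n))]
    return [p.wait() for p in procs]
```

On the machine in question this would be something like `launch_all("./linpack", 8)`; since each g77 build is single-threaded, the copies simply run side by side on separate cores.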

  14. #64
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    IBT has AVX in its libraries AFAIK.

    And yes the Noctua is huge. It is a well built cooler. I am very impressed with it. The manual says to remove it during transportation as the weight exceeds the socket specifications.


  15. #65
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by Bun-Bun View Post
    IBT has AVX in its libraries AFAIK.

    And yes the Noctua is huge. It is a well built cooler. I am very impressed with it. The manual says to remove it during transportation as the weight exceeds the socket specifications.
    *shrug* no clue.

    Yea...they've had that for a while. Even the original AMD Opterons had the same warning. I think that the socketed Athlons had the same warning too...

  16. #66
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    It is only for the mere fact that he is a fellow, and has nothing to do with whatever he says.

    He told you that he felt insulted by the way you've spoken to everyone, told you that you were incorrect a number of times, and your first reply to him was this:
    Bun-Bun - the question is really simple. It's not that complicated. And while I agree that the direction and the general flow of air is important, as I've mentioned, I've stated why I'm not a big fan of side fans (and actually have the data, both physical and virtual) as to why. Not that I really should or need to justify myself to you guys. But at least I'm working with actual data, not just on gut feel.

    They're not a threat. They would be if they could actually argue on a premise rather than mere "gut feel".
    You instantly put him in a different category for the mere fact that he was a fellow, and became friendly with him despite the fact that he is ashamed by the way you speak to everyone.

    First of all, the only person going totally by gut feel was Leeghoofd...and he even mentioned some numbers though they weren't concrete.

    As soon as you realized that he was a fellow, you preferred him over anyone else here, and you have continued to do so. Every time he mentions that you are wrong about an argument or take something anyone here says WAY out of context, you pretty much turn a blind eye to correcting that problem with the others, because you are okay with it, and he is a fellow you can be friendly with. You only care about HIS advice on your machine, and instead of arguing with him, you listen to him intently, because we're all stupid and he is a fellow whom you assume is obviously much smarter than anyone else when it comes to overclocking and cooling computer components.

  17. #67
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    hm.. I see some problems with your simulation and this thread.

    Side fans by and large do suck; I will agree to that. However, your stipulation that the air reaches socket height and nothing else was incorrect. The fan does not simply rotate until static balance is achieved and then stop moving air. The case has openings and/or other fans. Crossflow will be very weak with a single large side fan, but the point of the side fan (although glossed over by most, including the "inventor/s") is to feed the CPU and/or GPU fan(s) and possibly help the convective process if their heatsinks are bare or they have a backplate, ie: GTX480. The airflow is weak enough that it is not something to consider using to cool a performance system, however, as the extremely low airflow will at best only help the items in the case that have a fan. Everything else relying on convective cooling will operate as if in a fanless environment, which might be fine at stock, but not in a performance system. This is assuming a fully functional system, btw, as a bare-bones case with motherboard would be an incorrect simulation.

    As to fan performance, two fans rated at 100 CFM can be different sizes, or different designs at the same size. One may allow higher pressure, thus retaining higher CFM under restriction, and the other may lose performance much faster. It is up to you to choose the correct fan for your application, though in all honesty, even at 4.5 GHz you don't really need much as long as you maintain enough crossflow in the case to keep the motherboard VRM from sitting in a convective pocket.

    For reference this is what I am currently running:
    Corsair Obsidian 800D with tray and backplane removed and replaced with Mountain Mods SR-2 tray and backplane. Yes, it used to have an SR-2 in it, with Westmeres near 4 GHz, and a pair of NH-D14's
    2600K @ 4.8, 1.26v idle, 1.49v load 4c/8t. Yeah, my cpu kind of sucks... 4.9 4c/4t takes 1.52v to be LinX/AVX stable.
    Noctua NH-D14
    Asus p8p67 pro
    4x4gb Mushkin PC3-10700 (667mhz) @ 800mhz c9
    2x nV GTX480 @ 1.0v 700/2000
    Ikonik Vulcan 1200w
    60gb ssd, 2tb wd black, 2x1tb wd black, 320gb seagate, 120gb usb

    Fans
    CPU: 2x Noctua 140mm on NH-D14 in push-pull, no center fan
    top: 1x 120mm low flow thermaltake
    front: corsair 120mm
    rear: 120mm low flow thermaltake
    bottom: Ikonic 120mm (PSU)
    middle: corsair 120mm
    "side" (actually set on the bottom of the closed case facing at an angle): noctua 120mm (directed at video cards)


    Why are you recompiling LinX?

    My suggestion to you would be to use an NH-D14. If one fan stops turning your cpu/system should be safe.. if your pump quits on the water system, you may be in trouble. 32gb ram should be enough and a PCI-E cache card could be added to avoid use of an SSD for overflow.

    All along the watchtower the watchmen watch the eternal return.

  18. #68
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    The only thing I have ever used side fans for is to feed cooler air into the case and create positive pressure. I find most cases aren't designed well for providing VRM cooling. My Fractal Design Arc Midi case is the best I've had, as the 180mm in the top goes the entire width and pulls air across the VRM heatsinks.


  19. #69
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Even if a case were designed for vertical cooling you still have stagnant areas and most parts of the motherboard receive very little crossflow nearer the PCB level anyways. About all you can do is put fans where they work best, and possibly throw in some ducting work if there's going to be an issue.


  20. #70
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by STEvil View Post
    Why are you recompiling LinX?

    My suggestion to you would be to use an NH-D14. If one fan stops turning your cpu/system should be safe.. if your pump quits on the water system, you may be in trouble. 32gb ram should be enough and a PCI-E cache card could be added to avoid use of an SSD for overflow.
    I'm not recompiling LinX. I'm compiling LINPACK. Why? Because then I would have control over the source code and also the compile options.

    Here are the temps for running 8 LINPACK processes without any optimizations (compiled with g77, straight; no additional flags at all).
    [Attachment: linpack.jpg]

    The other reason for using a pure LINPACK benchmark is that the problem (Gauss elimination) is very simple and very straightforward, which means I'd be able to test pure FP performance without much of anything else running. I haven't gotten around to re-writing the Gauss elimination solver without partial pivoting in FORTRAN, but it might not be a bad idea, since it would be so simple and basic to run. That, and when you control the solution path (at the underlying solver level), you can tell a lot about how the whole program runs, and you can control just about every single aspect of it.
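    The solver being described - dense Gauss elimination without partial pivoting, followed by back-substitution - is simple enough to sketch. This is an illustration of the algorithm only (pure Python says nothing about FP throughput, and skipping the pivot search assumes no zero pivot turns up):

```python
# Dense Gauss elimination WITHOUT partial pivoting, plus
# back-substitution. Algorithm illustration only: we assume no zero
# pivot appears, since there is no row-swap step.
def gauss_solve(a, b):
    n = len(b)
    a = [row[:] for row in a]        # work on copies
    b = b[:]
    for k in range(n - 1):           # forward elimination
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]    # no row swap: a[k][k] must be nonzero
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                    # back-substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))  # -> [1.0, 1.0]
```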

    I had originally tested the OCZ RevoDrive 3 PCIe SSD card. Given the cost differential, it actually didn't perform that much better than the OCZ Vertex 3. Granted, I also wasn't putting it in a Gen 3 PCIe slot, but the Vertex 3 also wasn't on a 6 Gbps SATA controller either. And then the idea was dropped because it was ruled out as not being worth it for the blades.

    There's never enough RAM.

    Good point about the Noctua being a bit more fail-safe. And it can perform quite well (nearly as well as the closed-loop liquid kits), but man... that's a HUGE HSF.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  21. #71
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    Saskatoon (Canada)
    Posts
    1,568
    Quote Originally Posted by STEvil View Post
    Even if a case were designed for vertical cooling, you would still have stagnant areas, and most parts of the motherboard receive very little crossflow near the PCB level anyway. About all you can do is put fans where they work best, and possibly add some ducting if there's going to be an issue.
    In the ARC case, with front, side, top, and back fans, after days of crunching I can open up the case and the only component warm to the touch is the video card. It isn't the quietest of all my builds, though, so for those wanting silence there would still be stagnant air in the case somewhere.

    Yin|Gigabyte GA-Z68X-UD5-B3|Swiftech XT -> GTX240 -> DDC+ w/ Petra's|2600K @ 5.0GHz @1.368V |4 x 4 GB G.Skill Eco DDR3-1600-8-8-8-24|Asus DirectCUII GTX670|120 GB Crucial M4|2 x 2 TB Seagate LP(Raid-0)|Plextor 755-SA|Auzentech Prelude 7.1|Seasonic M12-700|Lian-Li PC-6077B (Heavily Modded)

    Squire|Shuttle SD36G5M| R.I.P.

  22. #72
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by alpha754293 View Post
    I had originally tested the OCZ RevoDrive 3 PCIe SSD card. Given the cost differential, it actually didn't perform that much better than the OCZ Vertex 3. Granted, I also wasn't putting it in a Gen 3 PCIe slot, but the Vertex 3 also wasn't on a 6 Gbps SATA controller either. And then the idea was dropped because it was ruled out as not being worth it for the blades.
    What do you mean by "it actually didn't perform that much better than the OCZ Vertex 3."?

    The RevoDrive 3 is designed for PCIe Gen 2, and with the 4-lane connector the bus would have given it 2 GB/s to work with; that should have been plenty over the ~1 GB/s it is specified for in both read and write, or ~1.5 GB/s for the "X2" version.

    Now, the Vertex 3 would easily have been limited by a 3 Gbps (or slower) SATA controller. Rated at about 550 MB/s read and 500 MB/s write, you would have been stuck at about 280 / 2xx MB/s on a SATA 2 (3 Gbps) controller. Really, though, that would have meant your RevoDrive 3 was about 3.5x faster...

    If your program wasn't constantly reading from and writing to the disk, especially in large chunks, it is understandable why you didn't see a performance increase, but that is something to look at before spending money on parts like that. The same goes for RAM (I'd explained this, with a premise, already) and GPU. If your program can be GPU-accelerated, you would obviously want the highest-performance GPU that is compatible with your program. If you just need an image on the screen, pick the cheapest discrete card, or use integrated graphics if available. (As far as you being wrong when I asked why you wanted it overclocked: you danced around the subject and focused on the fact that I had assumed Intel put integrated graphics into Sandy Bridge-E and not just Sandy Bridge. You never commented on the overclocking, though; you didn't want to make yourself look incorrect...)

    The only use I could find for a side fan, too, was to create positive pressure within the case (just like Bun-Bun). ...I linked you a video showing how much impact positive pressure could have on the direction of airflow within the case, but you were too focused on the angle of the nozzle before the case fans drew the "smoke" inside. (So you could prove me wrong and switch to a subject outside of what we were talking about.)
    Last edited by BeepBeep2; 03-27-2012 at 01:19 PM.
    Smile

  23. #73
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by BeepBeep2 View Post
    What do you mean by "it actually didn't perform that much better than the OCZ Vertex 3."?

    The RevoDrive 3 is designed for PCIe Gen 2, and with the 4-lane connector the bus would have given it 2 GB/s to work with; that should have been plenty over the ~1 GB/s it is specified for in both read and write, or ~1.5 GB/s for the "X2" version.

    Now, the Vertex 3 would easily have been limited by a 3 Gbps (or slower) SATA controller. Rated at about 550 MB/s read and 500 MB/s write, you would have been stuck at about 280 / 2xx MB/s on a SATA 2 (3 Gbps) controller. Really, though, that would have meant your RevoDrive 3 was about 3.5x faster...

    If your program wasn't constantly reading from and writing to the disk, it is understandable why you didn't see a performance increase, but that is something to look at before spending money on parts like that. The same goes for RAM (I'd explained this, with a premise, already) and GPU. If your program can be GPU-accelerated, you would obviously want the highest-performance GPU that is compatible with your program. If you just need an image on the screen, pick the cheapest discrete card, or use integrated graphics if available. (As far as you being wrong when I asked why you wanted it overclocked: you danced around the subject and focused on the fact that I had assumed Intel put integrated graphics into Sandy Bridge-E and not just Sandy Bridge. You never answered my question though...)

    The only use I could find for a side fan, too, was to create positive pressure within the case (just like Bun-Bun). ...I linked you a video showing how much impact positive pressure could have on the airflow within the case, but you were too focused on the angle of the nozzle before the case fans drew the "smoke" inside the case.
    See this thread: http://www.xtremesystems.org/forums/...ing&highlight=

    I was using (or trying to) those drives as swap.

    62.3 MB/s for the Vertex 3.
    78.2 MB/s for the RevoDrive 3.

    A 25.4% difference, and at the time the testing was conducted, the price difference was about $100-150.

    I've already told you that the programs I'm using aren't GPU capable. And in this particular instance, I'm not CPU bound. My bottleneck is because of the swap file that it creates/writes.

    After that round of testing, I ended up putting in an Intel 520 and a Vertex 3. I don't think I've published those results here. I then moved on to another test, running another simulation, one that produces a 10 GB temporary file (written during the course of solving the problem). I forget what the results for that were.

    The biggest swap file that I've EVER gotten was 90 GB (on a system that had 128 GB of RAM). At that point, swap performance trumps pretty much every other performance metric you can find.
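    For reference, dedicating a fast SSD to swap under Linux is a few commands. A sketch (the device name `/dev/sdb1` is a placeholder; substitute your actual swap partition, and note these commands destroy whatever is on it):

```shell
# Format the partition as swap and enable it with a high priority
# so the kernel prefers it over any slower swap devices.
sudo mkswap /dev/sdb1
sudo swapon -p 10 /dev/sdb1

# Verify what swap is active and at what priority.
swapon -s
cat /proc/swaps
```

    The priority flag matters when multiple swap devices exist: the kernel fills higher-priority swap first, so the SSD absorbs the solver's scratch traffic before any spinning disk gets touched.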

    Forgive me, but can you recapitulate the question? I forget what it was you were asking. (I thought I had answered it when I said I was hoping I wouldn't need a discrete graphics card if there was onboard video, but I must have been confusing the 2xxx series with the 3xxx series. *shrug* Meh... it happens. That might also be a result of coming from the server/HPC side of things, where there's usually an ATi Rage (or whatever the current generation of that is) or a Matrox G200 onboard. Sorry.)

    Actually, I did watch that video. Didn't he have the side of the case off??? I don't recall seeing a side fan at all actually, but once again, I could be mistaken.

    *edit*
    fortress.jpg
    You're talking about this video, right? I'm an idiot. Where's the side fan???
    Last edited by alpha754293; 03-27-2012 at 01:25 PM.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  24. #74
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by alpha754293 View Post
    See this thread: http://www.xtremesystems.org/forums/...ing&highlight=

    I was using (or trying to) those drives as swap.

    62.3 MB/s for the Vertex 3.
    78.2 MB/s for the RevoDrive 3.

    A 25.4% difference, and at the time the testing was conducted, the price difference was about $100-150.

    I've already told you that the programs I'm using aren't GPU capable. And in this particular instance, I'm not CPU bound. My bottleneck is because of the swap file that it creates/writes.

    After that round of testing, I ended up putting in an Intel 520 and a Vertex 3. I don't think I've published those results here. I then moved on to another test, running another simulation, one that produces a 10 GB temporary file (written during the course of solving the problem). I forget what the results for that were.

    The biggest swap file that I've EVER gotten was 90 GB (on a system that had 128 GB). At that point, swap performance trumps pretty much every other performance metric you can find.

    Forgive me, but can you recapitulate the question? I forget what it was that you were asking. (I thought that I had answered it when I said that I was hoping that I wouldn't need a discrete graphics card if there was onboard video, but I must have been confusing the 2xxx series with the 3xxx series. *shrug* mehhh...it happens. That might also be the result of coming in from the server/HPC side of things where there's usually like an ATi Rage (or whatever the current generation of that is) or a Matrox G200 onboard. Sorry.)

    Actually, I did watch that video. Didn't he have the side of the case off??? I don't recall seeing a side fan at all actually, but once again, I could be mistaken.
    Those are quite interesting results for the speed when using the drives as swap space ...I wonder how a RAM drive (which can be created in software, provided you have enough RAM) would do in this type of situation.

    EDIT: "could" have

    You had first asked if there were any boards for Socket LGA 2011 that had onboard video and could be overclocked; I told you that all integrated graphics are now on the CPU. I should not have assumed that LGA 2011 had integrated graphics in its processor lineup just like its earlier sibling, LGA 1155. Anyway, I said:
    5. As far as the onboard video, that is on the CPU now. I don't know why you would want to overclock that though, especially if you will be using a discrete GPU. In computing performance comparison, it would be like pitting a 64-thread server with an Intel Atom.
    I only assumed, at the time, that you would want to overclock the IGP in the case that your program took advantage of its computational abilities. Hence the last sentence.

    I understand that it wasn't a question, but if it were, it would have been "Why would you want to overclock an integrated GPU?"...anyway, it still deserved a mention in reply.

    Your reply was as follows:
    5. From what I've been able to find out after posting that, Socket R doesn't even have ANY on-board GPU available. Why would I want to do that? Because there's no point of putting in a discrete GPU when the system is going to be ultimately going headless. And I'll only need that if stuff goes wrong.
    ...as you can see, it wasn't exactly the type of reply I would have expected in response to my earlier statement/questioning. Actually, it seems you did answer me, but your answer didn't quite address what I asked.

    EDIT:
    As for the video, I need to check again. The idea here, though, is that a side fan would help push more air into the case instead of trying to pull it out, creating positive pressure.

    EDIT #2:
    It seems the case is closed, but in the second half of the video they remove the rear exhaust fan. It doesn't matter whether it includes a side fan or not; we're talking about positive versus negative pressure, aren't we? Certainly another intake would help create positive pressure within the case.

    Here is the site article that goes with it though, showing how it can reduce dead spots in airflow...
    http://www.silverstonetek.com/techta...itive&area=usa
    Last edited by BeepBeep2; 03-27-2012 at 02:12 PM.
    Smile

  25. #75
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    BeepBeep2:

    Uhh... I tried it with a RAMdrive. Surprisingly, it did even worse. (I don't think I've published those results here either.) Like... a LOT worse. Probably because it was a "software" drive kind of deal.

    It would likely be different with different RAMdisk software, and I only tried the one. Also, the system I was testing it on only had 8 GB of RAM, so I only gave the RAMdisk half of that.

    I would also think that if I were to do this kind of test in Linux I would end up with a different result, but I don't know if Linux will actually let me put the swap partition on a RAMdrive. *shrug* Probably, but it might take some finessing with the installation bootloader to make it play nice, so I didn't spend too much time on it.

    (The original intent with the system that had 128 GB of RAM was to allocate half of it as a RAMdrive/swap.)
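    For what it's worth, Linux can put swap on a RAM-backed block device via the `brd` kernel module, and since it can be done after boot, the bootloader doesn't need any finessing at all. A sketch, assuming a 4 GiB ramdisk (`rd_size` is in KiB per the brd driver; sizes and priority here are illustrative):

```shell
# Create one 4 GiB RAM-backed block device, /dev/ram0.
# rd_size is in KiB: 4 GiB = 4 * 1024 * 1024 KiB.
sudo modprobe brd rd_nr=1 rd_size=4194304

# Turn it into swap and enable it at the highest priority so the
# kernel uses it before any disk-backed swap.
sudo mkswap /dev/ram0
sudo swapon -p 32767 /dev/ram0
```

    Of course, swapping to RAM mostly makes sense as a benchmarking exercise; for a real workload that memory is usually better left to the application itself.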

    I never actually asked if the onboard video could be overclocked. I just asked if I could overclock an LGA2011 CPU that came with onboard video. And yes, there is a difference between the two: the former is (CPU+GPU)*overclock, whereas the latter is (CPU*overclock)+GPU. Totally different things.

    I thought that some of those IGPs were on their own separate bus/die or something, such that you could OC the CPU independent of the GPU. *shrug* (I could be wrong/mistaken again, or that might be old news from older processors.) I don't follow Sandy Bridge development that closely. But by the same token, it doesn't really matter, because the 3xxx series doesn't even have an IGP.

    I was looking for onboard video since I already knew this was going to run headless, so I didn't really see the point of getting a dedicated GPU I wasn't going to use. (Most of my other systems now also run headless.)

    Whoops... haha... I thought I had answered that, but that must have been in the post that got lost when I clicked on the attachment link in the same window instead of spawning a new tab for it. My bad. No, none of my programs can really use GPUs. (One sort of can, but it's limited in the size and scope of problems for now. And as far as I know, it's Tesla-only (*with the fine print being that it may work on consumer-class GPUs, but I have no documentation for it because it's not officially supported that way).)

    Regardless, I had no intention of OCing the GPU (discrete or otherwise) at all.

    From the screenshot I took above, it doesn't look like the case is closed OR that there's a side fan at all. But like I said, if you can point it out to me, that'd be much appreciated. (I must be going blind, because I can't seem to find it; there's nothing in that picture that resembles a side fan.)

    "Certainly another intake would help create positive pressure within the case."

    Uhh... not necessarily. (Skipping over the conservation-of-mass proof, and dropping the time-derivative term as well.)
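    A sketch of the conservation-of-mass argument being alluded to: for steady, incompressible flow through the case treated as a control volume, the continuity equation forces the net volume flux through the boundary to vanish, so total inflow always equals total outflow. Adding an intake fan therefore shifts where the balance point sits (and can raise the internal pressure), but it does not by itself guarantee positive pressure relative to ambient.

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\quad\Longrightarrow\quad
\oint_{\partial V} \mathbf{u} \cdot \mathbf{n}\, dA = 0
\qquad (\rho = \text{const},\ \partial_t = 0)
```

    Here $\mathbf{u}$ and $\rho$ are the velocity and density from the momentum equation in the signature above, $\partial V$ is the case boundary, and $\mathbf{n}$ its outward normal.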

    Nice animations. Hmm... I don't think it's quite that simple, though, and I don't know if that's entirely accurate either, but OK.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}
