
Thread: The official GT300/Fermi Thread

  1. #476
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Fermi is an architecture, not a card. Think Nehalem vs i7 920.

    "Use it as they see fit" is nonsense. A product only needs to work as the manufacturer intends it to.

  2. #477
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by trinibwoy View Post
    Fermi is an architecture, not a card. Think Nehalem vs i7 920.

    "Use it as they see fit" is nonsense. A product only needs to work as the manufacturer intends it to.
    You know what the GP part of GPGPU means. NV says it's more important than the CPU and will do the CPU's work, yet the CPU doesn't come with vendor-specific GPU limitations.

  3. #478
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by Chumbucket843 View Post
    CPU PhysX is WAY slower than GPU PhysX.
    Naah you don't say.

  4. #479
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Final8ty View Post
    Naah you don't say.
    Why would you want to run PhysX on a CPU? That's like saying I want to do HPC tasks on a netbook.

    hmmm....
    http://www.xtremesystems.org/forums/...6&postcount=12

  5. #480
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by Chumbucket843 View Post
    Why would you want to run PhysX on a CPU? That's like saying I want to do HPC tasks on a netbook.

    hmmm....
    http://www.xtremesystems.org/forums/...6&postcount=12

    Which has nothing to do with my point about communication.

  6. #481
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Chumbucket843 View Post
    Why would you want to run PhysX on a CPU? That's like saying I want to do HPC tasks on a netbook.

    hmmm....
    http://www.xtremesystems.org/forums/...6&postcount=12
    Not that one would necessarily want to, but PhysX used to run on the CPU almost exclusively. Ageia developed an add-in card for hardware physics acceleration, but its API also supported plain software execution on the CPU, and it has been used in many game engines that way. In fact, the PhysX API will still run on the CPU in the absence of an nVidia GPU; in that situation you have no choice but to do so.
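    Conceptually it's just a runtime choice of backend. A minimal sketch of the idea (made-up names, not the actual PhysX SDK API):

    Code:
    // Illustrative only: hypothetical names, not the real PhysX SDK API.
    #include <memory>

    struct PhysicsBackend {
        virtual ~PhysicsBackend() = default;
        virtual void simulate(float dt) = 0;
    };

    struct CpuSolver : PhysicsBackend {            // software path, always available
        void simulate(float dt) override { /* integrate bodies on the CPU */ }
    };

    struct CudaSolver : PhysicsBackend {           // hardware path, needs an nVidia GPU
        void simulate(float dt) override { /* dispatch kernels to the GPU */ }
    };

    // Stub capability check; a real SDK would probe the driver/device here.
    bool cudaDeviceAvailable() { return false; }

    std::unique_ptr<PhysicsBackend> createBackend() {
        if (cudaDeviceAvailable())
            return std::make_unique<CudaSolver>();
        return std::make_unique<CpuSolver>();      // no nVidia GPU -> fall back to software
    }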
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
    How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft

  7. #482
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by Chumbucket843 View Post
    [single core] CPU PhysX is WAY slower than GPU PhysX.
    Revised. We really don't know much about how a well-optimized CPU implementation of PhysX would run.
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  8. #483
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Don't quote me on this, but Havok and "physics" have been around since 2000.

    I don't understand this obsession with physics that HAS TO run on the GPU.
    Remember HL2's gravity gun, and the various see-saw and floating barrel puzzles?
    What about Crysis, with its destructible buildings and the way vehicles tumble when punched?
    And many games have had waving cloth for years.

    Recall that GPU physics was already around on the X1800XT. Also remember those physics PCIe cards for Ghost Recon 2.

    Long long ago, in a galaxy far far away, games like Unreal Tournament supported D3D, OpenGL, Glide, Metal etc. Lots of games had speedups with 3DNow! or MMX or SSE. But nVidia PhysX doesn't just make things faster; in games like Mirror's Edge, it adds more glass shards when glass breaks and waving banners/cloth. Well, you can certainly do that on the CPU (however slowly; make it an option). I imagine some consumers feel shafted that they get an incomplete game experience because of the video card choice they made years ago. If programmers could add support for multiple standards 10 years ago, why not today?

    PhysX is dead. Nvidia is supporting OpenCL. Apple, Intel, MS and AMD are also supporting OpenCL. Move along, move along..

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  9. #484
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by ***Deimos*** View Post
    And many games have had waving cloth for years.
    Yeah, as a static animation, not simulated in real time or interactive.
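    The difference shows up directly in code: a canned animation just plays back stored vertex positions, while a real-time cloth sim recomputes them every frame with something like this minimal Verlet step (illustrative only, not from any shipping engine):

    Code:
    // Illustrative only: one Verlet integration step for a grid of cloth particles.
    // Gravity and damping values are arbitrary; spring constraints and collisions omitted.
    #include <vector>

    struct Particle { float x, y, z, px, py, pz; };   // current and previous positions

    void integrate(std::vector<Particle>& cloth, float dt) {
        const float gravity = -9.8f, damping = 0.99f;
        for (auto& p : cloth) {
            float vx = (p.x - p.px) * damping;
            float vy = (p.y - p.py) * damping;
            float vz = (p.z - p.pz) * damping;
            p.px = p.x; p.py = p.y; p.pz = p.z;
            p.x += vx;
            p.y += vy + gravity * dt * dt;
            p.z += vz;
            // A real cloth sim would now enforce distance constraints between
            // neighbouring particles and resolve collisions with the scene.
        }
    }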
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  10. #485
    Xtreme Cruncher
    Join Date
    Jul 2006
    Posts
    1,374
    OpenCL is a programming/extension platform, but it does not include the libraries and such needed to put physics on the GPU. PhysX, in this sense, is the full package, and so is Havok. Either could potentially be ported to OpenCL with sufficient effort, but for now they are limited to the GPU and the CPU, respectively. Game companies will use whatever platform integrates best with their engine and requires the least effort. Neither is going anywhere; they'll both be supported and used for the next few years, and to what extent only time will tell.

    EDIT: GPU physics is being promoted because of speed. In theory, you can do more with physics than you could under normal circumstances on the CPU. Having PhysX available on nvidia graphics is conceptually no different from ATI having Eyefinity. Nvidia cards won't be able to split the picture across 3+ monitors like ATI will. It does put ATI at a disadvantage for certain games, but that is the way things go. Each company will be trying to promote its respective features and making them more useful as time goes on. Whether or not game devs will seek some sort of balance in these instances really comes down to how much they stand to gain from doing so.

    Quote Originally Posted by ***Deimos*** View Post
    Don't quote me on this, but Havok and "physics" have been around since 2000.

    I don't understand this obsession with physics that HAS TO run on the GPU.
    Remember HL2's gravity gun, and the various see-saw and floating barrel puzzles?
    What about Crysis, with its destructible buildings and the way vehicles tumble when punched?
    And many games have had waving cloth for years.

    Recall that GPU physics was already around on the X1800XT. Also remember those physics PCIe cards for Ghost Recon 2.

    Long long ago, in a galaxy far far away, games like Unreal Tournament supported D3D, OpenGL, Glide, Metal etc. Lots of games had speedups with 3DNow! or MMX or SSE. But nVidia PhysX doesn't just make things faster; in games like Mirror's Edge, it adds more glass shards when glass breaks and waving banners/cloth. Well, you can certainly do that on the CPU (however slowly; make it an option). I imagine some consumers feel shafted that they get an incomplete game experience because of the video card choice they made years ago. If programmers could add support for multiple standards 10 years ago, why not today?

    PhysX is dead. Nvidia is supporting OpenCL. Apple, Intel, MS and AMD are also supporting OpenCL. Move along, move along..
    Last edited by xVeinx; 10-04-2009 at 11:00 PM.

  11. #486
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Quote Originally Posted by Chumbucket843 View Post
    It's in a CUDA demo. I can post a video of it running on the GPU if you want proof. Some of the stuff was from movies; I don't know why they put that in there.
    No mate, I doubt that's a CUDA simulation. If it's really used as a promo video by nvidia, it's a lie; that's FumeFX, mate. I use it at work, and it has exactly the look and feel of a FumeFX simulation.

    Here is an example so you can see how similar the teapot scene in your link looks to a FumeFX simulation:

    http://www.youtube.com/watch?v=4CuJQZ78YQc

    Also, I doubt nvidia would show scenes from movies claiming they did it on their GPU, because then a lot of people would realize it's just a bunch of BS.

    For example, in that supposed demo you can see scenes from Transformers 2 and Star Trek. Both movies had their VFX done by ILM (Industrial Light & Magic), which uses a pipeline centered around Maya/Renderman and CPU processing. Those are Maya simulations, Maya fluids, which are calculated on CPUs; that is definitely not GPU simulation.

    I work in this field; that is not an nvidia presentation. If it actually is, it's a big bunch of lies and BS, like the fake Fermi card.

  12. #487
    Xtreme Addict
    Join Date
    Feb 2008
    Location
    America's Finest City
    Posts
    2,078
    Quote Originally Posted by Florinmocanu View Post
    No mate, I doubt that's a CUDA simulation. If it's really used as a promo video by nvidia, it's a lie; that's FumeFX, mate. I use it at work, and it has exactly the look and feel of a FumeFX simulation.

    Here is an example so you can see how similar the teapot scene in your link looks to a FumeFX simulation:

    http://www.youtube.com/watch?v=4CuJQZ78YQc

    Also, I doubt nvidia would show scenes from movies claiming they did it on their GPU, because then a lot of people would realize it's just a bunch of BS.

    For example, in that supposed demo you can see scenes from Transformers 2 and Star Trek. Both movies had their VFX done by ILM (Industrial Light & Magic), which uses a pipeline centered around Maya/Renderman and CPU processing. Those are Maya simulations, Maya fluids, which are calculated on CPUs; that is definitely not GPU simulation.

    I work in this field; that is not an nvidia presentation. If it actually is, it's a big bunch of lies and BS, like the fake Fermi card.
    Then the CTO of ILM is a liar, and when he said they use Nvidia GPUs for rendering things like particles... he was lying to all of us.
    Quote Originally Posted by FUGGER View Post
    I am magical.

  13. #488
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Their pipeline is based on Maya/Renderman. If he made such a statement, show me a link. And not just a statement: if he said such a thing, did he say which software they used? What software did they use to generate the particles, etc.? If he just said it, without any details, it's just marketing. Lots of studios make such statements to get free workstations, software, special treatment and support from companies.

  14. #489
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    OpenCL is really a very good standard. I tried the OpenCL SDK from AMD and was surprised how good it was. I tried to offload some work from the CPU to the GPU, but it crashed when I tried. Seems my skills are not good enough.
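    For anyone curious, the host-side boilerplate for offloading even a trivial kernel with the OpenCL 1.x C API looks roughly like this (a minimal vector-add sketch; in my experience a "crash" is usually one of these calls failing with its error code ignored):

    Code:
    // Minimal OpenCL 1.x host sketch: offload a vector add to the first GPU device.
    // Every call returns an error code; checking them is most of the battle.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    static const char* kSrc =
        "__kernel void vadd(__global const float* a, __global const float* b, __global float* c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    #define CHECK(e, msg) if ((e) != CL_SUCCESS) { std::printf("%s failed: %d\n", msg, (int)(e)); return 1; }

    int main() {
        const size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        cl_int err;
        cl_platform_id platform; cl_device_id device;
        CHECK(clGetPlatformIDs(1, &platform, nullptr), "clGetPlatformIDs");
        CHECK(clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr), "clGetDeviceIDs");

        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
        CHECK(err, "clCreateContext");
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);
        CHECK(err, "clCreateCommandQueue");

        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), a.data(), &err);
        CHECK(err, "clCreateBuffer a");
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), b.data(), &err);
        CHECK(err, "clCreateBuffer b");
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);
        CHECK(err, "clCreateBuffer c");

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
        CHECK(err, "clCreateProgramWithSource");
        CHECK(clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr), "clBuildProgram");

        cl_kernel k = clCreateKernel(prog, "vadd", &err);
        CHECK(err, "clCreateKernel");
        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

        CHECK(clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr), "clEnqueueNDRangeKernel");
        CHECK(clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr), "clEnqueueReadBuffer");

        std::printf("c[0] = %f\n", c[0]);   // expect 3.0; resource releases omitted for brevity
        return 0;
    }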

    Quote Originally Posted by 003 View Post
    Yeah, as a static animation, not simulated in real time or interactive.
    Not really static; the thing did move, but it was predictable and unrealistic.
    Coming Soon

  15. #490
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    About GPU physics, my personal opinion right now is that I can't see the point of them. That said, it's not a completely settled opinion.

    For starters, I'm in love with the GPGPU idea as a way to give 3D rendering hardware uses beyond gaming and 3D modelling. It's a shame that so much computing potential (and an architecture that can be better suited than the CPU to certain problems) is being wasted. GPGPU avoids that stupid situation.

    But: given that games are a graphically bottlenecked kind of software (and they have been until now), moving computing load from the CPU to the GPU seems a rather unnatural move...

    Yeah, that architecture is better suited than the CPU to certain physics tasks, and if you use it for them you can do more complex calculations than on a CPU. But then what about graphics? If you don't want to severely limit the graphical side of the game, you end up restricted to some light effects instead of the really complex ones you could achieve by devoting most of the GPU's power to them. And that small computing load could be run on the free, otherwise wasted CPU while the GPU computes graphics, so in the end you may actually improve performance by running physics on the CPU. Even if not, the difference wouldn't be as huge and black-and-white as the hype suggests, I think.

    Take the already stale example of Batman: AA. It has very good physics (so some people say) that make GPGPU physics computing worthwhile. Except that if you remove the single-thread constraint and run the physics multithreaded on a multi-core CPU, the game runs great with the same physics on the CPU (so some other people say). And that's with a library designed to sell GPU acceleration (I'm sure similar, even if not exactly equal, visual effects could be achieved with different algorithms better suited to a CPU, with a huge performance gain).
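    That single-thread constraint is mostly an implementation choice. A rough sketch (illustrative only, not Batman's actual code or the PhysX SDK) of spreading independent rigid-body updates across CPU cores:

    Code:
    // Illustrative only: splitting independent body updates across hardware threads.
    #include <algorithm>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Body { float pos[3], vel[3]; };

    void integrateRange(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
        for (size_t i = begin; i < end; ++i) {
            bodies[i].vel[1] += -9.8f * dt;                 // gravity
            for (int a = 0; a < 3; ++a)
                bodies[i].pos[a] += bodies[i].vel[a] * dt;
        }
    }

    void stepParallel(std::vector<Body>& bodies, float dt) {
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        size_t chunk = (bodies.size() + workers - 1) / workers;
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            size_t begin = w * chunk;
            size_t end = std::min(bodies.size(), begin + chunk);
            if (begin >= end) break;
            pool.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
        }
        for (auto& t : pool) t.join();
        // Collision resolution between chunks still needs a sequential or carefully
        // synchronized pass; that is where the real engineering effort goes.
    }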

    Now that open, non-vendor-limited GPU-accelerated physics is coming (OpenCL GPU-accelerated Havok, Bullet Physics and Pixelux DMM), the usual limitations of CUDA PhysX are going to disappear, and we should see much more GPU-accelerated physics in games if it really is a win-win situation. But to be honest, I think PhysX is more of a commercial move by NVIDIA to sell CUDA (and therefore NVIDIA cards), and the Open Physics Initiative is more of a commercial move by AMD to neutralize the competitor's PhysX selling point.

    As you can see, I'm a little skeptical about GPGPU use in the always graphically bottlenecked videogame arena. It just doesn't seem logical to me... maybe in some specific cases, yes, but not as the norm.

    Quote Originally Posted by 003 View Post
    Yeah, as a static animation, not simulated in real time or interactive.
    Do you really think you need to run a soft body on a GPU to avoid relying on pre-scripted animation? I can run a Bullet Physics soft-body demo on a single-core 3GHz P4 (yeah, Prescott) with no problems and a high framerate, including a demo with several dozen sheets of soft material falling to the ground, getting entangled and so on.
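    Bullet's CPU path is also trivial to set up; from memory, the standard rigid-body world boils down to something like this (the soft-body demos layer btSoftRigidDynamicsWorld and the soft-body helpers on top of the same pattern):

    Code:
    // Minimal Bullet (CPU) world: one box falling under gravity, stepped at 60 Hz.
    #include <btBulletDynamicsCommon.h>
    #include <cstdio>

    int main() {
        btDefaultCollisionConfiguration config;
        btCollisionDispatcher dispatcher(&config);
        btDbvtBroadphase broadphase;
        btSequentialImpulseConstraintSolver solver;
        btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
        world.setGravity(btVector3(0, -9.8f, 0));

        btBoxShape box(btVector3(0.5f, 0.5f, 0.5f));
        btDefaultMotionState motion(btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 10, 0)));
        btVector3 inertia(0, 0, 0);
        box.calculateLocalInertia(1.0f, inertia);
        btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(1.0f, &motion, &box, inertia));
        world.addRigidBody(&body);

        for (int i = 0; i < 120; ++i)                      // two simulated seconds
            world.stepSimulation(1.0f / 60.0f, 10);

        std::printf("height after 2s: %f\n", body.getWorldTransform().getOrigin().getY());
        world.removeRigidBody(&body);
        return 0;
    }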
    Last edited by Farinorco; 10-05-2009 at 01:32 AM.

  16. #491
    Registered User
    Join Date
    Dec 2007
    Posts
    14
    I just wonder when NV plans on releasing the GT300 card and how they plan on launching it. ATI seems prepared, with a few more cards coming out soon, and I can't wait to see what the X2 cards will be like. I know NV cards will be fast, but it looks like it could be a tough start to the season for NV.

  17. #492
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Farinorco, go play the Batman demo yourself; you can download it from the nvidia server at high speed.
    The physics effects in it are NOT great at all... not at all...
    There are 1-2 dozen newspapers flying off the ground if you fight in some scenes, and it's not realistic at all...
    There is volumetric fog, woooooa, haven't seen that before :P
    And then there are some other effects that you really don't even notice unless you play the same level several times with PhysX on and off...

    Mirror's Edge was still the best PhysX implementation so far, and even there it didn't really matter or change the gameplay...
    I wish they'd focus more on getting physics done right instead of fighting over WHICH processor to run it on.

  18. #493
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Chumbucket843 View Post
    Why would you want to run PhysX on a CPU? That's like saying I want to do HPC tasks on a netbook.

    hmmm....
    http://www.xtremesystems.org/forums/...6&postcount=12


    LOL...


    CPU physics is more capable...! Where have you been? All that Nvidia's PhysX offers is ancillary "fluff" (i.e. glass breaking, tiles breaking, paper shuffling on the floor, etc.); it's all superficial to what's actually going on in the game. Eye candy!


    That^^ puffery is not the type of physics we are asking for and demanding in games. Batman's/Mirror's Edge's overdone, superficial PhysX is not what we are discussing in this thread. Carmack, DICE, etc. have all been building real physical environments on the CPU for YEARS...! You know, actual physical objects. Like a piece of fuselage being torn off a fighter by AA and landing on the road in front of you, so you run it over in the jeep, only to have it kick up and kill the others in the jeep behind you....!

    We've had real deformable objects in games for years. Developers just haven't had the headroom to make heavy use of physics, or the power to make full use of multi-threading, so that everything within a scene is basically its own object (Bulldozer?). Just look at Battlefield 1943... massive use of CPU physics! Or (again) THIS video.


    Nvidia can't touch that!


    The reason nVidia is marketing flowing capes, ancillary paper, broken tiles and such, is because they know it would take quad-SLI to have real physics.

    The Intel Core i7 920 is only $240 folks... less than a GTX285. Think on it!


    PhysX is no different from Havok, except Nvidia bought it and started supporting it minimally on their own video cards, so you didn't need a separate physics card... back when dual-core CPUs were just rumors. Now almost all of us have 8-threaded rigs...

    You would need tri-SLI to equal what the i7 can do (i.e. the Velocity physics engine video).
    Last edited by Xoulz; 10-05-2009 at 03:42 AM.

  19. #494
    Xtreme Member
    Join Date
    Oct 2004
    Posts
    171
    This is so embarrassing:
    http://www.fudzilla.com/content/view/15813/1/

    "Judging from we've learned in the last few days we are talking about a handful of boards if not even less than that."

    YIELDS ARE FINE!!! NOTHING TO SEE HERE...

    Edit: I want to note that I'm not laughing about Nvidia being late (that's not good for us customers), but about Fudzilla swallowing every bit of PR they get from Nvidia and in the end admitting that they were completely wrong.
    Last edited by mibo; 10-05-2009 at 06:19 AM.

  20. #495
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Xoulz View Post
    LOL...


    CPU physics is more capable...! Where have you been? All that Nvidia's PhysX offers is ancillary "fluff" (i.e. glass breaking, tiles breaking, paper shuffling on the floor, etc.); it's all superficial to what's actually going on in the game. Eye candy!
    They have a demo showing very nice water movement; just because one game only uses a few things does not mean you can't do more.
    That^^ puffery is not the type of physics we are asking for and demanding in games. Batman's/Mirror's Edge's overdone, superficial PhysX is not what we are discussing in this thread. Carmack, DICE, etc. have all been building real physical environments on the CPU for YEARS...! You know, actual physical objects. Like a piece of fuselage being torn off a fighter by AA and landing on the road in front of you, so you run it over in the jeep, only to have it kick up and kill the others in the jeep behind you....!
    Not very uncommon. Go play the last level of HL2: ragdolls at their funnest.
    We've had real deformable objects in games for years. Developers just haven't had the headroom to make heavy use of physics, or the power to make full use of multi-threading, so that everything within a scene is basically its own object (Bulldozer?). Just look at Battlefield 1943... massive use of CPU physics! Or (again) THIS video.


    Nvidia can't touch that!


    The reason nVidia is marketing flowing capes, ancillary paper, broken tiles and such, is because they know it would take quad-SLI to have real physics.

    The Intel Core i7 920 is only $240 folks... less than a GTX285. Think on it!
    That video used all 8 threads of the processor. Expect the average person to be on a dual core or quad, and expect the game to already take up 50-80% of it (depending on core count, how CPU-limited the game is, and what settings they play at). What's left is a CPU 2-4x weaker than an i7 with 2/3 of its power used up before physics, so the average person can do only about 1/10th of what was shown while actually playing a game.

    PhysX is no different from Havok, except Nvidia bought it and started supporting it minimally on their own video cards, so you didn't need a separate physics card... back when dual-core CPUs were just rumors. Now almost all of us have 8-threaded rigs...

    You would need tri-SLI to equal what the i7 can do (i.e. the Velocity physics engine video).
    So much misinformation. Not all of us have 8-threaded rigs; the i7 was 1% of Intel's CPU sales last year. And how many of us have a 2P AMD rig? Like 3 or 4 of us? And where do you see a real comparison between CPU and GPU physics, where some unbiased party did the review? Please show us where this tri-SLI claim comes from.

    So far most of your posts are full of claims with no sources, never backed up. Go take a look at what real GPU demos can do.

    On another point, they need to start making physics scale properly. At what point do I care more about how cool a flag waves than about a locked 60fps? A good physics engine should know how to load balance properly so we can decide its importance. We've been GPU limited, CPU limited, and now we're going to see physics limiting framerates where the only solution is to drop the quality and replay the map (unacceptable in my opinion).
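    For what it's worth, that kind of load balancing doesn't have to be exotic. A hypothetical sketch (made-up names and thresholds, not any shipping engine) of scaling cosmetic physics against a frame-time budget:

    Code:
    // Hypothetical frame-budget scaling: spend leftover frame time on "fluff" physics,
    // and cut it first when the frame runs long. Names and numbers are illustrative only.
    #include <algorithm>
    #include <cstdio>

    struct PhysicsQuality {
        int particleBudget = 4000;       // debris, paper, glass shards...
        int clothIterations = 8;         // constraint solver iterations for cloth
    };

    void adaptQuality(PhysicsQuality& q, float frameMs, float targetMs = 16.6f) {
        if (frameMs > targetMs * 1.05f) {                  // over budget: shed cosmetic work first
            q.particleBudget = std::max(500, q.particleBudget - 500);
            q.clothIterations = std::max(2, q.clothIterations - 1);
        } else if (frameMs < targetMs * 0.85f) {           // headroom: spend it on detail
            q.particleBudget = std::min(20000, q.particleBudget + 250);
            q.clothIterations = std::min(16, q.clothIterations + 1);
        }
    }

    int main() {
        PhysicsQuality q;
        float fakeFrames[] = {14.0f, 15.0f, 22.0f, 25.0f, 18.0f, 15.5f};
        for (float ms : fakeFrames) {
            adaptQuality(q, ms);
            std::printf("frame %.1f ms -> particles %d, cloth iters %d\n",
                        ms, q.particleBudget, q.clothIterations);
        }
        return 0;
    }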

  21. #496
    Xtreme Addict
    Join Date
    Jul 2002
    Location
    [M] - Belgium
    Posts
    1,744
    PhysX will become a success the moment it is required for a gameplay-changing implementation. As long as it remains limited to "fluff" (visual effects with no gameplay impact), the chances of PhysX being a real pro for NVIDIA hardware are very low.


    Belgium's #1 Hardware Review Site and OC-Team!

  22. #497
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by saaya View Post
    Mirror's Edge was still the best PhysX implementation so far, and even there it didn't really matter or change the gameplay...
    What?? I have Mirror's Edge and have seen demos of Batman: AA, and the Batman PhysX looks way cooler than Mirror's Edge. In ME, all you get are some hanging flaps and random tarps. Batman: AA actually adds to the environment.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  23. #498
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Quote Originally Posted by Manicdan View Post
    They have a demo showing very nice water movement; just because one game only uses a few things does not mean you can't do more.

    Not very uncommon. Go play the last level of HL2: ragdolls at their funnest.

    That video used all 8 threads of the processor. Expect the average person to be on a dual core or quad, and expect the game to already take up 50-80% of it (depending on core count, how CPU-limited the game is, and what settings they play at). What's left is a CPU 2-4x weaker than an i7 with 2/3 of its power used up before physics, so the average person can do only about 1/10th of what was shown while actually playing a game.


    So much misinformation. Not all of us have 8-threaded rigs; the i7 was 1% of Intel's CPU sales last year. And how many of us have a 2P AMD rig? Like 3 or 4 of us? And where do you see a real comparison between CPU and GPU physics, where some unbiased party did the review? Please show us where this tri-SLI claim comes from.

    So far most of your posts are full of claims with no sources, never backed up. Go take a look at what real GPU demos can do.

    On another point, they need to start making physics scale properly. At what point do I care more about how cool a flag waves than about a locked 60fps? A good physics engine should know how to load balance properly so we can decide its importance. We've been GPU limited, CPU limited, and now we're going to see physics limiting framerates where the only solution is to drop the quality and replay the map (unacceptable in my opinion).
    I think you're missing the point, mate. The thing with the evolution of hardware and software is that you always try to do your best. If developers only did things that run perfectly on present hardware, we would be at a standstill; things would not evolve.

    We, the users, want cool new things, things which push us to upgrade. Remember FEAR 1? Or Oblivion? Or Doom 3? Or Crysis? Those are applications which barely ran on average Joe hardware, and they pushed hardware evolution.
    I think it's safe to assume that doing really complex physics that really stresses a CPU or GPU would only do this industry good, but doing it on a CPU guarantees that everyone can take advantage of those effects, while doing it in a closed API like PhysX means only people with nvidia cards get to benefit.

    I don't mind nvidia keeping it as a closed API; it's good for them and kudos to them for that, but personally I like open standards, standards that don't force me to buy specific hardware to enjoy them. That's the fun of PCs anyway: you have millions of configurations, but the software runs on all of them.

  24. #499
    Xtreme Addict
    Join Date
    Feb 2004
    Posts
    1,176

    Thumbs up

    Quote Originally Posted by Florinmocanu View Post
    I think you're missing the point, mate. The thing with the evolution of hardware and software is that you always try to do your best. If developers only did things that run perfectly on present hardware, we would be at a standstill; things would not evolve.

    We, the users, want cool new things, things which push us to upgrade. Remember FEAR 1? Or Oblivion? Or Doom 3? Or Crysis? Those are applications which barely ran on average Joe hardware, and they pushed hardware evolution.
    I think it's safe to assume that doing really complex physics that really stresses a CPU or GPU would only do this industry good, but doing it on a CPU guarantees that everyone can take advantage of those effects, while doing it in a closed API like PhysX means only people with nvidia cards get to benefit.

    I don't mind nvidia keeping it as a closed API; it's good for them and kudos to them for that, but personally I like open standards, standards that don't force me to buy specific hardware to enjoy them. That's the fun of PCs anyway: you have millions of configurations, but the software runs on all of them.
    Completely agree.

  25. #500
    Xtreme Legend
    Join Date
    Sep 2002
    Location
    Finland
    Posts
    1,707
    Seven years ago NVIDIA tested packaged GPUs from the fab like this:





    We know Fermi is working silicon:



    So I wonder what today's Fermi test board looks like. Does it really have a lot of wires sticking out, as well as test modules, and look like a character from Terminator as Fudo describes, or is it something similar to the 2002 test board?
    Favourite game: 3DMark
    Work: Muropaketti.com - Finnish hardware site
    Views and opinions about IT industry: Twitter: sampsa_kurri
