
Thread: Lucid Hydra on Smackover 2

  1. #101
    Xtreme Member
    Join Date
    Mar 2008
    Location
    germany-münster
    Posts
    375
    Well, that problem can be solved...

    Lucid simply has to make a mini-bench for each supported game (or certain effects etc.) and run it on each installed graphics card (only while the game is installed... saves time); databases would ignore OC, so that's a no-go.

    The bench can determine which card gets which parts of the expected renderings (frames): ideally the parts where it's faster than the other installed cards, or the parts actually supported by the card (like a DX9 card getting all the DX9 effects in a DX11 game, as long as the DX11 card can't handle parts of the DX9 code faster by itself), packing it all into some kind of log.

    Then every single frame has to be analyzed, split up across the cards, rendered and put together again by Hydra.
    That's quite easy with a small scene and two identical cards (like in the demo; I guess they split up the scene manually, though I hope they didn't), but executing what I described would be quite a bit more demanding.

    So doing it right is massive work (for the chip and the developers), but with a little help from game, OS and hardware developers... possible.
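    (A minimal sketch of the kind of per-card, per-effect score table such a mini-bench could produce; the card and workload objects here are hypothetical stand-ins, not Lucid's actual method.)

```python
# Hypothetical sketch only: time a tiny test workload on every installed card
# and store the results per game/effect, so a splitter could later hand each
# card the work it is fastest at. None of this reflects Lucid's real design.
import time

def run_mini_bench(card, workload):
    """Render a small test scene on `card` (stand-in call) and return FPS."""
    start = time.perf_counter()
    card.render(workload)                     # hypothetical render entry point
    elapsed = time.perf_counter() - start
    return workload.frame_count / elapsed

def build_profile(cards, workloads):
    """Return {workload_name: {card_name: fps}} for later frame splitting."""
    return {
        wl.name: {c.name: run_mini_bench(c, wl) for c in cards}
        for wl in workloads
    }
```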
    system:

    Phenom II 920 3.5Ghz @ 1.4v, benchstable @ over 3,6Ghz (didnt test higher)
    xigmatek achilles
    sapphire hd4870 1gb @ 820 1020
    Gigabyte GA-MA790GP-DS4H
    8gb a-data 4-4-4-12 800
    x-fi xtrememusic
    rip 2x 160gb maxtor(now that adds up to 4...)
    320gb/250gb/500gb samsung

  2. #102
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Phoenix, AZ
    Posts
    866
    Let's all give up, guys.


    We will admit the chip is a big failure, the company just wants to go bankrupt, Intel is being lied to and so are we, and they are doing this to SHOW the world how stupid we are. We get it. The end.

    What's so wrong with having faith? How did faith turn into a bad thing? You can go on and on about how much more knowledge you have about multi-GPU, but for some reason I'm guessing that Lucid knows just a little more than some of you.
    Last edited by Decami; 12-25-2008 at 01:49 PM.
    This post above was delayed 90 times by Nvidia. Cause that's their thing, thats what they do.
    This Announcement of the delayed post above has been brought to you by Nvidia Inc.

    RIGGY
    case:Antec 1200
    MB: XFX Nforce 750I SLI 72D9
    CPU:E8400 (1651/4x9) 3712.48
    MEM:4gb Gskill DDR21000 (5-5-5-15)
    GPU: NVIDIA GTX260 EVGA SSC (X2 in SLI) both 652/1403
    PS:Corsair 650TX
    OS: Windows 7 64-bit Ultimate
    --Cooling--
    5x120mm 1x200mm
    Zalman 9700LED
    Displays: Samsung LN32B650/Samsung 2243BWX/samsung P2350


  3. #103
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Decami View Post
    Let's all give up, guys.


    We will admit the chip is a big failure, the company just wants to go bankrupt, Intel is being lied to and so are we, and they are doing this to SHOW the world how stupid we are. We get it. The end.

    What's so wrong with having faith? How did faith turn into a bad thing? You can go on and on about how much more knowledge you have about multi-GPU, but for some reason I'm guessing that Lucid knows just a little more than some of you.
    Who said it's a failure? It might just not be the miracle some want it to be. Personally, I see this shining in consoles. It screams for locked/controlled HW to work right.
    Crunching for Comrades and the Common good of the People.

  4. #104
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by clonez View Post
    Well, that problem can be solved...

    Lucid simply has to make a mini-bench for each supported game (or certain effects etc.) and run it on each installed graphics card (only while the game is installed... saves time); databases would ignore OC, so that's a no-go.

    The bench can determine which card gets which parts of the expected renderings (frames): ideally the parts where it's faster than the other installed cards, or the parts actually supported by the card (like a DX9 card getting all the DX9 effects in a DX11 game, as long as the DX11 card can't handle parts of the DX9 code faster by itself), packing it all into some kind of log.

    Then every single frame has to be analyzed, split up across the cards, rendered and put together again by Hydra.
    That's quite easy with a small scene and two identical cards (like in the demo; I guess they split up the scene manually, though I hope they didn't), but executing what I described would be quite a bit more demanding.

    So doing it right is massive work (for the chip and the developers), but with a little help from game, OS and hardware developers... possible.
    No new technology should be based on profiling. That's a total hack and requires a massive amount of support to maintain. If we're talking about a truly good technology here it shouldn't need this kind of crap. SLI and CF already use profiles. You really want more of those on top of the existing ones?

    Quote Originally Posted by Decami View Post
    Let's all give up, guys.


    We will admit the chip is a big failure, the company just wants to go bankrupt, Intel is being lied to and so are we, and they are doing this to SHOW the world how stupid we are. We get it. The end.

    What's so wrong with having faith? How did faith turn into a bad thing? You can go on and on about how much more knowledge you have about multi-GPU, but for some reason I'm guessing that Lucid knows just a little more than some of you.
    No need to be sarcastic. The point is that this isn't faith... this is almost rampant, cheesy "LOLZ TAKE THAT ATI AND NVIDIA". I'm just saying that if people are not careful with that kind of uneducated blind faith, they'll end up with pie on their faces.
    Last edited by Sr7; 12-25-2008 at 02:04 PM.

  5. #105
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by Shintai View Post
    Personally, I see this shining in consoles. It screams for locked/controlled HW to work right.
    Didn't think of it like that, but thinking about it, I agree 100%.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  6. #106
    Xtreme Enthusiast
    Join Date
    Mar 2007
    Location
    Portsmouth, UK
    Posts
    963
    Quote Originally Posted by Shintai View Post
    Who said it's a failure? It might just not be the miracle some want it to be. Personally, I see this shining in consoles. It screams for locked/controlled HW to work right.
    Don't forget Apple, they'll probably find a place for it in their lineup.

  7. #107
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by Shintai View Post
    Because it would need complicated data to do so for each single chip and generation. In short, it would need to have the latest driver to do so. And no, it can't just extract it from the GFX driver.

    People are starting to overhype this to make it "fit".
    GPU-Z seems to be able to do just what you are suggesting cannot be done. Run a software program alongside the drivers; it keeps a database up to date and changes settings in the hardware, in a manner like WPCREDIT is able to do for chipsets.

    The database can have generic settings and abilities, Lucid can tweak it based on benchmarks as needed, and heck, maybe we'll even get a front end to try tweaking it ourselves.

    Why are you guys taking the hard way on this?
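    (A very rough illustration of that database idea; the entries, device IDs and perf_index numbers below are picked for the example and are not GPU-Z's or Lucid's real data.)

```python
# Invented example of a generic card database keyed by PCI vendor:device ID,
# holding coarse capabilities plus a tweakable performance index. A benchmark
# run or a user front end could override entries, in the spirit described above.
CARD_DB = {
    "10DE:0611": {"name": "GeForce 8800 GT", "shader_units": 112, "perf_index": 1.0},
    "1002:9440": {"name": "Radeon HD 4870",  "shader_units": 800, "perf_index": 1.6},
}

def lookup(device_id, overrides=None):
    """Return the generic entry for a card, with optional overrides applied."""
    entry = dict(CARD_DB.get(device_id, {"name": "unknown", "perf_index": 1.0}))
    entry.update(overrides or {})
    return entry

print(lookup("1002:9440", {"perf_index": 1.8}))   # a user-tweaked entry
```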

    All along the watchtower the watchmen watch the eternal return.

  8. #108
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by STEvil View Post
    GPU-Z seems to be able to do just what you are suggesting cannot be done. Run a software program alongside the drivers; it keeps a database up to date and changes settings in the hardware, in a manner like WPCREDIT is able to do for chipsets.

    The database can have generic settings and abilities, Lucid can tweak it based on benchmarks as needed, and heck, maybe we'll even get a front end to try tweaking it ourselves.

    Why are you guys taking the hard way on this?
    Again, that's naive; this is not going to work. You can't just sit there and go "there are x of these units, therefore it will be very good at this". The efficiency of the units comes into play, the different internal clock domains, the different perf bottlenecks and performance issues specific to each chip... there's just no way they can get the insight into each chip that they need.

    Sorry, a generic database of stats won't cut it.

  9. #109
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by STEvil View Post
    GPU-Z seems to be able to do just what you are suggesting cannot be done. Run a software program alongside the drivers; it keeps a database up to date and changes settings in the hardware, in a manner like WPCREDIT is able to do for chipsets.

    The database can have generic settings and abilities, Lucid can tweak it based on benchmarks as needed, and heck, maybe we'll even get a front end to try tweaking it ourselves.

    Why are you guys taking the hard way on this?
    I think you fail to see the issue. It might know what each GFX card has, etc.

    But as Sr7 said, it just doesn't know how good those things are, especially not in different games.

    Secondly, the PCIe bus carries very, very proprietary data. There are no OpenGL or DirectX calls; it will be nVidia data and AMD data, most likely massively different from generation to generation, and somewhat different from driver to driver.

    It's a massive nightmare with all the different HW. It's only going to work if you have something more or less preselected.

    Even in the BEST of the BEST cases, your Hydra 100 chip would already fail with GPUs released after it, and most likely even with new drivers. So when you want a pair of those GTX 300 series or HD5800 cards, you also want a new MB with a new Hydra chip.

    For this to work, Lucid needs AMD and nVidia to adopt it. You can call this your new SLI tax, albeit one that actually does something.
    Crunching for Comrades and the Common good of the People.

  10. #110
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Shintai View Post
    I think you fail to see the issue. It might know what each GFX card has, etc.

    But as Sr7 said, it just doesn't know how good those things are, especially not in different games.

    Secondly, the PCIe bus carries very, very proprietary data. There are no OpenGL or DirectX calls; it will be nVidia data and AMD data, most likely massively different from generation to generation, and somewhat different from driver to driver.

    It's a massive nightmare with all the different HW. It's only going to work if you have something more or less preselected.

    Even in the BEST of the BEST cases, your Hydra 100 chip would already fail with GPUs released after it, and most likely even with new drivers. So when you want a pair of those GTX 300 series or HD5800 cards, you also want a new MB with a new Hydra chip.

    For this to work, Lucid needs AMD and nVidia to adopt it. You can call this your new SLI tax, albeit one that actually does something.
    I'm sure they are aware of these setbacks.
    What they are really trying to sell/protect is their software, which obviously CAN be updated every so often.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  11. #111
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,356
    You heard it here first gentlemen, it's impossible because it's hard.

    Brilliant logic.

  12. #112
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by Sly Fox View Post
    You heard it here first gentlemen, it's impossible because it's hard.

    Brilliant logic.
    It's called too many variables to control across the platform. That, and common sense: NV and AMD can't even get it right themselves. If this were a wise engineering decision, it would've been done by now.

  13. #113
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by Sr7 View Post
    Again, that's naive; this is not going to work. You can't just sit there and go "there are x of these units, therefore it will be very good at this". The efficiency of the units comes into play, the different internal clock domains, the different perf bottlenecks and performance issues specific to each chip... there's just no way they can get the insight into each chip that they need.

    Sorry, a generic database of stats won't cut it.
    www.hwbot.org

    Quote Originally Posted by Shintai View Post
    I think you fail to see the issue. It might know what each GFX card has, etc.

    But as Sr7 said, it just doesn't know how good those things are, especially not in different games.

    Secondly, the PCIe bus carries very, very proprietary data. There are no OpenGL or DirectX calls; it will be nVidia data and AMD data, most likely massively different from generation to generation, and somewhat different from driver to driver.

    It's a massive nightmare with all the different HW. It's only going to work if you have something more or less preselected.

    Even in the BEST of the BEST cases, your Hydra 100 chip would already fail with GPUs released after it, and most likely even with new drivers. So when you want a pair of those GTX 300 series or HD5800 cards, you also want a new MB with a new Hydra chip.

    For this to work, Lucid needs AMD and nVidia to adopt it. You can call this your new SLI tax, albeit one that actually does something.
    www.hwbot.org

    I'm sure they can use those numbers, plus some in-house numbers, plus guesstimating from the price point/naming, to put the cards into a certain performance envelope. Really, really basic stuff and not hard at all.

    edit - and I'm sure if this works half decently, nV/ATi will be happy to talk with them about what features their cards excel at...
    Last edited by STEvil; 12-26-2008 at 04:42 PM.

    All along the watchtower the watchmen watch the eternal return.

  14. #114
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by STEvil View Post
    www.hwbot.org



    www.hwbot.org

    I'm sure they can use those numbers, plus some in-house numbers, plus guesstimating from the price point/naming, to put the cards into a certain performance envelope. Really, really basic stuff and not hard at all.

    edit - and I'm sure if this works half decently, nV/ATi will be happy to talk with them about what features their cards excel at...
    Sigh... some people just don't seem to get it.

    You *CANNOT* just look at overall 3DMark scores or performance in any one game, because the rendering loops and the parts of the pipe that get stressed are completely different from game to game. Overall, card A might have half the general performance of card B, but it might specifically be 1/8 as fast at texturing.

    Since you're not doing AFR and you're only maybe doing texturing on that card, you'd end up distributing work to that card as though it were able to do half the work, when in fact it can only do 1/8. You *can't* just use generic, high-level performance info for this type of work distribution if you want to see any scaling *at all*.
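    (A toy calculation of the imbalance being described; the rates and job size are invented purely to show why a generic performance ratio misleads a per-stage split.)

```python
# Invented numbers: card B textures 8 work units per ms, card A only 1, even
# though A is "half as fast" overall. Splitting a texturing job 1:2 by the
# overall ratio makes the slow card the bottleneck for the whole frame.
texture_rate = {"A": 1.0, "B": 8.0}   # texturing throughput, units per ms
job = 90.0                            # texturing work units in this frame

# Naive split by overall performance (A assumed half as fast as B):
naive = {"A": job / 3, "B": 2 * job / 3}
naive_time = max(naive[c] / texture_rate[c] for c in naive)        # 30.0 ms

# Split by the measured texturing rate instead:
total = sum(texture_rate.values())
proper = {c: job * r / total for c, r in texture_rate.items()}
proper_time = max(proper[c] / texture_rate[c] for c in proper)     # 10.0 ms

print(naive_time, proper_time)   # the naive split is even worse than letting B do it all (11.25 ms)
```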

  15. #115
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    And you don't seem to be able to comprehend what I'm saying. I'm not basing the performance on one benchmark, and I'm not saying they should limit themselves to one. I have also indicated they could use in-house means, or whatever they want, to decide how the back-end application (should there be one) should split up the workload.

    All along the watchtower the watchmen watch the eternal return.

  16. #116
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Phoenix, AZ
    Posts
    866
    A lot of you are coming off as my last post stated; it was sarcasm, yeah, but only in the sense of admitting you guys were right. A lot of you are coming off like you think Hydra is complete bullcrap, a lie... like it's impossible.

    The sad thing is you guys bring some good points to the table, knowledgeable ones, but why even ask the questions if you think you already know the answers? Why even talk in this thread if you already know everything there is to know about this chip and its capabilities, and the fact that it will not work?

    To be honest, none of us are Lucid, and none of us really knows exactly what the chip is capable of, but some of the very obvious questions, guys, I'm sure they have looked at.

    This has been demoed... and demoed successfully, I might add. You really think they would say it's ready for release with major lag issues? You really think they would say it's ready for release if the color and quality issues you expect existed? Do... you... really think they would say it's ready for release if it doesn't work?

    And you'd think some of these quality, color and lag issues might have been noticed by the reviewers watching the demo... What did Lucid do? Hold them at gunpoint?


    Saying "how does the chip do this?" or "how is 'this' possible?" or "how do you get around this?" is fine.

    But I'm sorry, saying "so-and-so is impossible", or "there's no way to do this", or "this can't exist because if it were possible it would have been done already" is more naive than the people believing what Lucid has said so far.
    Last edited by Decami; 12-27-2008 at 03:43 AM.
    This post above was delayed 90 times by Nvidia. Cause that's their thing, thats what they do.
    This Announcement of the delayed post above has been brought to you by Nvidia Inc.

    RIGGY
    case:Antec 1200
    MB: XFX Nforce 750I SLI 72D9
    CPU:E8400 (1651/4x9) 3712.48
    MEM:4gb Gskill DDR21000 (5-5-5-15)
    GPU: NVIDIA GTX260 EVGA SSC (X2 in SLI) both 652/1403
    PS:Corsair 650TX
    OS: Windows 7 64-bit Ultimate
    --Cooling--
    5x120mm 1x200mm
    Zalman 9700LED
    Displays: Samsung LN32B650/Samsung 2243BWX/samsung P2350


  17. #117
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Decami View Post
    A lot of you are coming off as my last post stated; it was sarcasm, yeah, but only in the sense of admitting you guys were right. A lot of you are coming off like you think Hydra is complete bullcrap, a lie... like it's impossible.

    The sad thing is you guys bring some good points to the table, knowledgeable ones, but why even ask the questions if you think you already know the answers? Why even talk in this thread if you already know everything there is to know about this chip and its capabilities, and the fact that it will not work?

    To be honest, none of us are Lucid, and none of us really knows exactly what the chip is capable of, but some of the very obvious questions, guys, I'm sure they have looked at.

    This has been demoed... and demoed successfully, I might add. You really think they would say it's ready for release with major lag issues? You really think they would say it's ready for release if the color and quality issues you expect existed? Do... you... really think they would say it's ready for release if it doesn't work?

    And you'd think some of these quality, color and lag issues might have been noticed by the reviewers watching the demo... What did Lucid do? Hold them at gunpoint?


    Saying "how does the chip do this?" or "how is 'this' possible?" or "how do you get around this?" is fine.

    But I'm sorry, saying "so-and-so is impossible", or "there's no way to do this", or "this can't exist because if it were possible it would have been done already" is more naive than the people believing what Lucid has said so far.
    Also remember, they demoed one thing, in a highly controlled situation, with 2 identical cards. Plus it was in a setup that was very far from a PC design.
    Crunching for Comrades and the Common good of the People.

  18. #118
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Shintai View Post
    I think you're honestly making it up.

    The chip would have no knowledge of the cards, and a new card would ruin it quickly.

    Personally, I think Hydra will be good for consoles with locked HW. On a PC it will flop so hard it hurts. And as someone else pointed out, it smells of Bitboys.

    Plus the lag and stuttering issues have the potential to be very bad. Also, the Hydra chip as such doesn't do anything that AMD/nVidia couldn't do... if it worked. It would be much more elegant in drivers, both for fixes and for future compatibility with DX, OpenGL and OpenCL versions. And of course new cards and such.

    Shintai,

    Hey bro... have you honestly made an effort to look at the technology and invest a bit of your time in reading and understanding it? They have 27 patents, and some of the engineers behind this company are notable people within the industry.

    The Hydra chip does know about each card on the bus. This was already mentioned in their press release. Take time to read! It is not out of the realm of probability that the drivers for the Hydra chip know the texture units of the variety of cards plugged into the bus, and know their overall performance from internal testing, so that when a given card is on the bus, Lucid has already tested the algorithm against it.

    edit:
    Not too hard; similar to DDR memory's SPD function.

    Whether or not it is true is another story...!
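    (A loose sketch of that SPD analogy; the descriptor fields and the bus object below are hypothetical, not anything Lucid has actually described.)

```python
# Loose analogy only: like reading SPD data from a DIMM, imagine every card
# exposing a small fixed descriptor that the Hydra driver reads at enumeration
# time. All field names and the `bus` interface are invented for illustration.
from dataclasses import dataclass

@dataclass
class CardDescriptor:
    vendor: str
    device_id: str
    shader_units: int
    texture_units: int
    core_clock_mhz: int

def enumerate_cards(bus):
    """Read a descriptor from every slot on the (hypothetical) bus object."""
    return [CardDescriptor(**bus.read_descriptor(slot)) for slot in bus.slots]
```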




    .
    Last edited by Xoulz; 12-27-2008 at 06:03 AM.

  19. #119
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Xoulz View Post
    Shintai,

    Hey bro... have you honestly made an effort to look at the technology and invest a bit of your time in reading and understanding it? They have 27 patents, and some of the engineers behind this company are notable people within the industry.

    The Hydra chip does know about each card on the bus. This was already mentioned in their press release. Take time to read!

    Whether or not it is true is another story...!

    .
    Yes, and I have since their demo. Also, their release target is H1 2009.

    Wow, 27 patents. That's so reassuring...

    I could ask you the same thing: have you even bothered to read anything? Because, as I wrote before, this screams for locked/controlled HW.

    Also, I think you miss an extra layer that Lucid SW would provide. New cards would instantly ruin any hardwired information in the chip, or perhaps even its capabilities, and so would any driver changes. The HYDRA chip does NOT know how the shaders etc. work; that's only something nVidia and AMD know.
    It might be easier for them to do it with Larrabee, since it's standard x86.

    People lost focus and now it's starting to be hyped into madness. Lucid showed 2 8800GT cards running in a controlled environment on non-standard HW with x software. It was also timedemos running; no user input.

    As said, using this with random cards etc. sounds like an epic nightmare that just won't work in reality.
    Last edited by Shintai; 12-27-2008 at 05:58 AM.
    Crunching for Comrades and the Common good of the People.

  20. #120
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Shintai View Post
    Yes, and I have since their demo. Also, their release target is H1 2009.

    Wow, 27 patents. That's so reassuring...

    I could ask you the same thing: have you even bothered to read anything? Because, as I wrote before, this screams for locked/controlled HW.

    Also, I think you miss an extra layer that Lucid SW would provide. New cards would instantly ruin any hardwired information in the chip, or perhaps even its capabilities, and so would any driver changes. The HYDRA chip does NOT know how the shaders etc. work; that's only something nVidia and AMD know.

    People lost focus and now it's starting to be hyped into madness. Lucid showed 2 8800GT cards running in a controlled environment on non-standard HW with x software. It was also timedemos running; no user input.

    As said, using this with random cards etc. sounds like an epic nightmare that just won't work in reality.

    You're quick. {I edited my post a bit..!}

    Anyways, I really think your argument is lacking.
    Updating the Hydra chip can be as easy as a firmware update or even just a new driver, etc. I've never seen you be such a naysayer before, and I'm somewhat dumbfounded by your reluctance to accept this technology for what it is.

    Nobody is saying it will work flawlessly, seeing as this is first gen, but over a few years it may reach 100% scaling with all the problems you are suggesting removed.

    Just look at the shaky history of SLI... after 5 years, it still isn't resolved!




    .

  21. #121
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    I just don't think the Hydra chip is the golden saviour that some want it to be. I'm not saying it won't work either, but it sure won't be universal.

    Hydra would work a bit better if it were backed by AMD/nVidia. But I also doubt that some people from Lucid suddenly fixed what nVidia/3dfx/ATI/AMD have been working on for over 10 years, because the idea behind the Hydra chip is very simple but the implementation is extremely complex. Also, a firmware update sounds easy, but how much memory and flash will the Lucid chip have? 250-500MB?

    Again, this technology fits PERFECTLY in a console. Just not in a PC.
    Crunching for Comrades and the Common good of the People.

  22. #122
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Shintai View Post
    Yes, and I have since their demo. Also, their release target is H1 2009.

    Wow, 27 patents. That's so reassuring...

    I could ask you the same thing: have you even bothered to read anything? Because, as I wrote before, this screams for locked/controlled HW.

    Also, I think you miss an extra layer that Lucid SW would provide. New cards would instantly ruin any hardwired information in the chip, or perhaps even its capabilities, and so would any driver changes. The HYDRA chip does NOT know how the shaders etc. work; that's only something nVidia and AMD know.
    It might be easier for them to do it with Larrabee, since it's standard x86.

    People lost focus and now it's starting to be hyped into madness. Lucid showed 2 8800GT cards running in a controlled environment on non-standard HW with x software. It was also timedemos running; no user input.

    As said, using this with random cards etc. sounds like an epic nightmare that just won't work in reality.



    You edited your post too...

    Anyways, updating can be as easy as a firmware or driver update, etc.

    What I do not understand is your unwillingness to admit that the Hydra chip would know the general capabilities of each video card on its bus.

    All of that is pre-determined by the actual architecture of the chip. If you are playing a game with heavy shaders and have two different video cards, the Hydra chip would know that the 800 SIMDs on the HD4870 would make more efficient use of that load, etc.

    Secondly, I think Lucid has been mum about their work because of their patents. I think they did solve what has been plaguing the multi-GPU market and have been busy refining their product. Why would nVidia or AMD embrace this? It cuts the SLI market off at the head, because even an old 8800 is no longer worthless; you do not have to keep buying newer and newer cards. Someone could have an HD3850 and an HD4850 in the same system and have more graphics power than an HD4870.

    Whereas before, the HD3850 would be relegated to an older system, become a mantelpiece, or be given away...





    .






    .
    Last edited by Xoulz; 12-27-2008 at 06:30 AM.

  23. #123
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Xoulz View Post
    You edited your post too...

    Anyways, updating can be as easy as a firmware or driver update, etc.

    What I do not understand is your unwillingness to admit that the Hydra chip would know the general capabilities of each video card on its bus.

    (Removed useless image)

    All of that is pre-determined by the actual architecture of the chip. If you are playing a game with heavy shaders and have two different video cards, the Hydra chip would know that the 800 SIMDs on the HD4870 would make more efficient use of that load, etc.





    .
    You are assuming a bit too much already.

    First of all, that the Hydra chip knows it's a shader-heavy game.
    Secondly, the Hydra chip has no clue what goes over the PCIe bus to the card itself, besides textures and framebuffer copies, which are easily read. Shader instructions etc.... nope.

    The Hydra chip isn't the key; it's a semi-dumb device. The software behind it is.
    What the Hydra chip will do is simply split tagged data, combine 2 framebuffers, and send the result back to 1 GPU.

    The combining is the easy part. The first part might break after a simple driver update from nVidia/AMD, since the data it needs to spy on and tag would be different. Unless AMD/nVidia actively back this, it won't go anywhere but consoles.
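    (A crude sketch of that split-and-combine step; pixels here are just (depth, colour) pairs, and the whole thing is an illustration of the idea, not how the Hydra 100 actually composites.)

```python
# Crude illustration only: two cards each rendered a different subset of the
# frame's objects; the partial framebuffers are merged per pixel by depth,
# the way a task-split (non-AFR) scheme has to recombine its results.
INF = float("inf")

def composite(frame_a, frame_b):
    """frame_* are lists of (depth, colour); the nearer depth wins per pixel."""
    return [ca if da <= db else cb for (da, ca), (db, cb) in zip(frame_a, frame_b)]

# 4-"pixel" toy frame: card A drew objects 1 and 3, card B drew objects 2 and 4.
frame_a = [(1.0, "obj1"), (INF, "bg"), (2.0, "obj3"), (INF, "bg")]
frame_b = [(INF, "bg"), (1.5, "obj2"), (INF, "bg"), (0.5, "obj4")]
print(composite(frame_a, frame_b))   # ['obj1', 'obj2', 'obj3', 'obj4']
```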

    Plus this has the potential to make microstutter look like nothing.

    Also, your edit about using different-generation cards is utter garbage, to say it mildly. Sorry. But you have a feature set and a game engine, and you would be stuck with the previous card's features instead, as you would have to follow the lowest common denominator.

    Just like physics: using old cards is a joke and always will be.

    http://ati.amd.com/technology/crossf...ics/index.html

    Btw, did you notice the complete lack of FPS numbers in the Lucid demo? I mean, if I had something with 100% scaling, I would show 1 8800GT vs 2... especially when you claim it.
    Last edited by Shintai; 12-27-2008 at 06:40 AM.
    Crunching for Comrades and the Common good of the People.

  24. #124
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Shintai View Post
    You are assuming a bit too much already.

    First of all, that the Hydra chip knows it's a shader-heavy game.
    Secondly, the Hydra chip has no clue what goes over the PCIe bus to the card itself, besides textures and framebuffer copies, which are easily read. Shader instructions etc.... nope.

    The Hydra chip isn't the key; it's a semi-dumb device. The software behind it is.
    What the Hydra chip will do is simply split tagged data, combine 2 framebuffers, and send the result back to 1 GPU.

    The combining is the easy part. The first part might break after a simple driver update from nVidia/AMD, since the data it needs to spy on and tag would be different. Unless AMD/nVidia actively back this, it won't go anywhere but consoles.

    Plus this has the potential to make microstutter look like nothing.



    The Hydra chip is all about LOAD BALANCING.....! Its main job is to know what the game is doing, to know what resources are available to it, and to use the best graphics card to render each aspect of the game, etc.

    How this is actually done has not been released and has remained a secret..!

    Though, with so many patents involved, one would have to suggest that it is true. Doubting it is natural, but to turn a blind eye and say it is a hoax is being a tad ignorant!

    Even if it is 80% scaling with some graphical anomalies... it is a win/win for gamers.




    .

  25. #125
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Xoulz View Post


    The Hydra chip is all about LOAD BALANCING.....! Its main job is to know what the game is doing, to know what resources are available to it, and to use the best graphics card to render each aspect of the game, etc.

    How this is actually done has not been released and has remained a secret..!

    Though, with so many patents involved, one would have to suggest that it is true. Doubting it is natural, but to turn a blind eye and say it is a hoax is being a tad ignorant!

    Even if it is 80% scaling with some graphical anomalies... it is a win/win for gamers.




    .
    I'm not saying it's a hoax. And are you now saying that Lucid will test and make profiles for every game? You can't load-balance something if you don't know what it is, and you need software to tag and determine it.

    I guess you have your mind set on this being a PC revolution, whereas I just think it's a console revolution. The Hydra 100 chip isn't even PCIe 2.0.



    What DirectX versions are supported or will be supported, and what about OpenGL? Right now, only DX9 is working, though DX10.1 will be ready by the end of the year. With DX10's and DX11's implementations of multi-GPU data improving, adding to the HYDRA Engine technology will only get easier for the team compared to the work they had to do on DX9. OpenGL is supported by the HYDRA Engine as well.
    Of course, not all is golden for Lucid quite yet - we have some questions and concerns about the technology that we hope will be addressed as it matures. Top of my list is the support that Lucid will be required to maintain if the technology succeeds. While much of the HYDRA Engine is automated, there will be times when new games, new game engines and new rendering methods implemented by game developers will require continual updating and tweaking on the driver side of the technology. As large as NVIDIA's and AMD's driver teams are, even they cannot always keep up with the many games that are released throughout the year.

    My other major concern is that this technology could end up like AGEIA's PhysX - great potential, but gobbled up by one of the mega-players rather than turning into a product on its own. Honestly, after hearing the entire presentation I was curious why NVIDIA or AMD hadn't already thought of this - the potential for being bought up is extremely high here.
    How can Lucid be sure their task-based distribution methods accurately represent what the game designers intended? An interesting dilemma - with the company essentially taking control of the graphics pipeline, there are all kinds of ways for them to accidentally screw some things up. Lucid answered this by telling us their quality assurance program was already well under way. In fact, they use a pixel-by-pixel comparison engine, comparing the HYDRA images to a single-GPU render to check for errors or problems.
    As for the chip itself, obviously Lucid is being very close-lipped about it. The chip runs very cool and draws just about 5 watts of power. Inside the chip you will find a small RISC processor and the custom (secret sauce) logic behind the algorithm powering the HYDRA Engine. The production chip was JUST finished yesterday and will be sampling to partners soon - though they wouldn't indicate WHO those partners were.
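    (A trivial sketch of the kind of pixel-by-pixel comparison described in the excerpt above; the frame layout and tolerance handling are assumptions made for the example.)

```python
# Illustration only: compare a multi-GPU composited frame against a single-GPU
# reference render pixel by pixel and report any pixel whose channels drift
# beyond a tolerance, roughly the QA idea described in the excerpt above.
def diff_frames(reference, output, tolerance=0):
    """Frames are lists of (r, g, b) tuples; returns indices of mismatching pixels."""
    return [
        i for i, (ref, out) in enumerate(zip(reference, output))
        if any(abs(a - b) > tolerance for a, b in zip(ref, out))
    ]

reference = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
rendered  = [(255, 0, 0), (0, 254, 0), (0, 0, 255)]
print(diff_frames(reference, rendered))                # [1]
print(diff_frames(reference, rendered, tolerance=1))   # []
```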
    Last edited by Shintai; 12-27-2008 at 06:54 AM.
    Crunching for Comrades and the Common good of the People.
