Intel has the Lucid Hydra chip on the next revision of its X58 mobo:
http://forums.vr-zone.com/showthread.php?t=368843
Interesting move... I'd like to see it in action.
at last
http://www.theinquirer.net/inquirer/...d-hydra-on-x58 apparently the source?
Very unlikely....as in no.
Charlie needed attention again.
It would be great if it were true. I read somewhere that Intel was one of the investors in Lucid; as always, time will tell if it is true.
ATI and nVidia have struggled with their flaky CF and SLI for years, so this might be a kick in the nuts for them and the end of CF and SLI, while Intel takes the glory with Lucid's Hydra technology. :)
Yes, and it's old and a prototype demonstration. I can show you endless pictures of things that have been shown and either:
A: Never ended up as anything.
B: Haven't been implemented yet, or anytime soon.
And that picture is not from a Smackover board...
http://news.cnet.com/8301-17938_105-10021005-1.html
http://www.anandtech.com/tradeshows/showdoc.aspx?i=3379
http://i.i.com.com/cnwk.1d/i/bto/200...0s_540x405.jpg
Well, according to the last "real" news, Lucid said it should be ready in Q1 2009 and would most likely be attached to and sold on motherboards. So it could be bull, or it could not.
All I have to say about this is: when it does hit, whenever that is....BOUGHT!!!
Also, I love the guy's comment at the bottom about Lucid Hydra being "just" an alternative to SLI or Xfire, like it's no big deal. It is MUCH, MUCH MORE than an alternative!!!
It will more than likely kill Xfire and SLI, or force them to change radically.
if this works as proposed, my GOD will that create some interesting possibilities!
Looking at how badly CF and SLI work sometimes, even though both are configurations with same-brand cards, putting together one ATI and one nVidia card will be madness.
I can bet my PC on this, not going to happen.
Would be interesting to see a 4870 1GB and a GTX 260 216 running side by side with 100% scaling.
It could happen... If Intel's dumped enough licensing and cash into Lucid.
AFAIK they are a major contributor/part owner. Sounds too good to be true though. Which means... Well, you guys know...
Well, it doesn't matter; you still need two cards and pay twice the money. :p
So no thx...
But your old cards that you haven't sold yet can become useful.
Last I heard, you'll only be able to use up to four cards from ONE vendor together, i.e. nVidia with nVidia or ATI with ATI. You'll still be able to run cards from different generations together though, e.g. GTX 280 + 8800 GT etc.
"Also, just to be clear, you will not be able to use graphics cards from both Nvidia and ATI at the same time. This is because Windows does not support the use of two different graphics drivers while rendering a 3D application."
http://news.cnet.com/8301-17938_105-10021005-1.html
I would love to see this come to market soon; it'd be awesome. Will this chip provide extra PCIe lanes though? Or will it be optimized so well that it doesn't matter much?
Don't know; lots of different sites around the web are saying the same thing, and some have interviews with the devs. Have a look.
I thought the chip worked by intercepting the data from DirectX, then applying the magic before passing it on to the GPU driver, which would definitely tie it to the Windows driver model, which I think indeed doesn't support more than one graphics driver for 3D rendering. Lucid aren't making a special driver for nVidia or ATI cards, as far as I'm aware?
Here, more info: http://www.anandtech.com/tradeshows/showdoc.aspx?i=3379
So, just out of curiosity, will this also enable SLI/Crossfire-like functionality for, let's say, Matrox graphics cards?
I'm just hoping it also scales using multiple monitors.
I thought I also heard that the Lucid chip is the one Windows sees a driver for, and then the chip manages any combination of cards.
That sounds like it would need very complex drivers. How would the chip/drivers talk to the cards and utilize all their abilities?
You ain't going anywhere without the AMD/nVidia etc. drivers.
Plus, with all the going back and forth, the possible latency/stutter issues could create havoc.
Also, if it's the chip that handles it, will we need a new mobo for a new DirectX/OpenGL version? Or, perhaps even worse, for another thing like OpenCL?
Even with the Hydra 100 chip there is no OpenGL 3.0, OpenCL or DirectX 11 support. Plus it's PCIe 1.1 and not 2.0.
Quote:
Lucid, with their Hydra Engine and the Hydra 100 chip, are going in a different direction. With a background in large data set analysis, these guys are capable of intercepting the DirectX or OpenGL command stream before it hits the GPU, analyzing the data, and dividing up the scene at an object level. Rather than rendering alternating frames, or screens split on the horizontal, this part is capable of load balancing things like groups of triangles that are associated with a single group of textures and sending these tasks to whatever GPU it makes the most sense to render on. The scene is composited after all GPUs finish rendering their parts and send the data back to the Lucid chip.
Quote:
Lucid promises that you'll be able to mix and match older and newer cards from the same vendor. For example, an Nvidia 9800 will work with an Nvidia 6800. How far back the support goes, however, will probably depend on whether the two cards in question are supported by the same driver. While the ability to mix and match different cards is supported by ATI's Crossfire, it is not something that Nvidia supports with SLI.
Also, just to be clear, you will not be able to use graphics cards from both Nvidia and ATI at the same time. This is because Windows does not support the use of two different graphics drivers while rendering a 3D application.
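To illustrate roughly what that kind of object-level splitting would have to do, here's a toy sketch; all the names and the "cost = triangle count" model are my own inventions, not anything Lucid has published:
Code:
# Toy sketch of object-level load balancing, loosely based on the AnandTech
# description quoted above. Invented names and cost model, for illustration only.
from collections import defaultdict

def split_frame(draw_calls, gpu_speeds):
    """draw_calls: list of (texture_group, triangle_count) tuples.
    gpu_speeds: relative throughput of each GPU, e.g. [1.0, 0.6]."""
    # Batch draw calls that share a texture group, as the quote describes.
    groups = defaultdict(int)
    for tex_group, tris in draw_calls:
        groups[tex_group] += tris

    # Greedy assignment: hand each batch to the GPU with the lowest
    # estimated finish time (work assigned so far / relative speed).
    load = [0.0] * len(gpu_speeds)
    assignment = defaultdict(list)
    for tex_group, tris in sorted(groups.items(), key=lambda g: -g[1]):
        gpu = min(range(len(gpu_speeds)), key=lambda i: load[i] / gpu_speeds[i])
        assignment[gpu].append(tex_group)
        load[gpu] += tris
    return dict(assignment)  # each GPU renders its batches, then the chip composites

print(split_frame([("wall", 9000), ("floor", 4000), ("props", 3000)], [1.0, 0.6]))
# e.g. {0: ['wall'], 1: ['floor', 'props']}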
Directly from Lucid's site: http://lucidlogix.com/technology/distributed.html
http://i303.photobucket.com/albums/n...220-133003.jpg
The HYDRA real-time distributed processing engine can:
Deliver cost-effective graphics performance with a near-linear to above-linear performance.
Operate in an adaptive, dynamic, real-time manner.
Eliminate bottlenecks that exist in typical 3D graphics applications.
Provide interoperability with all GPUs and chipsets.
Work with the latest versions of DirectX and OpenGL
Look at my post directly above yours, Shintai. What does that info sound like to you? Are they trying to give us the wrong idea?
Nope. But you miss a key factor. The OS.
Sure it would work on say Linux. Just not on Windows.
And directly from the horse's mouth:
Quote:
Lucid promises that you'll be able to mix and match older and newer cards from the same vendor. For example, an Nvidia 9800 will work with an Nvidia 6800. How far back the support goes, however, will probably depend on whether the two cards in question are supported by the same driver. While the ability to mix and match different cards is supported by ATI's Crossfire, it is not something that Nvidia supports with SLI.
Also, just to be clear, you will not be able to use graphics cards from both Nvidia and ATI at the same time. This is because Windows does not support the use of two different graphics drivers while rendering a 3D application.
That's second-hand info, is it not?
Though I do see how basing their statements on Linux could be their way of juicing up the marketing, I don't think they would straight up lie about it.
You mean info directly reported from Lucid people?
They ain't lying about it. You just assume that it's only for your Windows platform.
Also, think about it: writing drivers for AMD and nVidia cards is a huge undertaking. You think Lucid could just slap together its own driver that works on both? There is a reason you saw two 8800GT cards.
Personally I think consoles will be the prime target.
Although I would rarely do so, I agree with Shintai:
This is directly from Microsoft's website: http://www.microsoft.com/whdc/device...imonVista.mspx (Yes, it is a multi-monitor page, but it covers this issue just fine.)
Quote:
If multiple graphics adapters are present in a system, all of them must use the same WDDM driver. If there are two graphics adapters with WDDM drivers from two different manufacturers, then Windows will disable one of them. The VGA adapter will be enabled, and the second device will be disabled.
Notice that XPDM drivers still support heterogeneous multi-adapter as they did in Windows XP. A user who has such a configuration working fine in Windows XP will encounter a problem when upgrading to Windows Vista. An external monitor connected to one of the graphics adapters will have no video signal, because it is disabled. An error message will appear on system boot, as described later in this article.
The solution for this problem could be as follows:
• A user could force the installation of a XPDM driver for each of these devices, and therefore get heterogeneous multi-adapter multi-monitor to work as in Windows XP.
-Or-
• The user could change the graphics hardware configuration by choosing multiple graphics adapters that use the same WDDM driver. Graphics adapters from the same ASIC family generally have the same graphics driver. In late 2006, each of the major graphics vendors had a single WDDM driver for all supported WDDM graphics adapters. Please consult the graphics vendor's Web site for details on their driver support.
Background Notes: This restriction only affects a system that has WDDM drivers. WDDM was designed with stability as a key objective. Based on information gathered through Windows Error Reporting and the related Online Crash Analysis for Windows XP display drivers, Microsoft decided to simplify the graphics stack in Windows Vista.
The use of multiple graphics adapters occurred when graphics hardware vendors did not expose multiple connectors on graphics adapters. Today, almost all modern adapters support two or three connectors such as DVI, VGA, and S-Video. Also, most OEMs are now offering SLI/Crossfire configurations that support two or more graphics adapters that could also be used to connect more than two display devices when not in SLI/Crossfire mode.
I can't imagine how Lucid would make mix-and-match work without Microsoft's and/or ATI/Nvidia's support.
Win7, XP or even Linux are a different story (they allow multiple graphics drivers), but I doubt that they can make it work there either..
If they can do it however, I'll be the first one to applaud. :up:
Actually, Junos, it wouldn't work in XP either, because they still can't overlap when rendering one 3D application. On XP you could have one system with one AMD card and one nVidia card, but you'd also need two 3D applications to use both.
I remember what Lucid were saying back then, and what they are saying now that the product is around the corner are two different things. I have no doubt that they have the technology to use any vendor's cards together, but Shintai is absolutely right: what can be done in Lucid's lab and what Windows can do with the technology in our homes are very different things. With Intel as a major backer of this project, you can be sure that Larrabee will probably gain the most from the tech anyway. From that business perspective, Intel stands to sell more GPUs in that segment if there are no cross-vendor capabilities. Purposely being crippled? Who knows?
I wouldn't call Intel a backer, btw. It's Intel Capital; they are simply a very broad investment company.
https://www.intelportfolio.com/portco/cps/
this thing would be perfect on an X2 card instead of the PLX.
The only way I could see it working like that image is if the Hydra chip contains all the data needed to operate all the possible graphics card options. So Lucid would either have to get AMD, nVidia, Intel, S3 etc. to all submit code to Lucid so their cards work, OR have all the driver code written/customised (hacked) by Lucid and built into the chip via firmware/BIOS updates.
It's just too pie in the sky with current driver, OS and manufacturer capabilities. It would take a miracle of them working together, or patents/copyrights being broken, for this dream to happen. For 2009 the best I can see it doing is enabling CF/SLI/whatever Intel uses/MultiChrome on any board, as long as it has a Hydra chip on it, and possibly better scaling, which I have yet to see.
Honestly, what percentage of the people who would use Lucid would also use cards from two manufacturers? Just being able to use any ATI card combination (or nVidia, if that's your company of preference) would be enough to sell me on the idea. I've wished for Xfire or SLI to support older and newer cards at the same time; I usually spend $300 every two years, and the gap is too wide for them to be compatible.
But I also wonder how the card companies can learn from this and try to take advantage of it too. All the motherboards with onboard graphics may one day be used just like the Lucid chip, to determine how to allocate resources to any graphics card combination (within their own brand).
Read more, guys. How this works is that it supports any cards, as many as can fit on the motherboard supposedly, as long as they are supported by the same driver. So no ATI plus nVidia until hacked drivers from wherever can do that; the one driver has to support all the cards used together.
Supposedly, according to Lucid, it will be "near" 100% scaling, so the card you add will contribute (almost) as much performance as it gives by itself.
If you don't know, SLI and Xfire DO NOT do this, not even close.
By the way, reading some of these posts: I never remember Lucid saying anything, ever, about mixing and matching vendors. What I said above is what they have said many times in the demos, so I don't know where some of you guys are getting your stuff; maybe I haven't seen it yet, or maybe it's rumor.
:google:
http://www.theinquirer.net/img/9979/lucid_rendering.jpg
It's all right there in my sig. I told everyone to look into Lucid... the company, not just its product (the Hydra chip). The engineers designing this thing have some serious credentials.
If this thing is for real, we might be saying bye-bye to SLI/Crossfire... perhaps that is what AMD plans to use that extra bus for... something to help the Hydra chip...?
:shrug:
Guess my old X1800XTX might be worth something...?
I think we are all imagining the same thing: a dual-GPU card, either single-PCB or sandwich-type dual-PCB. A dual GPU with Lucid's chip would make an awesome card if Lucid's chip works as advertised, so Q1 2009 we'll see :D
Metabyte was able to do image splicing.
http://www.sharkyextreme.com/hardware/metabyte_onsite/
edit - more info on "Stepsister" ;) http://www2.sharkyextreme.com/hardwa...byte_tntsli_p/
makes me want to find a pair of Metabyte TNT cards to play with..
With different cards I didn't just mean ATI/nVidia, but also cards of different generations from the same vendor... Still, this doesn't answer my question: will Lucid use its own driver while you still need the GPU drivers installed? How do these two devices (the Lucid chip and the GPU) work together in the OS? Etc., etc...
I think you misread what I wanted to say. I said "SLI/Crossfire-like functions", not that it'll use SLI/Crossfire, and by "SLI/Crossfire-like functions" I just mean multiple graphics cards at a time... :rolleyes:;)
They had a video of a demo of the system, and IIRC it had two different nV cards in it? The demo showed two screens of what each GPU was rendering, and so each screen had black or empty textures. So somehow the Lucid chip is delegating rendering of individual textures within a frame?
I don't know, but I can't wait to see it.
Nope. It was two 8800GT cards, and from the same maker etc.
http://i.i.com.com/cnwk.1d/i/bto/200...0s_540x405.jpg
The Lucid chip is doing some kind of OpenGL/DirectX interception in the chip, I think (or in the OS via a driver that tags the frame setups). I'm just guessing. But it's pretty limited in what it can do if it's hardware-locked. One thing is sure: it collects the framebuffer results from both cards and sends them back to one for display in the end.
Some facts: the Hydra 100 chip is PCIe 1.1 and supports DX10.1 and OpenGL 2.1. It doesn't support OpenGL 3.0, OpenCL, DirectX 11 or PCIe 2.0.
It honestly looks very, very ugly for the PC scene. But it would be nice on consoles and other platforms where you control the hardware.
http://www.lucidlogix.com/files/hydr...duct_brief.pdf
I fear that too much PR/hype has gone into the Hydra chip.
"So, how do the multiple GPUs connected to a HYDRA ASIC relate to the host system? The Lucid HYDRA Engine is capable of bringing together dissimilar cards to work on rendering tasks - however, there is one limitation: All GPUs must be able to share the same software driver. That means you can forget mixing and matching cards with ATI and NVIDIA GPUs in the same HYDRA configuration. Cards from both makers can theoretically be present in the same system, however - you could have a HYDRA array of ATI Radeon HD 4870s handling the video rendering, with a low-end NVIDIA GeForce 8600GT handling PhysX acceleration, for example."
http://techgage.com/article/lucid_hy...u_technology/1
Here's a good article that breaks it down a bit more. It also throws up some interesting questions, as I get the impression that Lucid's driver is probably going to need specific game profiles to make sure the right algorithms are used. There are some interesting multi-GPU ideas in there too. I'd love to know what nVidia et al. think of this, to be honest.
The fact that it's rendering only partial sections of the environment is great.
We're so used to CF and SLI, which alternate frames or do half a screen each, and end up costing the same memory on each card. If Lucid can keep half the textures on each card, it would be killer to see 2x 4870 512MB do just as well as a 4870X2 2GB at any resolution/setting.
The scariest thing seems to be latency, but we can't argue that this thing is a great concept that we all hope brings new ideas to how GPU scaling really should work.
And for the console idea, I wonder if this would help the next-gen systems use many inexpensive GPUs and let the programmers decide which GPU handles which pieces of the environment.
YES, it can use cards from different generations. As stated many times, ANY CARD supported by one driver will work, so looking at the NV card support list on some of the drivers, you will obviously be able to use cards from different generations together.
To answer the question about working with drivers in the OS: not sure exactly how that will be handled, but this I know: Lucid intercepts the data on the PCIe lanes before it ever reaches the video cards and then routes the information accordingly. So there might be some software inside the OS, but I expect not much, and not much conflict with drivers at all, since the information the cards receive is still the same, just on a different payload scale.
Also, as for SLI/Xfire functions, not sure exactly what you mean by that. As stated before, put SLI/Xfire out of your mind. This chip, again, intercepts info before it ever reaches the graphics cards, so multi-GPU functions won't need to be "enabled" or anything of the sort.
Sorry for misinterpreting what you said.
I would suggest reading this http://www.anandtech.com/tradeshows/showdoc.aspx?i=3379 to understand exactly what's going on.
2x :banana::banana::banana::banana:? Imagine 4, or 6; supposedly Lucid said it supports up TO 16!!! :shocked:
Put it this way: if Hydra works the way they say it is going to, anyone with money will be able to have a constant 100 FPS in any game they choose, at any resolution and settings. The only holdback will be how many PCIe slots are available on the board.
The crazy thing too is that this will be achievable with the cards we have now, and those will still be usable with cards that come in the future!!!! Imagine what PC gaming will be like in another 5 years if this all goes as well as we hope.....just wow!!!
I do not believe that only Intel will have this on its Smackover board; Lucid would have contracted with other manufacturers like Asus and Gigabyte (the usual suspects).
Pics or it didn't happen as well.
This seems too good to be true, and ATI and nVidia will probably do everything they can to stop this technology from coming out.
If ATI and nVidia, who have been in this market forever, still have problems with drivers, then I have my doubts about Lucid.
uh... k..
errr wtf are you talking about?
NVIDIA and AMD have a hard enough time making their multi-chip solutions work without issues. They have the ability to design their hardware and software however they want for this to work well, and there are always still issues with it. This should tell you something.
If the masters/creators of the technology can't make it work flawlessly, what makes you believe some random no-names with no history of making anything will be able to stick in an extra layer of hardware and software and have everything work perfectly, if at all?
And as Shintai said, you can't have 2 drivers active at the same time with the LDDM/WDDM: http://en.wikipedia.org/wiki/Windows...y_Driver_Model
Therefore you can't run an ATI card with an NVIDIA card. Even if you could.. why in the hell would anyone want to? You really want to open the door to all the chaos that can ensue at the software level? You know both drivers have been designed under the assumption that they would have full control over the system's resources at runtime with a 3D application.
Also, why would you want to run a slower and a faster card together? Do you really want to invite micro-stutter into your gaming experience, with every odd-numbered frame rendering fast on the faster GPU and every even-numbered frame rendering slower?
What is the point of having a high framerate average if it's not smooth when you play it, and therefore feels like a lower framerate? The most important thing in the end is your experience. No one wants the framerate for the sake of it.. if you do you're missing the point, or you're just a benchmonkey who doesn't game :)
I don't know about you guys but if someone tried to stick this chip that probably won't pan out to be what it's said to be onto my motherboard, and it's sucking up power, I'd probably want to smack the people who thought it would be a good idea. Maybe that's just me though ;)
Sr7, you make some good points, but I think you've got it a bit twisted when it comes to not using identical cards. Hydra doesn't use AFR; each frame is analyzed and the rendering required is divided between the two cards. A weaker card can be assigned less work in the same frame, proportionally to its power. It's not SFR either, where they just draw a line and assign sides to cards; if you look at the demo screenshot you will see that while one card is rendering the floor, the other one is rendering the wall in front of the player. I'm not saying it would work perfectly with cards of unequal power, but with Hydra both cards render every frame together, so there should be no microstutter due to sync latency (although I wonder if the Hydra chip will introduce some latency when it's figuring out how to divide the load).
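To put rough, made-up numbers on the pacing difference I mean (a very crude model that ignores CPU submission and frame pipelining):
Code:
# Made-up numbers, purely to show the pacing argument above.
fast_ms, slow_ms = 10.0, 25.0   # whole-frame render time on each card

# AFR: frames alternate between the cards, so the delivery gaps alternate
# too -- that's the microstutter pattern.
afr_gaps = [fast_ms if i % 2 == 0 else slow_ms for i in range(6)]

# Hydra-style split (idealised): every frame is shared, with the slow card
# given a smaller slice so both finish at about the same time.
split_ms = 1.0 / (1.0 / fast_ms + 1.0 / slow_ms)   # ~7.1 ms per frame
split_gaps = [round(split_ms, 1)] * 6

print(afr_gaps)    # [10.0, 25.0, 10.0, 25.0, 10.0, 25.0]  -> uneven
print(split_gaps)  # [7.1, 7.1, 7.1, 7.1, 7.1, 7.1]        -> even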
Ah yes, okay, so microstutter may not be an issue, but really.. how would the Hydra chip know how much work to give each chip each frame? It can't know the chips' bottlenecks or general performance and account for them in realtime.
There are all kinds of quality issues that would come into play too.. say in one frame an older NVIDIA 6-series renders the window or some texture, then the next frame it's done by a newer 9-series. You'll get different image quality each frame? This still seems totally infeasible.
If it were worth it, it would've been done by now. And you're right. Adding more latency into the process is a bad idea. Mouse lag ftl.
I see what you're getting at, but it does work; it has been demoed quite a few times and tested by reviewers, so it's not like there's no proof here. They have tested it with many different types of cards.
You do bring up a good point about image quality on a given card, and I for one am also curious about this.
But one thing you are hugely mistaken on.
Not true at all; there are SO, SO many things that are hard to get adopted because of the market or pure greed.
Quote:
If it were worth it, it would've been done by now.
Think about it... if you were a corporation with shareholders staring at you, would you choose SLI, or work on a tech like Hydra? The money would be in SLI, but only if the Hydra chip didn't exist. nVidia and ATI didn't want a better solution, because with SLI and Xfire you sell more new-series cards, rather than giving the consumer a chance to use old ones they've already bought for multi-GPU. Which makes total sense of Intel funding this: they are just breaking into the GPU market and need something to fight SLI/Xfire to the grave, and this is it. Whether they will ultimately end up making decisions for the project is yet to be determined.
The line you quoted is what we wish were reality, but in reality it's not true. Money controls the market, not accommodation, simplicity or efficiency.
Honda just recently, finally, made a hydrogen car that is no different from a normal car, an Accord: top speed 125, 0-60 in 9 seconds, blah blah blah, same as any other car, but it runs on hydrogen, and they are the first to do it. Now how long do you think this will take to become the normal car? I'm guessing damn near till the oil runs out.
Hell, cars that fly are closer than you think if some scientists would take the time to do it, but they are in no hurry, because it would crush a lot of markets, tires for instance, among other things.
Same goes for the hydrogen car: one moving part, no oil changes, less maintenance = mechanics out of work, and so on.
Now this tech doesn't really fall under the same category, but you get my point.
While I understand your point, I have to say that you are wrong here. I'm not trying to come off as rude but your response is just a bit naive. Here is why...
First.. to compare the status quo of oil (mega corporations) to the state of multi-gpu is a bit of a non-sequitur. Basically that in itself is not comparable, because oil lobbies governments like crazy to get special interest wedges in place. That said, there is *absolutely no reason* why AMD and NVIDIA wouldn't have done this if it made sense and worked properly. It would give them 1 more chip (see relevance) in the average enthusiast system, which they could keep from Intel and hold over their heads. At some point you need to stop and realize that maybe it wasn't done because it doesn't make sound technical sense from the driver or latency or image quality perspectives... things aren't always about "evil corporations".
The analogies you make also fail in that in the tech industry... if you think you can just hold onto technology without changing for decades, you're gravely mistaken (in this industry more than any other). Look at what happened with hard disks... SSD's came to the marketplace and the hard disk manufacturers who were not prepared for/invested in this future tech were caught with their pants down. Now almost all the patents and technology they have spent years building are irrelevant. It's a vastly different business when the mechanical element goes out the window and flash memory comes into the picture. The smart guys hedged their bets and are now making SSDs with IP they invested in years ago. The slow guys who thought they could sit on their tech forever got burned, and are now making attempts to sell off their hard disk drive businesses, as the average price for a HDD plummets (no profitability left in the market).
Secondly, SFR (split frame rendering, what this Hydra technology uses) used to be the main method multi-gpus used... until AFR came along and became the standard. Now all default crossfire and SLI profiles use AFR. Why? Better scaling. If you take any NVIDIA GPU and test the default SLI profile, then compare to SFR scaling, you'll see my point (same goes for ATI but you can't force SFR on their products through their CP).
The way that 3d graphics work, you can't just arbitrarily chop up a frame and send it to two different cards and expect a proper image to come out of it. What about data that spans into both sub-frames? States get set in DX runtime calls, and you can't just round robin the draw calls.. you'd have to make each GPU fully aware of the rest of the frame also, meaning you have inherently duplicated work, even if they're not each computing the whole frame.. they're still overlapping, which is always a bad thing here. I have no doubts they've demo'ed the tech but I imagine they have some hacks in place to work around shortcomings. If you think you have driver problems now with SLI or CF, wait till you add more latency and another layer of software to the mix.
Last, as I mentioned before, the reason microstutter exists is the same reason Hydra can't effectively dole out work to arbitrary GPUs... it has no way of knowing how long the different chips are going to take to process each of the frames, especially because framerates in realtime games can vary wildly from frame to frame, so you can't use knowledge from prior frames to predict the new frame, because maybe that frame was way faster or slower than the 20 preceding it. Also, knowledge of each GPU's capabilities, let alone their speeds at different parts of the graphics pipeline, is impossible to effectively assess, especially for 3rd party software with no access to the GPU/driver internals. It's hard enough for the chip to know definitively "okay, this chip is this fast relative to this chip, I need to send 20% of the frame to the slower chip", let alone the even more complex issue of "when I send 20% to the slower chip, how do I predict if that architecture of GPU will be quite bottlenecked for this particular piece of geometry vs. the other GPU architecture?"
My final gripe would be the fact that Lucid is putting out beautiful marketing slides and getting people's hopes up. They're promising 100% scaling no matter what. The sad truth is this will just not work the way they're selling it. Crossfire and SLI take a lot of flak for not always scaling to expectations. What people don't realize is that you can only scale up to the point where you are CPU bound and have no more graphics work to do, so many times this is why things don't scale further. Adding more graphics cards to the equation after the point of complete CPU-boundedness will only hurt framerates, not help. Also, there are issues around inter-chip communication. Currently you need SLI and Crossfire profiles to tell the driver "hey, I know the game developer didn't re-render this resource, or clear it to signal that they don't need you to retain any data, but you can safely discard this data and no corruption will occur". Lastly, if a game isn't scaling because of CPU/GPU synchronization, you won't see scaling either. Hydra won't fix that.. so much of this lies in the game developer's ability to properly handle these cases (or a profile having to exist for each game, as they do in NVIDIA and ATI drivers).
Without this, you either take the conservative route to avoid corruption and assume the game developer meant to have the present frame depend on the previous one (and transfer between GPUs, hurting scaling, which can't happen with the Hydra system), or you take the aggressive route and say "I don't care if the developer expected me to preserve some data between frames because they default to programming under the assumption of 1 GPU with an "untouched" set of memory from the last frame... kill the previous frame data and risk corruption anyway!"
The bottom line is if you have multiple GPUs they need to be able to talk via the driver and be aware of one another, and this isn't possible in the case of different manufacturers, or a non CF/SLI configured setup.
There are so many reasons this is a bad idea (though it's great in theory), you just may not realize it yet. It's a bit of a pipe dream in the long run IMO.
Bro..
The GPU is now the Hydra chip. When you mount a new video card to the subsystem, it knows what that card's capabilities are, and then the Hydra chip sends tasks to it.... except now, the Hydra chip uses as many ROPs/texture units/stream processors as you have...!
How hard is it to understand that the Hydra chip is a microprocessor that controls the PCIe bus and ferrets out the hardware and makes it its b!tch..!
You simply cannot understand, because you have not looked at this objectively, but as a naysayer, like always..!
I don't think the Hydra uses SFR; split frame is the idea of rendering half the pixels (top half vs. bottom half), not half the work. No CF or SLI mode seems to do anything like what the Hydra is doing. And I wouldn't be too worried about latency issues: it's not trying to send gigabytes of textures to each card, it just tells each card what to render, and the end result after AA and AF is applied can be moved around quickly with almost no lag. And besides, who says there won't be driver updates so the Hydra knows better what it's working with? I am 100% for this idea over any CF or SLI solution; I'm tired of throwing away my old cards.
From the August Lucid news:
Quote:
The HYDRA 100 Series is available now for customer validation with volume delivery expected in Q4 2008. Consumer level devices are expected to reach the market in the first half of 2009.
If this is true, Intel may be able to deliver or announce a new board soon.
I hope so ...
Why are most people getting hung up on the cross-brand compatibility between an ATI and an nVidia card?
This thing's biggest bonus to me seems to be increased performance with 2-3 of the same cards working together in a better system than SLI or Crossfire.
Doesn't this chip allow all the cards to use their full RAM, instead of being limited to the RAM size of one card? Or was that false?
It should allow each card to use its memory to the full, since the textures aren't all there (the black parts). But Lucid have been very poor at releasing any details, so it could also be limited, though that would be very unlikely.
However, you need to hide it from the game etc. somehow, or override it. Otherwise the game or DirectX might just report 512MB and not, for example, 1536MB to the game engine.
All that for no reason. First of all, I ended my post with this...
Second, I never said anything about evil corporations, ever. I said the market and money control things, which is hardly evil; it's logical, and it's what has to be done under the eagle eyes of the stockholders.
Quote:
Now this tech doesnt really fall under the same category, but you get my point.
While things move rather quickly, companies still try to hold onto things that are big money-makers; it's obvious. Look how long nVidia dangled around with the G80, and SLI and Xfire have now been around for quite some time.
You want to make this massive post about how you don't think this works, yet it has been demoed many times and without fail. Are you saying the demos are fakes? Now who's talking evil corporations?
Not trying to be offensive; it just seems you are arguing something with no "direct points". I understand some of your points, but where do they lead?
It's like you're almost using your points to say Hydra is bull:banana::banana::banana::banana:, that it doesn't exist and won't work.
When they are saying release is Q1, the chip is done, and they explain what it does and have shown what it does.
So they are lying about it all?
I too want to know some of the answers you are seeking, but you and I are different here: you are asking the questions as if the Hydra chip is a theory, whereas I want to ask the questions to learn the answers that I know exist.
You're coming off almost as a conspiracy theorist, like it's some big bull crap all made up to get people going.
I must say, you are one depressing cat.
Wow....just wow.....
Read the thread, mate, and the links posted too. Here's another one with explanations and original questions: http://www.pcper.com/article.php?aid=607
You need to understand how they're load balancing the GPUs, buddy. There will even be situations where one GPU is rendering ALL the textures etc. while the second GPU is running just the lighting passes or other effects. AFR and SFR will be used ONLY when one of them is the best option for that PARTICULAR frame, as the Lucid Hydra will be CONSTANTLY changing the way it balances the load across GPUs. That's the magic.
Thinking about an earlier concern regarding image quality across different generations of card: I'm not sure it will be a problem, as the Lucid chip is recompositing the image on a per-pixel basis and outputting via the one GPU connected to the monitor you're playing on. I.e., if I had a GTX 260 paired with an 8800 GT, I'd be using the GTX 260 to output to the monitor, so IQ should be moot. A concern I do have is how AA and other filters will be applied.
There's a difference between lying and inflating the truth while trying to get investors. I'm sure they have something they've shown as I alluded to already, but it's not as easy as you guys think to make this type of thing work, let alone work fluidly.
Come on, you honestly think you can have their 3rd party tech know how much work to send to each GPU? How can you get 100% scaling unless you perfectly divide the work? If the workload is different (one does geometry, one does textures) as you say, how could you possibly know the amount of time it would take to render each portion, to ensure you're getting maximum overlapped rendering (i.e. scaling)?
Not to mention the entire point of a graphics pipeline is you're bottlenecked somewhere in it... if you're bottlenecked by texturing, it won't do you any good to split off the geometry rendering phase to another GPU, because you'll STILL have that same texture bottleneck in the chip doing the texturing. Do you guys understand how bottlenecks work at all? You're only as fast as your slowest point in the chain.
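Rough numbers (completely invented, crude serialized model) to show what I mean about the bottleneck:
Code:
# Invented per-frame costs in ms on the texture-bound card.
geometry, texturing, raster = 2.0, 12.0, 3.0

frame_alone = geometry + texturing + raster          # 17 ms on one card

# Offload only the geometry pass to a second card: the texture-bound card
# still has its 12 ms of texturing (plus raster), so the best case is
# capped by that bottleneck -- nowhere near the "ideal" 8.5 ms.
frame_offloaded = max(geometry, texturing + raster)  # 15 ms

print(frame_alone, frame_offloaded)  # 17.0 15.0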
Different generations of chips do operations at different precisions.. you'll end up with different colors running the same thing through each chip. You really want to combine those into 1 final image?
Clearly you guys are sold on this but I really think there is a lot of naivete here. I'm not accusing any corporation of lying, but I think you guys are just lapping this :banana::banana::banana::banana: up. Clearly there are a lot of people posting here who don't understand how this stuff works, and that's okay.. but please don't try to flame someone who clearly has more knowledge on the topic.
By measuring the latency of the workload sent and adjusting accordingly. It's really not too hard to have a monitor built into the chip that checks how long it takes for each GPU to send its frame data back. And who knows, the Hydra chip might come with a benchmarking-type program that measures the GPU(s) for performance and uses that info to know which is better for which features.
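Something naive like this is all I'm picturing (completely hypothetical, not anything Lucid has described):
Code:
# Hypothetical "measure and adjust" feedback loop for splitting a frame
# between two GPUs; names and numbers are made up for illustration.
def adjust_split(share_fast, t_fast, t_slow, step=0.05):
    """share_fast: fraction of the last frame given to GPU 0.
    t_fast / t_slow: measured time each GPU took on its slice (ms)."""
    if t_fast < t_slow:        # GPU 0 finished early -> give it more work
        return min(0.95, share_fast + step)
    if t_slow < t_fast:        # GPU 1 finished early -> give it more work
        return max(0.05, share_fast - step)
    return share_fast

share = 0.5
for t0, t1 in [(8.0, 14.0), (9.0, 12.0), (10.5, 10.0)]:   # measured per frame
    share = adjust_split(share, t0, t1)
    print(round(share, 2))    # 0.55, 0.6, 0.55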
That wouldn't work, otherwise microstutter would be solved by now. You can't just "measure and adjust", because you don't know what the next frame's frametime will be. Each frame can be wildly different; run FRAPS sometime to see your frametimes on a per-frame level, then look at it and ask yourself "what could a Lucid driver extrapolate from to ensure it knows how fast one GPU is relative to another?" Each chip will be faster or slower at different points, depending on clocks and architectural bottlenecks. They can't just predict that kind of stuff.
If you did what you say, you'd end up slowing down potentially faster frames.
Uh, I'm not talking about how Hydra works, I'm talking about general rendering... clearly many people have swooned over the idea and bought in full bore :shrug: Call me a skeptic.
People should be careful about the Hydra hype so far. It could turn out to be great. It could also turn out to be nothing but a turd.
2 week wait...I am sure we will all see this at CES
Sr7, you keep saying 100% scaling, and they have never said that's constant, or even frequent; they have said "near" 100% scaling. Just to get that straight.
Who knows what "near" is, but obviously it's better than SLI/CF by at least a good margin, or it wouldn't be going into production. Better is good.
Oh, and "inflating the truth to get investors" doesn't really make sense; this has been Intel-backed for quite some time now, and not many more investors are needed beyond that. And in case you haven't noticed, Intel delivers, most of the time, on what it promises. I very much doubt they would invest in and support something that isn't going to do the same.
I'll admit this could turn out to be a lot less than what we hope, but I have no doubt it will be a better alternative to SLI/CF.
100% scaling is a possibility, and given recent performance figures for SLI/X-Fire, 100% (or very near) could be very doable in many situations, provided you are using the proper hardware.
I suggest you guys all crack a cold one and sit back, there's no reason to be speculating like this when it could turn out to be another Bitboys... or Phantom.
I'm still very sceptical about Hydra... if it were that easy to get multiple GPUs to scale almost linearly, you'd think that ATI, nVidia, Intel or anybody else would have come up with something like this a loooooooooooong time ago.
Do we know how the "recombining" of the picture works? If you use a lot of different cards, it seems like you'd also have to use a Y or W DVI connector, because I can't see SLI/Crossfire connectors working that well across several generations.
BTW, check out the PDF if you haven't; seems legit :) http://lucidlogix.com/files/hydra_100_product_brief.pdf
Hmm, it also reads a bit like "any GPU from any vendor", but they never state clearly that you can mix vendors at the same time..
Even above-100% scaling is possible if it offloads some of the processing to the Hydra chip as well :s
I think you're honestly making that up :D
The chip would have no knowledge about the cards, and a new card would ruin it quickly.
Personally, I think Hydra will be good for consoles with locked hardware. On the PC it will flop so hard it hurts. And as someone else pointed out, it smells of Bitboys.
Plus, lag and stuttering issues have the potential to be very bad. Also, the Hydra chip as such doesn't do anything that AMD/nVidia couldn't do... if it worked. It would be much more elegant in drivers, both for fixes and for future compatibility with DX, OpenGL and OpenCL versions, and of course new cards and such.
Why can't the chip have knowledge of the capabilities of the GPUs? It could be as simple as a GPU database with a GPU-Z-like app running in the back end.
Also, who says this won't have its own ability to stress the system and find out for itself what the GPU-Z-like results would be?
This chip wasn't created by a bunch of idiots; we should have more faith in their ability to know the most basic questions, and if they are going to production with it, we can assume they have solved them too.
Just think: who would buy this if it had a noticeable half-second lag, or worse scaling than CF/SLI, or wasn't compatible with any video card except the few they built their own drivers for? If they can't get past those simple, obvious things, they will only cause their own failure.
What?? So you have to profile every GPU variant, each of which has different bottlenecks? Then they have to do GPU architecture profiling, and the drivers are constantly changing.. there's no way you can set this in stone and expect it to work correctly indefinitely.
They won't have the ability to look into the graphics driver and know exactly what it's doing or what work it's bottlenecked by. You can't just dole out work based on what *you* deem to be x% of a frame's workload, because you have no idea how each generation of chip (or driver) will handle the work. A database of GPUs doesn't tell you anything about the internals or architectural bottlenecks, just basic info. One generation's "SP" might not be equivalent to the next architecture's "SP". A database is a naive approach.
That really wouldn't be that hard to do; the processing units on GPUs (SIMDs, ROPs, texture units, bla bla) are not a secret, and there are not that many models. They are already doing load balancing now, so they obviously have a way to measure what a given GPU can do; a simple GPU-Z-type database could suffice. A new GPU comes out, Lucid updates the driver to tell it how much power and bandwidth the card has, and you are back in business.
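Something as dumb as this is all I'm imagining (specs and weights invented by me, purely to show the idea, not anything Lucid has published):
Code:
# Hypothetical GPU-Z style table; numbers are rough/invented.
GPU_DB = {
    "GeForce 8800 GT": {"gflops": 336, "bandwidth_gbs": 57.6},
    "GeForce GTX 260": {"gflops": 715, "bandwidth_gbs": 111.9},
}

def split_ratio(card_a, card_b):
    # Weight each card by a crude blend of shader throughput and memory
    # bandwidth, then hand out work in that proportion.
    def weight(name):
        spec = GPU_DB[name]
        return 0.7 * spec["gflops"] + 0.3 * spec["bandwidth_gbs"]
    wa, wb = weight(card_a), weight(card_b)
    return wa / (wa + wb), wb / (wa + wb)

print(split_ratio("GeForce 8800 GT", "GeForce GTX 260"))  # ~(0.32, 0.68)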