All it says is that it's "compatible with frame pacing technology". I don't know where they made the leap to "this technology basically fixes most of AMD's frame pacing problems".
Likely it's not that removing it had no performance penalty, but that keeping it did hurt performance.
Maybe they found the low-latency, low-bandwidth CrossFire link was harming performance. Unless they created a 16x PCIe link on top for CrossFire, it doesn't actually make any sense now that they're trying to prevent lag and micro-stuttering: if you have a lower-latency bus for CrossFire alongside the main bus, you introduce a calculation differential.
Does 1x PCIe of bandwidth really make any difference?
Sounds like CrossFire has left the old realm of "copy everything to both cards, process half on each" and moved toward how that hybrid ATI + Nvidia card tech was going to work.
Maybe this will also result in closer to 100% scaling now.
Even ATI said back in the 4870's time that the CrossFire link had become pretty useless and was only a bus used to say "hello" and initiate CrossFire.
AMD are certainly bringing some interesting new ideas to their hardware. I'm really interested to see how all these ideas pan out. We could really see some of the best performance gains over the next couple of years. Interesting times ahead indeed.
Remember they now support Eyefinity with 4K displays @ 60 fps (the Dirt 3 video)... I think that's the reason why they didn't just update the CF bridge.
I don't know about that. If you take a modern high-end video card and run Heaven on a Z67 board, then pop a sound card or something into the bottom PCIe slot, which drops the top slot to x8, you'll lose a few points. I noticed that with a GTX 680. It wasn't a huge difference, but I wouldn't be shocked if the difference were larger with Titan.
Nvidia's SLI doesn't have that issue, and the bridge does provide more bandwidth. They could have gone that route.
I'm just disappointed that no one is really going to test this. Or if they do, it'll be like AnandTech's awful PCIe 3.0 comparison on Titan, where they decided to use X58 with a full 16 lanes per card.
A few points of drop pretty much fits the 16x-to-8x difference that happens on any card.
XDMA serves the same purpose as the CrossFire bridge. The old bridge was roughly one PCIe 1.1 lane; now one of the 16 PCIe 3.0 lanes is dedicated to sync. The function is the same as the bridge.
16x PCIe 3.0 / 8x PCIe 3.0: no performance hit. 16x PCIe 2.0: no performance hit. 8x PCIe 2.0: maybe a minimal performance hit, but I think very, very small.
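To put rough numbers on that, here's a quick Python sketch using the nominal per-lane PCIe rates (roughly 250 MB/s for 1.1, 500 MB/s for 2.0, 985 MB/s for 3.0 after encoding overhead). Ballpark figures, not measurements:

```python
# Rough one-direction PCIe link bandwidth from nominal per-lane rates
# (MB/s, after encoding overhead). Ballpark figures for illustration only.
PER_LANE_MB_S = {"1.1": 250, "2.0": 500, "3.0": 985}

def link_bandwidth_mb_s(gen: str, lanes: int) -> int:
    """Effective one-direction bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

# The old CrossFire bridge was said to be worth about one PCIe 1.1 lane:
print(link_bandwidth_mb_s("1.1", 1))    # ~250 MB/s
# One lane out of a PCIe 3.0 x16 link, as described above:
print(link_bandwidth_mb_s("3.0", 1))    # ~985 MB/s
# A whole PCIe 2.0 x8 link, for comparison:
print(link_bandwidth_mb_s("2.0", 8))    # ~4000 MB/s
```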
What tells you Nvidia isn't taking the same road as AMD with Maxwell? Don't forget the latest GPUs from Nvidia are still Kepler-based... I have the feeling it's even a necessity for 4K displays and multi-monitor setups.
Anyway, the majority of the data (> 90%) for CFX and SLI is already moved through PCI Express. I'm not even sure the difference would be quantifiable in %...
The way GPUs handle information through PCI Express is changing now, with new GPGPU features including virtual memory access, tiled resources, etc...
AMD is not crazy; if it had a negative impact and sacrificed performance, they would have just kept the bridge.. lol
I've always been told that the SLI bridge handles more bandwidth. Why do you think you need a special bridge for tri-SLI? You're comparing apples to oranges.
Thank you. My question is: why not update the CrossFire bridge to something that can handle a little more bandwidth?
That's not what you said before. My point is that it's already happening on a slower card that's not putting excess stress on the PCIe lanes for multi-GPU sync.
No, it's the same.
To cause a significant drop requires a system bottleneck, typically the HDD during streaming, or GPU RAM if you're running your resolution/AA way high.
Switching from 16x to 8x only shows a small decrease in performance, typically due to DMA request depth: you have fewer lanes in parallel and have to stack requests deeper in series to make up for it. A small latency hit at most, and no bandwidth issues as long as you aren't running into the bottlenecks above, which is fairly hard on a single card with 3 GB or more of memory at under 1600p.
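Back-of-the-envelope, with assumed numbers (PCIe 2.0 at ~500 MB/s per lane, an ~8.3 MB uncompressed 1080p frame), the pure wire-time difference looks like this:

```python
# Toy comparison of pure wire time for one frame over x16 vs. x8.
# Assumed numbers: PCIe 2.0 at ~500 MB/s per lane, an uncompressed
# 1080p 32-bit frame of ~8.3 MB. Illustration only, not a measurement.
PCIE2_LANE_MB_S = 500.0

def wire_time_ms(size_mb: float, lanes: int) -> float:
    """Transfer time of size_mb over a PCIe 2.0 link with `lanes` lanes."""
    return size_mb / (lanes * PCIE2_LANE_MB_S) * 1000.0

frame_mb = 8.3  # one 1920x1080 frame at 4 bytes/pixel
print(f"x16: {wire_time_ms(frame_mb, 16):.2f} ms")  # ~1.04 ms
print(f"x8:  {wire_time_ms(frame_mb, 8):.2f} ms")   # ~2.08 ms
# Either way the copy fits easily inside a 16.7 ms frame at 60 fps,
# which matches the "small decrease at most" observation above.
```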
All along the watchtower the watchmen watch the eternal return.
Been out of the loop for a while. It's really exciting to read about what AMD has planned and is doing with this new generation. Can't wait to upgrade my 6950 (it pays to skip a generation or two to get that wow feeling). I remember that AFR rendering slide presentation AMD did when the 4870X2 first came out. If AMD has figured out a more effective and efficient way to render and improve CrossFire, even the haters have to tip their hats. Since I already preordered my PS4, take more of my money please, AMD!
And mantle...oh my!
I will be an early adopter.. anyone else?
http://www.overclock.net/t/1428344/v...#post_20890405
Why would you need it? It needs to be able to transfer that much data in that much time. While more speed and lanes will get it there faster, it only needs to be there by the deadline, which for a 60Hz monitor is 30 times every second for 2-card AFR.
For a 4k 60hz screen, it needs to be able to transfer a full 34MB of data every 33 milliseconds to get there by the deadline.
PCI-e 2.0 x8 (4GB/s) can transfer that in 8 milliseconds.
PCI-e 2.0 x8 can transfer one 1080p frame in 2 milliseconds.
Theoretically speaking, PCIe 2.0 x8 has enough speed to handle 4K Eyefinity at 60Hz. It would be like running CrossFire in a PCIe x2 slot, but you could do it. Practically speaking, you want a 2.0 x16 or 3.0 x8. Even at this most extreme of resolutions, you still don't need PCIe 3.0 x16.
oh, and because math:
PCI-e 3.0 x16 (16GB/s) can transfer 4k 60hz in 2 milliseconds.
PCI-e 3.0 x16 can transfer one 1080p frame in 0.5 milliseconds.
Pretty simple math to work out how much other resolutions and refresh rates will need.
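Here's that math as a quick Python sketch, for plugging in other resolutions and link speeds (assumes uncompressed 4-bytes-per-pixel frames and nominal link rates):

```python
# Checking the transfer-time math above: uncompressed 32-bit frames
# vs. nominal PCIe link speeds. Approximate figures for illustration.

def frame_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of one uncompressed frame in MB."""
    return width * height * bytes_per_pixel / 1e6

def transfer_ms(size_mb: float, link_gb_s: float) -> float:
    """Time to move size_mb over a link_gb_s GB/s link, in milliseconds."""
    return size_mb / link_gb_s          # MB / (GB/s) comes out in ms

uhd = frame_mb(3840, 2160)  # ~33 MB -- the "34MB" 4K frame above
fhd = frame_mb(1920, 1080)  # ~8.3 MB 1080p frame

for name, gb_s in [("PCIe 2.0 x8  (4 GB/s)", 4.0),
                   ("PCIe 3.0 x16 (16 GB/s)", 16.0)]:
    print(f"{name}: 4K frame {transfer_ms(uhd, gb_s):.1f} ms, "
          f"1080p frame {transfer_ms(fhd, gb_s):.1f} ms")
# 2-card AFR on a 60Hz display gives each card a ~33 ms deadline per frame.
```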
Also there's that video of Dirt 3 on 3x4K displays on a pair of cards in crossfire. Seems to work fine.
I don't know why, but I find some of the information in this a bit strange: no dynamic clocking, but it can reduce its clock? I don't think AMD has gone back to the 6000-series era. And the 288GB/s, as if AMD would have developed a 512-bit memory controller just to get the same bandwidth as the 7970...
Possible, but there wasn't much information in the non-NDA part of the presentation beyond "> 300GB/s" and "> 5Tflops"... You need 885MHz with 2816 SPs to match 5.0 Tflops, and their slide said higher than 5Tflops. Not to mention that at this clock speed you are far, really far from the triangle rate given by AMD.
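For reference, that 885MHz figure falls straight out of the usual shaders × 2 FLOPs per clock formula (2816 SPs being the rumored count quoted above):

```python
# Single-precision throughput for a GCN-style GPU: each shader can do one
# FMA (2 FLOPs) per clock, so TFLOPS = shaders * 2 * clock.
# 2816 SPs is the rumored shader count, not a confirmed spec.

def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def clock_for_tflops(shaders: int, target_tflops: float) -> float:
    return target_tflops * 1e12 / (shaders * 2) / 1e6

print(f"{clock_for_tflops(2816, 5.0):.0f} MHz")  # ~888 MHz for 5.0 TFLOPS
print(f"{tflops(2816, 885):.2f} TFLOPS")         # ~4.98 TFLOPS at 885 MHz
```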
Charts... Titan's minimum fps is higher than its average...
Chart #2... Titan's minimum fps and average fps are exactly the same.
Sure about the validity of those "tests"?
Being color blind, it's hard to see which line is which in the graph. I can see the difference in the colors of the labels, but in the graph it's just useless; I can't read it.
Something like blue and red would have worked.
I see that a lot lately: people put together these graphs with half the colors so close together that I can't tell them apart when they're all squiggly lines in a graph.
I don't mean it as a rant, but I'm just saying: use colors that aren't so close together in the spectrum, guys...
Time to release my Asus 7970 TOP
http://www.3dmark.com/3dm/1255142
A probable score for the 290X...
Some comments I see on Beyond3D (source of the Futuremark link):
"Graphics score: 10882, almost 1000 more than Titan. 9238 total score."
"If this were true, why did AMD's slide only show near 8000 for the R9 290X?"
"I don't know if it's true or not (I mostly think it's a fake, could be overclocked, etc.), but I wouldn't base too much on what we saw at the presentation about performance. They really didn't give any precise numbers, only hints ("higher than", "more than"...)."
"Memory reported = 3,072 MB. Doesn't the R9 290X ship with 4 GB?"
"GTX Titan shows 3GB in 3DMark too."
"W1zzard fixed the clock info in the TPU post: 32-bit vs. 64-bit related problem."
http://www.techpowerup.com/forums/sh...8&postcount=16
The 290X result shown above is likely fake. However, that's not to say the 290X won't reach those levels.
My (sold) eVGA GTX 780 SC ACX pulled more than that with a memory OC and the core at effectively its stock boost (I installed a Ti BIOS so it would never throttle, but the out-of-box boost was ~1137 and I ran it at 1150 24/7). It was hitting ~11200 graphics score with just that... I'm not impressed unless this thing OCs like a beast and is thus able to go higher than an OC'd 780 (core + mem) would, or it comes in at a notably nicer price (excluding the paltry limited-availability BF4 bundle edition that virtually no one will get, if the rumors of 8k units worldwide are true). Particularly considering the missing features, and that 780s can now be overvolted for great core clocks.
EDIT: And it's entirely possible I'm getting jaded at companies just managing to match each other months later, whichever way it swings, but if a card coming out several months later isn't appreciably better in some way, I just shrug my shoulders at this point.