
Thread: Nvidia's mainstream Fermi parts arrive in late Q1

  1. #1
    Xtreme Enthusiast
    Join Date
    Aug 2007
    Location
    Orange County, Southern California
    Posts
    583

    Nvidia's mainstream Fermi parts arrive in late Q1

    Nvidia's mainstream Fermi arrives in late Q1



    Launching in March if all goes well

    Nvidia's mainstream-market version of its GT300 / Fermi chip is scheduled for a Q1 2010 launch. This is not particularly good news for the company, as ATI is already shipping its mainstream cards based on the Juniper chip, priced at around €130 / $169 for the more expensive part. Of course, we are talking about the HD 5770 and HD 5750, both of which started shipping yesterday.

    To put this in perspective, Fermi will be released as a performance-chip card, a high-end single-chip card, and a dual-GPU card that might launch a few weeks later.

    When it comes to the entry-level and mainstream segments, we will most likely have to wait until around March, if not even later. Despite the huge flexibility of Nvidia's Fermi GF100 architecture, it still takes time to derive slower performing chips from the original design.

    We have a feeling that ATI might capture a much bigger piece of the DirectX 11 market than anyone expected, due to being first to market and shipping a month before its competition. Both of these factors are very beneficial for raking in good sales numbers. While Nvidia's mainstream cards might end up faster, they will unfortunately come a month behind the competition.
    Source
    EVGA X58 SLI Classified E759 Limited Edition
    Intel Core i7 Extreme 980X Gulftown six-core
    Thermalright TRUE Copper w/ 2x Noctua NF-P12s (push-pull)
    2x EVGA GeForce GTX 590 Classified [Quad-SLI]
    6GB Mushkin XP Series DDR3 1600MHz 7-8-7-20
    SilverStone Strider ST1500 1500W
    OCZ RevoDrive 3 240GB 1.0GB/s PCI-Express SSD
    Creative X-Fi Fatal1ty Professional / Logitech G51 5.1 Surround
    SilverStone Raven RV02
    Windows 7 Ultimate x64 RTM



  2. #2
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Lancaster, UK
    Posts
    473
    God, we need a separate area for all this stuff.

    http://www.xtremesystems.org/forums/...50#post4062750
    CPU: Intel 2500k (4.8ghz)
    Mobo: Asus P8P67 PRO
    GPU: HIS 6950 flashed to Asus 6970 (1000/1400) under water
    Sound: Corsair SP2500 with X-Fi
    Storage: Intel X-25M g2 160GB + 1x1TB f1
    Case: SilverStone Raven RV02
    PSU: Corsair HX850
    Cooling: Custom loop: EK Supreme HF, EK 6970
    Screens: BenQ XL2410T 120hz


    Help for Heroes

  3. #3
    Wanna look under my kilt?
    Join Date
    Jun 2005
    Location
    Glasgow-ish U.K.
    Posts
    4,396
    What's this "a month" chat? If March is true, ATI have 6!!! MONTHS of unrivalled DX11 sales.

    ATI... if somehow you read this.... DO NOT DARE **** away this lead. Most companies would kill for the advantage over a rival you have right now


    EDIT: It's mainstream Fermi, not Fermi of any kind..... so not a 6-month lead at all. I'll leave this post up so people can point and laugh at me if they want.
    Quote Originally Posted by T_M View Post
    Not sure i totally follow anything you said, but regardless of that you helped me come up with a very good idea....
    Quote Originally Posted by soundood View Post
    you sigged that?

    why?
    ______

    Sometimes, it's not your time. Sometimes, you have to make it your time. Sometimes, it can ONLY be your time.

  4. #4
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    He was talking about mainstream - Cedar and Redwood. They're Q1 I think.

  5. #5
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Nope, Juniper is also mainstream. Midrange actually.

    $159 and $129 are mainstream prices.

    Cedar and Redwood are low end.

    So ATI does have a 6-month lead in the DX11 mainstream market.

  6. #6
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Is anyone really surprised?
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  7. #7
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Despite the huge flexibility of Nvidia's Fermi GF100 architecture, it still takes time to derive slower performing chips from the original design.
    that statement makes no sense and contradicts itself...
    if GT300 sports HUGE flexibility... then why does it take so long to cut it down?
    then what would you call ATI's RV8xx? does it sport MONSTROUS flexibility? cause they could launch full-fledged and cut-down parts within weeks of each other... not a quarter, "BEST CASE" scenario...

    I'm really curious why it takes so long...
    I can't believe that it's an engineering thing, it looks more like bad planning and/or execution...

  8. #8
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by saaya View Post
    that statement makes no sense and contradicts itself...
    if GT300 sports HUGE flexibility... then why does it take so long to cut it down?
    then what would you call ATI's RV8xx? does it sport MONSTROUS flexibility? cause they could launch full-fledged and cut-down parts within weeks of each other... not a quarter, "BEST CASE" scenario...

    I'm really curious why it takes so long...
    I can't believe that it's an engineering thing, it looks more like bad planning and/or execution...
    That's not the only reason. Another reason is they want more people to buy the more expensive high end part. GT200 is very flexible as well. Look at the GT216 (used in GT220) with its ~100mm^2 die size and DX10.1 support.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  9. #9
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    anyone else notice he said we should see a fermi x2 within a few weeks of fermi's release?

    Fermi will be released as a performance chip card, a high-end single chip card, and a dual-GPU card that might launch a few weeks later
    100% not possible.

  10. #10
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Manicdan View Post
    anyone else notice he said we should see a fermi x2 within a few weeks of fermi's release?

    100% not possible.
    It is still way too late because Fermi itself is late.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  11. #11
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by saaya View Post
    that statement makes no sense and contradicts itself...
    if GT300 sports HUGE flexibility... then why does it take so long to cut it down?
    yeah, really... what's the definition of "huge flexibility" anyway? If it takes this long to flex it, how come it's hugely flexible??
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  12. #12
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by saaya View Post
    I'm really curious why it takes so long...
    I can't believe that it's an engineering thing, it looks more like bad planning and/or execution...
    It is an engineering/design thing...
    I wish we knew more info about how exactly they design the lower end chips. I have a few theories but nothing more than that.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  13. #13
    Xtreme Enthusiast
    Join Date
    Aug 2007
    Location
    Orange County, Southern California
    Posts
    583
    Quote Originally Posted by Manicdan View Post
    anyone else notice he said we should see a fermi x2 within a few weeks of fermi's release?



    100% not possible.
    How is it not possible to take two performance chips and slap them both on a single PCB? We've already seen it done three times (GX2s, GTX 295s).
    EVGA X58 SLI Classified E759 Limited Edition
    Intel Core i7 Extreme 980X Gulftown six-core
    Thermalright TRUE Copper w/ 2x Noctua NF-P12s (push-pull)
    2x EVGA GeForce GTX 590 Classified [Quad-SLI]
    6GB Mushkin XP Series DDR3 1600MHz 7-8-7-20
    SilverStone Strider ST1500 1500W
    OCZ RevoDrive 3 240GB 1.0GB/s PCI-Express SSD
    Creative X-Fi Fatal1ty Professional / Logitech G51 5.1 Surround
    SilverStone Raven RV02
    Windows 7 Ultimate x64 RTM



  14. #14
    Xtreme Enthusiast
    Join Date
    Jun 2005
    Posts
    960
    Quote Originally Posted by AuDioFreaK39 View Post
    How is it not possible to take two performance chips and slap them both on a single PCB? We've already seen it done three times (GX2s, GTX 295s).
    PCI-E power limitation, for instance.

  15. #15
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Piotrsama View Post
    PCI-E power limitation, for instance.
    That's what I was thinking. I'm expecting this to use 400W unless massively underclocked.
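    For anyone wanting to sanity-check the power argument, here is a rough back-of-the-envelope sketch in Python. The 75W slot, 75W 6-pin and 150W 8-pin limits are the PCI Express spec figures; the per-chip TDP values and the 30W board overhead are placeholder assumptions, since Fermi's real power draw was not public at this point.

# Back-of-the-envelope check of the dual-GPU power-budget argument.
# The connector limits are from the PCI Express spec; chip TDPs are assumed.

SLOT_W = 75        # power available from the PCIe slot itself
SIX_PIN_W = 75     # per 6-pin PEG connector
EIGHT_PIN_W = 150  # per 8-pin PEG connector

def board_budget(six_pins: int, eight_pins: int) -> int:
    """Maximum board power allowed by the spec for a given connector layout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

def per_chip_fraction(single_chip_tdp: float, budget: float, overhead_w: float = 30) -> float:
    """Fraction of its full power each GPU could draw on a dual-chip board.

    overhead_w is an assumed allowance for memory, VRMs and the bridge chip;
    power is treated as scaling roughly with clocks for this crude estimate.
    """
    per_chip = (budget - overhead_w) / 2
    return min(1.0, per_chip / single_chip_tdp)

if __name__ == "__main__":
    budget = board_budget(six_pins=1, eight_pins=1)  # a 6+8-pin card tops out at 300 W
    for assumed_tdp in (190, 225, 250):              # hypothetical single-chip TDPs in watts
        frac = per_chip_fraction(assumed_tdp, budget)
        print(f"{assumed_tdp} W chip: each GPU limited to ~{frac:.0%} of full power")

    On those assumed numbers, two full performance chips only fit under the 300W ceiling if each is cut back to roughly 55-70% of its normal power, which is the "massively underclocked" scenario being discussed above.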

  16. #16
    Registered User
    Join Date
    Mar 2008
    Location
    smallbany, ny
    Posts
    88
    Quote Originally Posted by AuDioFreaK39 View Post
    How is it not possible to take two performance chips and slap them both on a single PCB? We've already seen it done three times (GX2s, GTX 295s).
    weren't the GX2s dual-PCB, not single?
    asus p5q pro p45
    e8400 @ 3.6
    g.skill 4 x 2 gb ddr2 1066
    gigabyte 4850

  17. #17
    Xtreme Enthusiast
    Join Date
    Aug 2007
    Location
    Orange County, Southern California
    Posts
    583
    Quote Originally Posted by Flanman View Post
    weren't the GX2s dual-PCB, not single?
    The latest dual chip card is single-PCB, but that's not the point.
    EVGA X58 SLI Classified E759 Limited Edition
    Intel Core i7 Extreme 980X Gulftown six-core
    Thermalright TRUE Copper w/ 2x Noctua NF-P12s (push-pull)
    2x EVGA GeForce GTX 590 Classified [Quad-SLI]
    6GB Mushkin XP Series DDR3 1600MHz 7-8-7-20
    SilverStone Strider ST1500 1500W
    OCZ RevoDrive 3 240GB 1.0GB/s PCI-Express SSD
    Creative X-Fi Fatal1ty Professional / Logitech G51 5.1 Surround
    SilverStone Raven RV02
    Windows 7 Ultimate x64 RTM



  18. #18
    Registered User
    Join Date
    Mar 2008
    Location
    smallbany, ny
    Posts
    88
    Quote Originally Posted by AuDioFreaK39 View Post
    The latest dual chip card is single-PCB, but that's not the point.
    I realize that much, but you claimed it has already been done three times, using the GX2s as an example. Was just pointing it out.
    asus p5q pro p45
    e8400 @ 3.6
    g.skill 4 x 2 gb ddr2 1066
    gigabyte 4850

  19. #19
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    What was the most power-hungry single card that was turned into a dual, and should the G300 be higher or lower in TDP than that card?

  20. #20
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Posts
    614
    Well now the question becomes: will this delay 5870X2? Not that there's that much going on in the gaming market by Q1's end.
    Modded Cosmos. | Maximus II Formula. Bios 1307| 2x2 Mushkin XP ASCENT 8500 | Q9550-E0- 4.10 + TRUE | Visiontek HD4870X2 | LN32A550 1920x1080 | X-FI Extreme Gamer | Z5300E | G15v.1 | G7 | MX518 | Corsair HX1000 | X25-MG2 80G | 5xHDD
    ____________________________________
    Quote Originally Posted by saaya View Post
    most people dont care about opencl, physix, folding at home and direct compute... they want cool explosions and things blowing up and boobs jumping around realistically... .

  21. #21
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by 003 View Post
    That's not the only reason. Another reason is they want more people to buy the more expensive high end part. GT200 is very flexible as well. Look at the GT216 (used in GT220) with its ~100mm^2 die size and DX10.1 support.
    yeah, very flexible... it only took them 14 months to come up with GT216 after GT200 was launched... that's amazingly flexible!

    seriously, bad example, man...

  22. #22
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Quote Originally Posted by AuDioFreaK39 View Post
    The latest dual chip card is single-PCB, but that's not the point.
    The point, if anything, is that all of Nvidia's dual-GPU cards came out much later than the launch of the initial architecture (e.g. G80 then G92 for the 9800 GX2, GT200 then GT200b for the GTX 295). If they truly wanted to offer a dual-GPU card early on they *could*, but on the current process it would be crippled and running at heavily reduced clocks, which is less than optimal (plus that approach leaves no room to counter the competition if they decided to do a refresh themselves). I fully expect Nvidia to market the living crap out of CUDA like never before, given they won't be able to claim the performance crown until a dual-GPU card is feasible. If you can't beat the competition, just convince everyone that your product is better... for reasons they likely don't need to care about XD

    I am expecting a potential November paper launch with late December availability in the *best* case (more likely January, though).

    Still, they took way longer to scale down GT200, so if this is true, that is good news for them. G92 has lasted them the longest of any architecture ever but they can't beat that dead horse any longer.
    Last edited by Chickenfeed; 10-14-2009 at 06:57 PM.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  23. #23
    Banned
    Join Date
    Feb 2009
    Posts
    165
    Quote Originally Posted by RagzaroK View Post
    Well now the question becomes: will this delay 5870X2? Not that there's that much going on in the gaming market by Q1's end.
    Highly doubt it. It would be a smart move to take advantage of the lead ATi has at the moment. Why save your enthusiast chip until your competitor releases theirs? Use the time to attract impatient enthusiasts from the green side and potentially increase the number of loyal customers.

    At least, that's how I see it.

  24. #24
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by saaya View Post
    yeah, very flexible... it only took them 14 months to come up with GT216 after GT200 was launched... that's amazingly flexible!

    seriously, bad example, man...
    Uhh no not really. It's a great example. Nvidia really hasn't had a need to shrink down the GT200 because of G92, but when the OEMs wanted DX10.1 support to have on paper, it was easier to rework GT200 to support it.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  25. #25
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by 003 View Post
    Uhh no not really. It's a great example. Nvidia really hasn't had a need to shrink down the GT200 because of G92, but when the OEMs wanted DX10.1 support to have on paper, it was easier to rework GT200 to support it.
    right, but OEMs wanted that last Christmas, or maybe in early 2009, and it's not like they only told them then; they probably told Nvidia in spring 2008 that they wanted 10.1 in winter 2008 or spring 2009... but it took Nvidia until NOW to finally get it done. so again, how is that flexible?

    how can you say a product that took more than a year from concept to retail is a good example of how flexible the architecture it's based on is? 2 or fewer quarters is already not exactly flexible... 1 quarter or less is kinda flexible... 2 months or less is VERY flexible...

    or what, are you claiming Nvidia only started working on GT2xx 6 months ago? sure...

    you can't really compare GT200 and GT2xx with RV870 and RV840 though, cause RV840 is a cut-down RV870 while GT2xx is a reworked GT200... they added a new IMC, DX10.1 and probably tweaked something else... that's obviously going to take a lot more work and time to do... the problem is that Nvidia hasn't done any cut-down chips in ages, they always did reworks... they always changed the architecture and didn't just chop some parts off, which takes a lot more time... a chopped-down GT200 might be possible within 2 months like ATI did it, yes, GT200 MIGHT very well be very flexible... but how could we know? even Nvidia can't know, cause they never even tried it afaik... so how can you say GT200 is flexible if they never created a cut-down part of it and brought it to market quickly? they did a very slow rework... drawing a conclusion from that about how fast they could cut down a chip by chopping some bits off is impossible... but if anything you'd think it'll take them a while, as their rework wasn't exactly a fast one...

    this might be the main issue they are facing, actually, as it dragged their entire roadmap along and slowed everything down...
    instead of cutting a chip down and THEN reworking/refreshing it, they always did both at the same time... they basically tried to do a tick and a tock at the same time... and that's really stupid... both Intel and AMD learned a while ago that you really don't want a tick and a tock at the same time...
    Last edited by saaya; 10-14-2009 at 08:35 PM.
