
Thread: AMD shows 28nm wafer GlobalFoundries to meet new card?

  1. #26
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    US
    Posts
    1,379
    Quote Originally Posted by trinibwoy View Post
    3870 was a midrange GPU.
    I believe you're mistaken. The 3870 was top end when it was released, and remained there until the 3870 X2 came out.

    --Matt
    My Rig :
    Core i5 4570S - ASUS Z87I-DELUXE - 16GB Mushkin Blackline DDR3-2400 - 256GB Plextor M5 Pro Xtreme

  2. #27
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    The RV670 was presented as AMD's first piece of silicon from their "sweet spot" strategy, the term for small-die designs. RV770 was slightly past this line. RV870 is so far past it that AMD had to try 40 nm on something else.

  3. #28
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by mattkosem View Post
    The 3870 was the first product on 55nm, was it not?

    --Matt
    As far as I recall, yes; it was out very shortly after the 2900.

    Quote Originally Posted by trinibwoy View Post
    3870 was a midrange GPU.
    Midrange compared to what? Performance was very close to the 2900, but it used half the power. The 4870 did beat it, and was bigger and used more power, but it came out months later.

  4. #29
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Posts
    746
    Quote Originally Posted by Manicdan View Post
    As far as I recall, yes; it was out very shortly after the 2900.

    Midrange compared to what? Performance was very close to the 2900, but it used half the power. The 4870 did beat it, and was bigger and used more power, but it came out months later.
    The 3870 came out 6 months after the 2900 and was a refresh of the R600 on 55nm. Then the 4800s came about 6-8 months after that. ATI was on a pretty consistent release cycle when the process technology was ahead of them... now it seems they're ahead of it.

  5. #30
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Manicdan View Post
    Not sure they could do that this time. They already use 128-bit on their midrange cards; they can't get much lower. If anything they might ONLY make mobile GPUs on 28nm to test it out, where 64-bit super-slow GDDR5 won't be missed as much.
    Remember, they didn't exactly go with the largest bus possible for the die size. For a 28nm shrink of Cypress they are probably aiming for ~200mm2, which leaves some room to play with the specs while still fitting a 256-bit bus on there.
    A 28nm Juniper is looking closer to ~100mm2, so they might have to throw some extra clusters on there to justify a 128-bit bus.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
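A quick back-of-the-envelope check of the die-size estimates above. The ideal-scaling formula and the 0.85 "shrink efficiency" factor here are assumptions for illustration, not anything AMD has published; the 40nm die areas used are the commonly cited ~334mm2 for Cypress and ~166mm2 for Juniper.

```python
# Rough die-area scaling for a process shrink: the area of a fixed design
# scales roughly with the square of the feature-size ratio. Real shrinks
# do worse, since I/O pads and the memory-bus perimeter don't scale, so
# we divide by an assumed "shrink efficiency" factor (0.85 is a guess).

def scaled_area(area_mm2, old_nm, new_nm, efficiency=0.85):
    """Estimate die area after shrinking a design from old_nm to new_nm."""
    ideal = area_mm2 * (new_nm / old_nm) ** 2
    return ideal / efficiency  # imperfect scaling inflates the ideal area

for name, area in [("Cypress", 334.0), ("Juniper", 166.0)]:
    est = scaled_area(area, 40, 28)
    print(f"{name}: {area:.0f} mm^2 @ 40nm -> ~{est:.0f} mm^2 @ 28nm")
```

With these assumptions Cypress lands at roughly 190mm2 and Juniper near 95mm2, in the same ballpark as the ~200mm2 and ~100mm2 figures above.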

  6. #31
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Well, at least they made architectural progress. NVIDIA just added more and more shaders and shrank. Now they are supposed to release a new architecture, but suddenly it's a problem; did they forget how to do it after all those years, or what?

    On the other hand, AMD does weird things too: they try new things with every new line and sometimes just throw them away in the very next one. It's like they've just been experimenting.

    As for 28 nm, I don't think AMD will do a plain shrink. They don't do that; more probably they'll release new cards. A shrink alone is a waste of time, and I think they learned that from NVIDIA. I'd say the second half of 2010 is long enough after the HD 5000 launch when you compare it with the HD 3000/HD 4000/HD 5000 cadence.
    Last edited by Behemot; 01-12-2010 at 01:19 PM.

  7. #32
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by Behemot View Post
    Well, at least they made architectural progress. NVIDIA just added more and more shaders and shrank. Now they are supposed to release a new architecture, but suddenly it's a problem; did they forget how to do it after all those years, or what?
    I don't agree. I think it was more like AMD who did that: take 320 SPs, first shrink, then shrink + 800 SPs, then shrink + 1600 SPs.

    Nvidia didn't even bother adding SPs to the 8800 GTX until the GT200 series; they just shrank once. Of course, the fact that they didn't add shaders doesn't mean they improved a lot architecturally.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  8. #33
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Would a shrink of R800 really give them any information on how to build R900 at 28nm?

  9. #34
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Vienna, Austria
    Posts
    1,940
    Quote Originally Posted by annihilat0r View Post
    I don't agree. I think it was more like AMD who did that: take 320 SPs, first shrink, then shrink + 800 SPs, then shrink + 1600 SPs.

    Nvidia didn't even bother adding SPs to the 8800 GTX until the GT200 series; they just shrank once. Of course, the fact that they didn't add shaders doesn't mean they improved a lot architecturally.
    Not really.

    AMD added DirectX 10.1, then 11, in the process; made huge improvements in OpenCL performance; added additional display support; redesigned the UVD and the power savings; and added GDDR5.

    AMD: DX10 > 10.1 > 11 (2900 - 3870/4870 - 5870)
    Nvidia: DX10 > 10 > 10 (8800 - 9800 - 280)

    The last real core improvement from Nvidia was G80 to G92, with much faster CUDA performance (G80 sucks in CUDA).

    I just wish they would finally come out with Fermi, but considering there has been no news about it for almost two weeks, things don't look too good...
    Core i7 2600k|HD 6950|8GB RipJawsX|2x 128gb Samsung SSD 830 Raid0|Asus Sabertooth P67
    Seasonic X-560|Corsair 650D|2x WD Red 3TB Raid1|WD Green 3TB|Asus Xonar Essence STX


    Core i3 2100|HD 7770|8GB RipJawsX|128gb Samsung SSD 830|Asrock Z77 Pro4-M
    Bequiet! E9 400W|Fractal Design Arc Mini|3x Hitachi 7k1000.C|Asus Xonar DX


    Dell Latitude E6410|Core i7 620m|8gb DDR3|WXGA+ Screen|Nvidia Quadro NVS3100
    256gb Samsung PB22-J|Intel Wireless 6300|Sierra Aircard MC8781|WD Scorpio Blue 1TB


    Harman Kardon HK1200|Vienna Acoustics Brandnew|AKG K240 Monitor 600ohm|Sony CDP 228ESD

  10. #35
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Manicdan View Post
    Would a shrink of R800 really give them any information on how to build R900 at 28nm?
    Depends on the architecture, and also on whether they are refreshing at the end of the year at 28nm and then releasing a new architecture in 2011. It could go either way.

    The real question is, do they really want to try and release a completely new architecture on a new process node with no small test GPUs first?

  11. #36
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Burbank, CA
    Posts
    563
    Quote Originally Posted by mattkosem View Post
    I believe you're mistaken. The 3870 was top end when it was released, and remained there until the 3870 X2 came out.

    --Matt
    The 3870 was never a high-end card. It might have been to ATI, but the rest of the enthusiast world did not consider it one. The 8800GT was beating it hands down in every game tested, and the 8800GT was a $200 card at the time. The 3800 series still struggled with AA.

  12. #37
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Posts
    511
    ^ Regardless of what it was, ATI produced the 3870 as their high-end card. Just like with their CPUs now: just because they aren't beating Intel's competition doesn't mean the Black Editions aren't their high-end CPUs.

  13. #38
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by LordEC911 View Post
    Depends on the architecture, and also on whether they are refreshing at the end of the year at 28nm and then releasing a new architecture in 2011. It could go either way.

    The real question is, do they really want to try and release a completely new architecture on a new process node with no small test GPUs first?
    And from a new plant. It's a new everything, except the word ATI on the front.

  14. #39
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    US
    Posts
    1,379
    Quote Originally Posted by HelixPC View Post
    The 3870 was never a high-end card. It might have been to ATI, but the rest of the enthusiast world did not consider it one. The 8800GT was beating it hands down in every game tested, and the 8800GT was a $200 card at the time. The 3800 series still struggled with AA.
    It was indeed high end in the ATI lineup, replacing the 2900XT. The status of Nvidia's products at the time is irrelevant. The 5870 doesn't change the position of the GTX 285 in Nvidia's lineup, does it? Nor does the i7 change the position of the Phenom II line. Each manufacturer has its own product line, and at the time the 3870 was at the top of ATI's.

    --Matt

  15. #40
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,219
    Quote Originally Posted by mattkosem View Post
    It was indeed high end in the ATI lineup, replacing the 2900XT. The status of Nvidia's products at the time is irrelevant.

    --Matt
    That's awfully presumptuous. The decision of what product to test on a new process could have been any of the following:

    1) Mainstream by own company standards (internal product selection)
    2) Mainstream by global standards (market-based product selection if they're aiming for a certain kind of product range)
    3) Mainstream by die size (technical product selection)

    Nobody knows which of these is true, and #3 is the most likely.

    Try it on something small/easier and also cheap. Then, sell it as a market mainstream so it actually makes you some money that late in the product generation cycle. That's what I would do in that position as it makes a lot of sense both from a business and technical point of view compared to other options. It would hardly make sense to shrink a massive chip to a new process late in the cycle unless they're still selling them by the truckload. Mainstream parts move a lot more units.
    Last edited by Particle; 01-12-2010 at 02:51 PM.
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Rule 3:
    When it comes to computer news, 70% of Internet rumors are outright fabricated, 20% are inaccurate enough to simply be discarded, and about 10% are based in reality. Grains of salt--become familiar with them.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

    Random Tip o' the Whatever
    You just can't win. If your product offers feature A instead of B, people will moan how A is stupid and it didn't offer B. If your product offers B instead of A, they'll likewise complain and rant about how anyone's retarded cousin could figure out A is what the market wants.

  16. #41
    Xtreme Addict
    Join Date
    Jan 2009
    Posts
    1,445
    Quote Originally Posted by Particle View Post
    That's awfully presumptuous. The decision of what product to test on a new process could have been any of the following:

    1) Mainstream by own company standards (internal product selection)
    2) Mainstream by global standards (market-based product selection if they're aiming for a certain kind of product range)
    3) Mainstream by die size (technical product selection)

    Nobody knows which of these is true, and #3 is the most likely.

    Try it on something small/easier and also cheap. Then, sell it as a market mainstream so it actually makes you some money that late in the product generation cycle. That's what I would do in that position as it makes a lot of sense both from a business and technical point of view compared to other options. It would hardly make sense to shrink a massive chip to a new process late in the cycle unless they're still selling them by the truckload. Mainstream parts move a lot more units.

    ... I do not see how what you quoted relates to your post. Did you perhaps misquote another forum member?
    [MOBO] Asus CrossHair Formula 5 AM3+
    [GPU] ATI 6970 x2 Crossfire 2Gb
    [RAM] G.SKILL Ripjaws X Series 16GB (4 x 4GB) 240-Pin DDR3 1600
    [CPU] AMD FX-8120 @ 4.8 ghz
    [COOLER] XSPC Rasa 750 RS360 WaterCooling
    [OS] Windows 8 x64 Enterprise
    [HDD] OCZ Vertex 3 120GB SSD
    [AUDIO] Logitech S-220 17 Watts 2.1

  17. #42
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    US
    Posts
    1,379
    Quote Originally Posted by Particle View Post
    That's awfully presumptuous. The decision of what product to test on a new process could have been any of the following:

    1) Mainstream by own company standards (internal product selection)
    2) Mainstream by global standards (market-based product selection if they're aiming for a certain kind of product range)
    3) Mainstream by die size (technical product selection)

    Nobody knows which of these is true, and #3 is the most likely.

    Try it on something small/easier and also cheap. Then, sell it as a market mainstream so it actually makes you some money that late in the product generation cycle. That's what I would do in that position as it makes a lot of sense both from a business and technical point of view compared to other options. It would hardly make sense to shrink a massive chip to a new process late in the cycle unless they're still selling them by the truckload. Mainstream parts move a lot more units.
    I think you read more into that than I said. I never made any claims about their selection process. I merely pointed out the fact that the first product on the 55nm node was, at the time, the highest-end product in the ATI product line.

    The section that you quoted was in response to someone stating that the 3870 was not a high-end card, which was obviously not the case in their product line. Am I missing something?

    --Matt

  18. #43
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Quote Originally Posted by mattkosem View Post
    I merely pointed out the fact that the first product on the 55nm node was, at the time, the highest-end product in the ATI product line.
    Yeah, but it doesn't matter if it was high-end or whatever. As I pointed out, it was part of their "sweet spot" strategy; it was small (and the 55nm process had no problems like 40nm does), so there was no reason to manufacture RV620 or RV610 sooner than RV670.

    Not to mention they needed something to beat NVIDIA at the time. The HD 2900 XT was not bad compared to the GTS 320/640 MB, and the power consumption was pretty much the same, but you know, less is always better. For that reason RV670 came first.

  19. #44
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Those wafer shots show a test chip, I'm pretty sure...
    It's not a GPU or CPU; it's way too heterogeneous for that...

  20. #45
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,219
    I don't honestly understand how there can be any confusion regarding my post, but if nobody else seems to get it I'll just leave it alone.

  21. #46
    Xtreme Member
    Join Date
    Oct 2008
    Location
    Colorado
    Posts
    312
    I might think it's something AMD had GlobalFoundries do because Intel just released 32nm, so AMD can go and say "we have 28nm, in your face Intel." Remember, they had only one wafer made; I doubt anything on that wafer is working, since it is the first one. I am also pretty sure they are releasing Bulldozer, the new architecture, on 32nm, and this just says that they really have better R&D now and are truly gunning for their old crown again. It means the releases from AMD will be getting faster and faster, and also better and better, almost equaling Intel's speed. But one thing I think AMD should focus on improving is the quality of their manufacturing process; Intel still has the best manufacturing process.

  22. #47
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Bulldozer will be 32nm SOI; this has nothing to do with AMD shoving 28nm wafers in Intel's face. Stop thinking in such a childish way.

    28nm will not be aimed at CPUs. After 32nm we will probably see GF developing 22nm; otherwise we cannot imagine AMD being competitive against Ivy Bridge (the 22nm refresh of Sandy Bridge).

    PS: Intel showed 22nm wafers as well, so the smaller these companies go, the better the products we will buy will be.
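For a sense of why each full node in that 32nm > 28nm > 22nm progression matters: transistor density ideally scales with the inverse square of the feature size. Real processes fall short of this ideal, so treat these numbers as upper bounds; the formula is a textbook approximation, not a foundry figure.

```python
# Ideal transistor-density gain between two process nodes: density
# scales with the inverse square of the feature size. Real nodes
# fall short of this ideal, so these are upper bounds.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal density multiplier going from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

for old, new in [(32, 28), (32, 22), (28, 22)]:
    print(f"{old}nm -> {new}nm: up to ~{density_gain(old, new):.2f}x density")
```

Which is roughly a 1.3x bump for 32nm to 28nm (a half-node), versus over 2x for the full jump from 32nm to 22nm.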

  23. #48
    Xtreme Member
    Join Date
    Oct 2008
    Location
    Colorado
    Posts
    312
    This is something I did not know, so thank you.

  24. #49
    Xtreme Mentor
    Join Date
    Apr 2005
    Posts
    2,550
    RV670@55nm WAS NOT a shrink of R600@80nm! It was a new design. RV740@40nm WAS NOT a shrink of RV770@55nm; it was a new design. Yes, the NEW DESIGN kept the same architectural philosophy, but that's about the only thing that stayed the same.

    Besides CPUs, AMD does have APUs.
    Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.

  25. #50
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Vienna, Austria
    Posts
    1,940
    Quote Originally Posted by =SOC= Admiral View Post
    I might think it's something AMD had GlobalFoundries do because Intel just released 32nm, so AMD can go and say "we have 28nm, in your face Intel." Remember, they had only one wafer made; I doubt anything on that wafer is working, since it is the first one. I am also pretty sure they are releasing Bulldozer, the new architecture, on 32nm, and this just says that they really have better R&D now and are truly gunning for their old crown again. It means the releases from AMD will be getting faster and faster, and also better and better, almost equaling Intel's speed. But one thing I think AMD should focus on improving is the quality of their manufacturing process; Intel still has the best manufacturing process.
    AMD showed 28nm SRAM test wafers several months ago, so I'm pretty sure they can produce working ICs by now.

