43 Comments
klatscho - Wednesday, December 23, 2015 - link
Great analysis - and I'd agree, they are just making the best move at this time, but I'd be surprised if this was planned back when they announced Gemini.
jjj - Wednesday, December 23, 2015 - link
If they had FinFET high-end desktop parts coming soon, they wouldn't bother with this card, so that's the tragic part - nothing fun for a while from AMD. Parts at this price have no real relevance anyway, so they can do whatever. As for Oculus, their product and policies keep getting worse, so AMD would be better off exploring more viable alternatives. On the other hand, Oculus should stay away from this card too, since it would undermine sales by suggesting one has to spend an absurd amount of money for quality VR.
As for your reasoning, it collides a bit with what Oculus claims about crippling games to work well on recommended hardware - unless they got wiser since then.
extide - Wednesday, December 23, 2015 - link
I think they are using it more as a developer card for VR -- consumers will be pointed to the new 16FF cards later.
boozed - Wednesday, December 23, 2015 - link
"With winter upon us"It's summer here, mate.
Ryan Smith - Wednesday, December 23, 2015 - link
AnandTech is headquartered in North America, buddy.
Qasar - Wednesday, December 23, 2015 - link
Too bad NVIDIA and RTG can't go back to the 3dfx way of multi-GPU rendering...
Syran - Wednesday, December 23, 2015 - link
IIRC, 3dfx's SLI was AFR.
Spoelie - Wednesday, December 23, 2015 - link
Actually, no. SLI is short for Scan Line Interleave, where each card was rendering alternate lines of the same frame.
The rendering pipeline was much more rigid back then, so this wasn't an issue for the games at the time.
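A rough sketch of what that split looks like, with an arbitrary frame height just to illustrate the alternating-lines idea (nothing 3dfx-specific here):

```python
# Illustrative only: 3dfx-style Scan Line Interleave splits every frame between
# two cards by alternating horizontal lines, instead of taking turns on whole
# frames the way AFR does.
FRAME_HEIGHT = 480  # hypothetical frame height

def scanline_owner(line):
    """GPU 0 rasterises the even scanlines, GPU 1 the odd ones."""
    return line % 2

assignments = {0: [], 1: []}
for line in range(FRAME_HEIGHT):
    assignments[scanline_owner(line)].append(line)

# Each card ends up drawing exactly half of every single frame.
assert len(assignments[0]) == len(assignments[1]) == FRAME_HEIGHT // 2
print(assignments[0][:4], assignments[1][:4])  # [0, 2, 4, 6] [1, 3, 5, 7]
```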
Spoelie - Wednesday, December 23, 2015 - link
NVIDIA's SLI is actually a "backronym" - they got the term & "mindshare" with the acquisition of 3dfx, but the implementation worked differently, so SLI was "rechristened" Scalable Link Interface.
AndrewJacksonZA - Wednesday, December 23, 2015 - link
And you have sweating-because-of-the-heat loyal readers all over the world, boet. :-)
boozed - Wednesday, December 23, 2015 - link
Sure, but you're not a print magazine with distribution only to the US; you're a website with global reach (and localised advertising) and where you're based is therefore almost immaterial.
It's nothing to get bent out of shape over obviously, I just like poking fun at northern hemisphere publications (which, to be fair, are almost always US American) referring to release dates in terms of seasons rather than something unambiguous like simply using a calendar.
Your technical analysis and engagement with your readers are appreciated.
Ryan Smith - Wednesday, December 23, 2015 - link
"It's nothing to get bent out of shape over obviously,"Oh to be clear I'm not. Buddy was just the most comparable thing I could come up with for "mate".=P
boozed - Wednesday, December 23, 2015 - link
Whoops. I actually meant that I wasn't taking it too seriously! Cheers.
Yojimbo - Wednesday, December 23, 2015 - link
You must be so triggered, mate.
Samus - Thursday, December 24, 2015 - link
He's not your buddy, guy.
doggface - Sunday, December 27, 2015 - link
Haha. Kudos for the SP ref.
Frenetic Pony - Wednesday, December 23, 2015 - link
Delays and yield problems to the... whatever! Wonder how long it's even going to be solid for, considering 14nm/16nm GPUs from both AMD and Nvidia might be out around Computex in June.
Kjella - Wednesday, December 23, 2015 - link
Nobody ever made money in the computer business by intentionally delaying anything; time to market is almost everything. In 2015 a dual Fury would have been impressive, in 2016 it will be old school compared to the 14/16nm chips we're expecting, like AMD Arctic Islands and nVidia Pascal. The opportunity came and went, but like most, it passed AMD by...
jasonelmore - Saturday, December 26, 2015 - link
Those 16nm GPUs won't be out till late June or early July... VR comes out in March.
hero4hire - Saturday, January 9, 2016 - link
Too bad for the uneducated buyer getting Gemini an entire 3 months before next gen. I'm sure there is an idiot out there with money, but who would you seriously recommend this card for? 16nm Gemini or DOA.
The_Assimilator - Wednesday, December 23, 2015 - link
I will be very surprised if Gemini is actually ever delivered. End of 2015 was already pushing it; Q1 2016 means it's going to be arriving on outdated 28nm tech just as 14nm Pascal makes its debut in Q2. Assuming no delays on nVidia's part, and that they don't release a lemon, that gives Gemini 3 months in the spotlight, tops. That's just not enough for an expensive dual-GPU card.
Re the article, I don't think it's fair to include Arkham Knight in a list of "Fall 2015 New Games". That game's been out since early 2015 and is so horribly broken I get the feeling both nVidia and AMD have washed their hands of it. As for Fallout 4 and MGS V, they're locked to 60FPS because they're console ports.
The_Assimilator - Wednesday, December 23, 2015 - link
The_Assimilator - Wednesday, December 23, 2015 - link
Ugh, 16nm Pascal.
Vayra - Thursday, December 24, 2015 - link
No, Fallout 4 is not FPS locked at all; the engine just doesn't like frame variance too much.
edzieba - Wednesday, December 23, 2015 - link
On the flipside, multi-GPU VR has to be designed in during game engine development, optimised by the game developer per game (to avoid introducing extra latency rather than reducing it), and mainly accelerates pixel-based operations rather than geometry and lighting operations. The 'per eye' resolution of upcoming consumer headsets (1080x1200 for both the Vive and Rift CV1) is relatively low. You can 'push up' pixel operations by using SSAA rather than MSAA (MSAA is effectively a requirement for VR to prevent highly visible aliasing), but that's more using brute force as an excuse for overspeccing.
Worse, the design requirements for VR (very fine up-close detail, 'cheats' like normal mapping not working well in stereo) put emphasis on geometric detail, something that multi-GPU VR does not effectively accelerate (you need to duplicate most or all of that work, or you're stuck wasting precious latency talking over the PCIe bus). You also have a fixed (albeit small) latency overhead due to having to copy the buffer from one card to another before compositing the two buffers for output.
VR multi-GPU has the same or worse issues as currently exist with multi-GPU compatibility, and even in the optimum case it can't approach perfect scaling.
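To put a ballpark figure on that fixed copy overhead, here's a toy model - every number in it is an assumption rather than a measurement:

```python
# Toy model (assumed numbers, not measurements): with one GPU per eye, the
# second card's eye buffer has to cross the PCIe bus to the card driving the
# headset before the two views can be composited and scanned out.
EYE_W, EYE_H = 1080, 1200     # per-eye panel resolution of the Rift CV1 / Vive
BYTES_PER_PIXEL = 4           # assuming an RGBA8 colour buffer
PCIE_GB_PER_S = 12.0          # assumed usable PCIe 3.0 x16 transfer rate

buffer_bytes = EYE_W * EYE_H * BYTES_PER_PIXEL
copy_ms = buffer_bytes / (PCIE_GB_PER_S * 1e9) * 1e3

both_eyes_one_gpu_ms = 10.0   # hypothetical time for one GPU to render both eyes
split_ms = both_eyes_one_gpu_ms / 2 + copy_ms  # perfect 2-way split plus the copy

print(f"eye buffer ~{buffer_bytes / 1e6:.1f} MB, PCIe copy ~{copy_ms:.2f} ms")
print(f"one GPU: {both_eyes_one_gpu_ms:.1f} ms, split + copy: {split_ms:.2f} ms")
```

Even with those generous assumptions the copy isn't free, and none of this touches the duplicated geometry work.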
owan - Wednesday, December 23, 2015 - link
Wtf is RTG? Can we just call it AMD instead of the acronym nobody knows for the stupid name they gave an internal division? It's confusing and pointless.
testbug00 - Thursday, December 24, 2015 - link
Radeon Technologies Group. It is the "internal division" which now runs ALL of the graphics stuff. Think of it as a sub-company. Calling it RTG is completely fine.
owan - Monday, January 4, 2016 - link
I know exactly what it stands for, but the point is that it's a stupid new acronym that AMD has created to separate their GPU division from the lifeless husk of their CPU division, to prepare for the inevitable spin-off. Even so, the AMD brand (or ATI) has a lot more meaning to me than "RTG".
jasonelmore - Saturday, December 26, 2015 - link
Dollars to doughnuts AMD PR is hammering "RTG" into all tech writers' heads... they are probably requiring it or else they get no samples.
Ryan Smith - Tuesday, December 29, 2015 - link
When I asked the RTG what they would prefer we use, they said that they'd prefer we use their name as opposed to AMD.
The fact of the matter is that there's a distinct lack of "AMD" in the latest RTG slide decks, and that's intentional on their part.
hero4hire - Saturday, January 9, 2016 - link
However a company wishes to brand itself, it takes time for us to absorb the new name. Out of respect for the about-to-be-spun-off RTG, I understand using their preferred name. But for the sake of the readers, please reference AMD/RTG as you did in the past during the ATI/AMD change. It is confusing to have to translate.
MastermindX - Monday, December 28, 2015 - link
Are you sure you wouldn't prefer we used ATI instead?
TheinsanegamerN - Wednesday, December 23, 2015 - link
AMD still can't get a product to market on time, it seems. A dual Fury had potential, but Q1 2016 is too close to the 16/14nm GPUs. It will look like antiquated junk compared to the newest stuff.
Let's hope AMD doesn't do the same thing with Zen and the R400 series. They really need to get their act together.
Vayra - Thursday, December 24, 2015 - link
Vayra - Thursday, December 24, 2015 - link
As long as DX12-based games are still in mid-air it really doesn't matter. We are still looking at old junk across the board, let's face it.
BrokenCrayons - Wednesday, December 23, 2015 - link
In light of the cost of high-end GPUs (worse when you're talking about multi-GPU setups), the ever-present problems that have plagued such setups over the years, and DX12 shifting the responsibility of baking in such support onto the shoulders of highly cost-sensitive game developers, I just don't see much market appeal for any multi-GPU scenario outside of heavy-lifting rendering. VR isn't going to do it. As much wow-factor as the technology promises, there are a lot of reasons for consumers to avoid it that will act as barriers preventing companies like Facebook from selling large enough numbers of Rift headsets to appeal to those same cost-sensitive developers. In order to land adequate sales, games need to run well on 2+ year old Intel graphics and have a port for phones and tablets, or devs are shutting themselves out of the largest chunk of their potential market. While I know Angry Birds isn't exactly a AAA title, games of that ilk prove there's huge potential, and titles that require high-end and expensive GPU arrays to run well won't exploit that potential, leaving a lot of profit on the table. It raises the question of why anyone is investing money in developing things like Gemini.
jimjamjamie - Wednesday, December 23, 2015 - link
Regarding FinFET GPUs, is there any reason we couldn't get something like a monster Hawaii chip but just die-shrunk? Surely even without arch improvements the power savings would be worth it?
dragonsqrrl - Wednesday, December 23, 2015 - link
Cost? Pointlessness? Cost? Of course that's assuming AMD has a lineup in development based on a new architecture with feature and efficiency improvements. Otherwise sure, just die-shrink your existing GPUs, although if that were to happen it wouldn't bode well for AMD's financial situation. Of course it's cheaper to simply die-shrink existing GPUs, but assuming AMD still has the budget to iterate on their architecture, it just wouldn't make sense to devote resources to a die shrink unless the new architecture is either really far out, or there just aren't enough resources to bring GPUs based on the new architecture to market at every price tier (unfortunately AMD's already been forced to do this).
Vayra - Thursday, December 24, 2015 - link
Hawaii isn't that monstrous since Maxwell popped up. R9 290(X) is close to midrange these days.
jasonelmore - Saturday, December 26, 2015 - link
You better believe AMD will do this, and make 3 different product lines with that same GPU die, and spread it out over 3 years.
AMD over-milks every GPU die they have ever made.
stephenbrooks - Wednesday, December 23, 2015 - link
stephenbrooks - Wednesday, December 23, 2015 - link
I don't understand the emphasis on AFR. Sure, it doubles frame rate, but it does nothing for latency. I preferred the idea of splitting the screen in two and putting half on each GPU. Or maybe rendering the same screen twice with a tiny offset to give SSAA.
PrinceGaz - Thursday, December 24, 2015 - link
Agreed. SFR seems the obvious solution, and most of the problems with differing workloads for each card can be avoided by splitting the frame down the middle into left and right halves rather than top and bottom. Two cards each working on 1936x2160 have to be a lot faster than one card doing the full 3840x2160 of a 4K display (I'm aware 1920x2160 would be sufficient, but there might be artifacts where the two edges join, so I added an extra 16 pixels of overlap in the rendering, which is then discarded).
Using two GPUs for SSAA is costly, but it is definitely a good use of the extra power if the framerate doesn't need to be, or can't be, improved. I always use it with older games provided it doesn't have unwanted side-effects such as blurred text overlays.
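To make that concrete, here's a quick sketch of the region maths (the 16-pixel margin is just my own guess at a safe overlap, not anything standardised):

```python
# Each GPU renders its half of a 4K frame plus a small shared margin; the
# overlapping columns are thrown away when the halves are stitched together.
FRAME_W, FRAME_H = 3840, 2160   # 4K target
OVERLAP = 16                    # extra columns both cards render, then discard

def split_regions(width, height, overlap):
    """Return the (x0, x1, y0, y1) region each of the two GPUs renders."""
    half = width // 2
    left = (0, half + overlap, 0, height)        # GPU 0: left half plus margin
    right = (half - overlap, width, 0, height)   # GPU 1: right half plus margin
    return left, right

for name, (x0, x1, y0, y1) in zip(("GPU 0", "GPU 1"),
                                  split_regions(FRAME_W, FRAME_H, OVERLAP)):
    print(f"{name} renders {x1 - x0}x{y1 - y0}")  # 1936x2160 each
```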
ToTTenTranz - Wednesday, December 23, 2015 - link
As a dual-GPU user myself, I feel obligated to voice an opinion that is opposite to Ryan's statement about the 60FPS lock taking away the advantage of multi-GPUs.
I'm not a professional gamer or even a competitive one. I couldn't care less for refresh rates over 60Hz.
I own two Radeon 290X cards because I want to experience every game as maxed out as possible at 1440p and a solid 60 FPS. This means using high antialiasing sample counts, adaptive multisampling, alpha-to-coverage where applicable, and even supersampling or virtual resolutions.
I can't have that with any single card at the moment, but I can with two R9 290Xs, which cost less than the price of a single 980 Ti.
will1956 - Thursday, December 24, 2015 - link
Cheers, you just convinced me to get a single 980 Ti rather than twin 970s in SLI, for 1080p gaming and an Oculus Rift headset next year.
jasonelmore - Saturday, December 26, 2015 - link
Just say AMD, man... took me 10 minutes to figure out what RTG stood for.