AMD to support Nvidia's CUDA technology?
Moderator: Site Moderators
-
- Posts: 50
- Joined: Tue Apr 14, 2009 6:51 am
- Hardware configuration: Intel Core i3 2100 3092.91 MHz (99.77 x 31.0)
- Location: Indonesia
- Contact:
AMD to support Nvidia's CUDA technology?
Look at this article: http://www.techradar.com/news/computing ... gy--612041
Folding@Home user since Feb 2009
-
- Posts: 120
- Joined: Fri Jan 25, 2008 3:20 am
- Hardware configuration: Q6600 | P35-DQ6 | Crucial 2 x 1 GB ram | VisionTek 3870
GPU2 Version 6.20 | three CPU Version 6.20 clients
Re: AMD to support Nvidia's CUDA technology?
Interesting article and equally interesting response from AMD.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm case fans with silicone fan gaskets and silicone mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: AMD to support Nvidia's CUDA technology?
IIRC, this idea has been around for about a year now. It may run that way, but it's unlikely that it would be able to take advantage of hardware-specific features, and it would therefore run slower.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: AMD to support Nvidia's CUDA technology?
A bit of history:
Intel developed something called Katmai for their Pentium III. Meanwhile, AMD developed 3DNow! and 3DNow!+ for the same purpose: to enable the CPU to process floating-point operations in parallel. Not long after that, Intel developed a final version and called it SSE.
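To make "floating-point operations in parallel" concrete, here is a minimal sketch using Intel's SSE intrinsics (plain C; the values are made up purely for illustration and have nothing to do with the articles above):
Code: Select all
#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics */

int main(void)
{
    /* Pack four single-precision floats into each 128-bit register.
       Note _mm_set_ps takes its arguments from the high element down. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

    /* A single SSE instruction adds all four pairs at once. */
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}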
Both instruction sets were initially developed to provide a proprietary advantage and help sell their respective hardware. Eventually, AMD decided to pay Intel a licensing fee so that their hardware could be marketed as supporting the more widely adopted SSE.
If AMD eventually does support CUDA (and I know nothing more than what it says in the articles you are reading), I wonder what AMD will have to pay NV to license their proprietary technology. The real question isn't whether Brook+ or CAL or CUDA or OpenCL is the "best" one; it's all about competition and ultimately about money.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 47
- Joined: Wed Dec 05, 2007 4:31 pm
- Hardware configuration: Dual Xeon E5645 (12C/24T) / 24 GB DDR3 - VMware ESXi 6.7.0, FAH v7.5.1
- Location: London, UK
Re: AMD to support Nvidia's CUDA technology?
@Bruce, of course you are correct... but as folders we want to see CUDA on all GPUs, as it would mean a more streamlined client and cores, and, I assume, better performance on ATI cards.
Currently I get around 3,000 ppd on a 4890 with 800 shaders and 5,000 ppd on a GTX260 with 216 shaders - so the question is: would CUDA on the 4890 mean more ppd than on a GTX260?
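For scale, and treating the two vendors' "shaders" as comparable units (which they really aren't): 3,000 ppd / 800 shaders is roughly 3.75 ppd per shader on the 4890, while 5,000 ppd / 216 shaders is roughly 23 ppd per shader on the GTX260 - about a sixfold gap per shader under the current clients.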
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm case fans with silicone fan gaskets and silicone mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: AMD to support Nvidia's CUDA technology?
No. CUDA on ATI would actually run slower. Being able to run at all is far from running well, or being well optimized for speed.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: AMD to support Nvidia's CUDA technology?
Not likely
http://www.theinquirer.net/inquirer/new ... a-run-gpus
AMD's Gary Silcott told the INQ "they [Nvidia] would intentionally damage performance to make Nvidia GPUs run the same app better." Then, perhaps thinking better of accusing Nvidia of hypothetical, yet outright, sabotage, Silcott added "Even if it wasn't intentional, it would not be optimized for our instruction set architecture like our own SDK."
Transparency and Accountability, the necessary foundation of any great endeavor!
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm case fans with silicone fan gaskets and silicone mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: AMD to support Nvidia's CUDA technology?
AMD's Gary Silcott told the INQ: "...it would not be optimized for our instruction set architecture like our own SDK."
Hmmm... sounds familiar.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 87
- Joined: Tue Jul 08, 2008 2:27 pm
- Hardware configuration: 1x Q6600 @ 3.2GHz, 4GB DDR3-1333
1x Phenom X4 9950 @ 2.6GHz, 4GB DDR2-1066
3x GeForce 9800GX2
1x GeForce 8800GT
CentOS 5 x86-64, WINE 1.x with CUDA wrappers
Re: AMD to support Nvidia's CUDA technology?
The problem is that nVidia and ATI architectures are fundamentally different. nVidia is MIMD, ATI is SIMD. In a nutshell:
MIMD:
Pros: Easier to write a decently optimizing compiler. Less dependent on programmer competence.
Cons: Requires more silicon to implement, and is thus more expensive, with worse performance per watt.
SIMD:
Pros: Cheaper, better performance per Watt.
Cons: Writing a decent vectorizing compiler is difficult - so difficult that, to the best of my knowledge, only two such compilers exist, and only one is for x86 and easily available. Even with a good compiler, a reasonable degree of programmer competence (rare and getting rarer!) is required to work with the compiler to achieve optimal results.
Now, there is no reason why it shouldn't be possible to write a SIMD-capable compiler for CUDA (CUDA essentially being C with type extensions - see the sketch below), but this would effectively lead to ATI having to keep up with nVidia's language feature creep, which puts them at a disadvantage when they actually have superior hardware. And as I already pointed out, ATI would have a much harder job keeping up than nVidia, because vectorization is more difficult to do than parallelization. AMD's contribution to vectorizing compilers is based around GCC, and GCC's vectorization capabilities were embarrassingly poor.
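To give a rough idea of what "C with type extensions" means in practice, here is a minimal CUDA sketch (the saxpy kernel name and all sizes are made up for illustration; this is ordinary CUDA C, not anything FAH-specific):
Code: Select all
#include <stdio.h>
#include <cuda_runtime.h>

/* __global__, blockIdx/blockDim/threadIdx, and the <<<...>>> launch
   syntax are the CUDA extensions; everything else is plain C. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; /* one element per thread */
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float hx[1024], hy[1024];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    /* Allocate device memory and copy the inputs over. */
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* Launch 4 blocks of 256 threads: 1024 threads, one per element. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %g\n", hy[0]); /* 2*1 + 2 = 4 */

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}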
This is one of the reasons I have relatively high expectations of Larrabee. In terms of raw hardware, it probably won't be as capable as ATI's GPUs, but ICC may well make up the difference.
-
- Posts: 47
- Joined: Wed Dec 05, 2007 4:31 pm
- Hardware configuration: Dual Xeon E5645 (12C/24T) / 24 GB DDR3 - VMware ESXi 6.7.0, FAH v7.5.1
- Location: London, UK
Re: AMD to support Nvidia's CUDA technology?
Yeah, I guess we are hoping for a bit too much... It still amazes me that after so long, the "superior" graphics hardware is running at a disadvantage in FAH.
Re: AMD to support Nvidia's CUDA technology?
Russ_64 wrote: Yeah, I guess we are hoping for a bit too much... It still amazes me that after so long, the "superior" graphics hardware is running at a disadvantage in FAH.
I don't think that the word "disadvantage" is appropriate. CUDA is superior when you're programming NV hardware, but it would not produce superior performance if it were adapted to ATI. The two hardware platforms are really rather different, and the two software implementations are also rather different. I'm not ready to declare that either one has an overall advantage over the other.
Things may change when (and if) OpenCL becomes a reliable alternative, but until then, the jury is still out.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 10
- Joined: Sun Sep 06, 2009 2:24 am
Re: AMD to support Nvidia's CUDA technology?
ATi has CAL and Brook+. CUDA is for nVidia.
-
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: AMD to support Nvidia's CUDA technology?
From what I read, OpenCL might replace both CAL and CUDA.