Development has stated that once the CUDA JIT Compiler is released, it may result in a CUDA FahCore for Nvidia GPUs. Thus, it is up to the developers whether or not to use this feature.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Unified Memory is a good idea, but it's really quite different from the present memory model. Have you considered the implications for FAH if it's adopted quickly?
One of the benefits mentioned in the article is "simplified programming," but have you considered what that means? Suppose FAH adopted that simplified programming model, and suddenly all the older GPUs that require the dual memory model stopped working, or their performance was severely diminished. FAH is not in a position to write code that is not backward compatible, at least not until the older model represents the hardware of a sufficiently small number of donors, who may then be forced to upgrade.
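To make the contrast between the "dual memory model" and Unified Memory's "simplified programming" concrete, here is a minimal sketch of the two styles side by side. This is not FAH code; the kernel, names, and sizes are invented for illustration. The point is that the Unified Memory path requires CUDA 6+ and a Kepler-class or newer GPU, which is exactly the backward-compatibility concern raised above.

```cuda
// Illustrative sketch only -- `step_kernel` is a made-up stand-in
// for a real simulation step, not anything from a FahCore.
#include <cuda_runtime.h>
#include <cstdlib>

__global__ void step_kernel(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;

    // Dual (explicit-copy) model: separate host and device buffers,
    // explicit cudaMemcpy in each direction. Works on all CUDA GPUs.
    float *h = (float *)malloc(n * sizeof(float));
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    step_kernel<<<(n + 255) / 256, 256>>>(d, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    free(h);

    // Unified Memory model: one managed pointer, no explicit copies --
    // simpler code, but only on CUDA 6+ with Kepler or newer hardware.
    float *u;
    cudaMallocManaged(&u, n * sizeof(float));
    step_kernel<<<(n + 255) / 256, 256>>>(u, n);
    cudaDeviceSynchronize();  // GPU must finish before the host reads u
    cudaFree(u);
    return 0;
}
```

Dropping the explicit-copy path entirely would be the "simplified" code the article describes, and it is precisely what older GPUs could not run.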
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked-out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm case fans with silicone fan gaskets and silicone mounts (all black), a 512GB Samsung SSD (black), and a 2TB Western Digital Black HD (silver/black).
FAH is not memory-limited in performance, so this is not likely a high priority.
And as PX started to say, the GPU FahCore uses OpenCL now, not CUDA. But if there is ever a big performance gain to be had from CUDA, they will use it again. I rate this one as "not soon."