Apple M1, M2, M3, M4

Post requests to add new GPUs to the official whitelist here.

Moderators: Site Moderators, FAHC Science Team

Artoria2e5
Posts: 7
Joined: Mon Dec 15, 2025 2:48 pm

Apple M1, M2, M3, M4

Post by Artoria2e5 »

OpenMM has had support for using OpenCL with Apple M1 since circa 2022. It would make sense for people currently crunching on their Apple Silicon machines to also use the GPU part of the chip.

While OpenCL deprecation is a concern, the API still works. According to https://github.com/openmm/openmm/issues/2489, the OpenMM developers currently see no reason to add a Metal platform to the main codebase so long as OpenCL keeps working.

There is an unofficial Metal platform plugin which runs faster, but it's unmaintained and (again) not necessary for OpenMM to work on macOS. So my recommendation is to just ship ARM64+OpenCL versions of these cores.
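
For anyone who wants to sanity-check this locally, here is a minimal, untested sketch against OpenMM's C++ API (property names from memory; it assumes the headers are installed and the OpenCL plugin sits in the default plugins directory):

Code: Select all

// Force the OpenCL platform on a trivial one-particle system and report
// which platform the Context actually picked. Untested sketch.
#include <OpenMM.h>
#include <iostream>
#include <vector>

int main() {
  using namespace OpenMM;

  // The OpenCL platform lives in a plugin, not the core library.
  Platform::loadPluginsFromDirectory(Platform::getDefaultPluginsDirectory());

  System system;
  system.addParticle(39.948);          // one argon atom, mass in amu
  VerletIntegrator integrator(0.001);  // 1 fs step

  Platform &platform = Platform::getPlatformByName("OpenCL");
  platform.setPropertyDefaultValue("Precision", "single");  // Apple GPUs lack fp64

  Context context(system, integrator, platform);
  context.setPositions(std::vector<Vec3>(1, Vec3(0, 0, 0)));

  State state = context.getState(State::Energy);
  std::cout << "Platform: " << context.getPlatform().getName()
            << ", E = " << state.getPotentialEnergy() << " kJ/mol\n";
  return 0;
}
If that prints OpenCL on an M-series Mac, the simulation side is already fine; what's missing is purely on the client detection/assignment end.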

FWIW, my clinfo:

Code: Select all

Number of platforms                               1
  Platform Name                                   Apple
  Platform Vendor                                 Apple
  Platform Version                                OpenCL 1.2 (Dec 13 2024 23:09:21)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_APPLE_SetMemObjectDestructor cl_APPLE_ContextLoggingFunctions cl_APPLE_clut cl_APPLE_query_kernel_names cl_APPLE_gl_sharing cl_khr_gl_event

  Platform Name                                   Apple
Number of devices                                 1
  Device Name                                     Apple M4
  Device Vendor                                   Apple
  Device Vendor ID                                0x1027f00
  Device Version                                  OpenCL 1.2
  Driver Version                                  1.2 1.0
  Device OpenCL C Version                         OpenCL C 1.2
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               10
  Max clock frequency                             1000MHz
  Device Partition                                (core)
    Max number of sub-devices                     0
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple (kernel)     32
  Preferred / native vector sizes
    char                                                 1 / 1
    short                                                1 / 1
    int                                                  1 / 1
    long                                                 1 / 1
    half                                                 0 / 0        (n/a)
    float                                                1 / 1
    double                                               1 / 1        (n/a)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (n/a)
  Address bits                                    64, Little-Endian
  Global memory size                              11453251584 (10.67GiB)
  Error Correction support                        No
  Max memory allocation                           2147483648 (2GiB)
  Unified memory for Host and Device              Yes
  Minimum alignment for any data type             1 bytes
  Alignment of base address                       32768 bits (4096 bytes)
  Global Memory cache type                        None
  Image support                                   Yes
    Max number of samplers per kernel             32
    Max size for 1D images from buffer            268435456 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                8
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Max number of constant args                     31
  Max constant buffer size                        1073741824 (1024MiB)
  Max size of kernel argument                     4096 (4KiB)
  Queue properties
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1000ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_APPLE_SetMemObjectDestructor cl_APPLE_ContextLoggingFunctions cl_APPLE_clut cl_APPLE_query_kernel_names cl_APPLE_gl_sharing cl_khr_gl_event cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_image2d_from_buffer cl_khr_depth_images

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  Apple
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [P0]
  clCreateContext(NULL, ...) [default]            Success [P0]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 Apple
    Device Name                                   Apple M4
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 Apple
    Device Name                                   Apple M4
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 Apple
    Device Name                                   Apple M4
Joe_H
Site Admin
Posts: 8289
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Studio M1 Max 32 GB smp6
Mac Hack i7-7700K 48 GB smp4
Location: W. MA

Re: Apple M1, M2, M3, M4

Post by Joe_H »

Yes, OpenCL is available, but so far there have been no plans to develop a GPU folding core for macOS. Among other issues, besides concerns over the deprecated status of OpenCL, the current methods used by the folding client to detect a GPU do not work: the GPU section of the M-series processors does not show up as a PCI device. So supporting GPU folding on the M-series chips would require writing entirely new detection code for the F@h client.

One other concern I see from the clinfo report is that it lists no support for Double-precision Floating-point. Currently all GPU folding cores for F@H enable that and use it for some critical calculations during WU processing. I don't know if that is an actual limitation of the M-series GPU sections or a failing of clinfo.
calxalot
Site Moderator
Posts: 1743
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA
Contact:

Re: Apple M1, M2, M3, M4

Post by calxalot »

This might become a good use case for the newer GROMACS core being developed, if it doesn't require the fp64 to be done by the GPU when doing hybrid CPU+GPU work. Would need lots of testing, of course.

I was told that providing engineering support to open source projects like OpenMM requires department vice president or higher approval level.
muziqaz
Posts: 2299
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 9950x, 9950x3d, 7950x3d, 5950x, 5800x3d
7900xtx, RX9070, Radeon 7, 5700xt, 6900xt, RX550, Intel B580
Location: London
Contact:

Re: Apple M1, M2, M3, M4

Post by muziqaz »

I think by now we've established that GPUs on GROMACS are not gonna happen in FAH :)
CUDA is unstable as hell
OpenCL is supported only on 10+ year old AMD GPUs
HIP requires SYCL, and that is still experimental and... yeah nah :D
FAH Omega tester
calxalot
Site Moderator
Posts: 1743
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA
Contact:

Re: Apple M1, M2, M3, M4

Post by calxalot »

It has never been tried on Apple silicon, which is not any of those things.
muziqaz
Posts: 2299
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 9950x, 9950x3d, 7950x3d, 5950x, 5800x3d
7900xtx, RX9070, Radeon 7, 5700xt, 6900xt, RX550, Intel B580
Location: London
Contact:

Re: Apple M1, M2, M3, M4

Post by muziqaz »

calxalot wrote: Wed Dec 17, 2025 10:51 pm It has never been tried on Apple silicon, which is not any of those things.
You have a chance to try :P
Have you?
FAH Omega tester
Joe_H
Site Admin
Posts: 8289
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Studio M1 Max 32 GB smp6
Mac Hack i7-7700K 48 GB smp4
Location: W. MA

Re: Apple M1, M2, M3, M4

Post by Joe_H »

I vaguely remember reading that someone had run a simulation using OpenMM directly on one of the Apple Silicon Macs after they were introduced and was able to get results. It was some time back, so I don't recall any details.

OpenMM does have the capability to have FP64 calculations done on the CPU when a GPU doesn't support them. That wasn't used for the F@h GPU cores because, unless there were a separate core for GPUs without FP64, enabling that option would slow down calculations for all GPUs that do support FP64. And having separate cores, with and without FP64 diverted to the CPU, would have required major changes to the client and server code being used at the time.
Artoria2e5
Posts: 7
Joined: Mon Dec 15, 2025 2:48 pm

Re: Apple M1, M2, M3, M4

Post by Artoria2e5 »

The Apple Silicon chips indeed do not have fp64. In fact, the Metal Shading Language doesn't have a double data type at all.

I think it would be very reasonable to compile a version of OpenMM with CPU-resident fp64 for macOS arm64 since the vast majority of GPUs attached to such a platform would be Apple Silicon (you *could* attach an AMD GPU to a mac studio, or even do some thunderbolt magic, but that's probably less than 0.1% of potential folders).

RE: detection.

Currently the PCI dependence is caused purely by FAH client code, as cbang's ComputeDevice appears capable of dealing with devices without PCI. Basically, the way GPUResources is written assumes that every acceleration device worth using has "cd.isPCIValid()" true. It then uses the PCI information in two ways:

* one is to use getPCIID(), i.e. bus location, to dedupe between OpenCL, CUDA, & HIP
* the other is to use PCI Vendor ID + Device ID to check gpu.json (gpuIndex) support. gpuIndex is implemented in cbang

For the first use I can offer no useful replacement. Luckily, non-PCI OpenCL devices usually don't have CUDA or HIP anyway, whether Apple GPUs or weirder accelerators.

For the second use the best I can offer is OpenCL vendor ID + model string. This could be a cbang feature request for gpuIndex.
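
For concreteness, here is a rough, untested sketch of that query against the raw OpenCL C API; it reads the same fields that clinfo prints above as Device Vendor ID and Device Name:

Code: Select all

// Enumerate GPU devices and print OpenCL vendor ID + model string.
// On macOS: clang++ probe.cpp -framework OpenCL
#include <OpenCL/opencl.h>  // <CL/cl.h> on other platforms
#include <cstdio>
#include <vector>

int main() {
  cl_uint numPlatforms = 0;
  clGetPlatformIDs(0, nullptr, &numPlatforms);
  std::vector<cl_platform_id> platforms(numPlatforms);
  clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

  for (cl_platform_id p : platforms) {
    cl_uint numDevices = 0;
    if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
      continue;
    std::vector<cl_device_id> devices(numDevices);
    clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

    for (cl_device_id d : devices) {
      cl_uint vendorID = 0;
      char name[256] = {};
      clGetDeviceInfo(d, CL_DEVICE_VENDOR_ID, sizeof(vendorID), &vendorID, nullptr);
      clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
      std::printf("vendor 0x%x  model \"%s\"\n", vendorID, name);  // e.g. 0x1027f00, "Apple M4"
    }
  }
  return 0;
}
That vendor ID + name pair is what I'd propose keying the index on.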
Last edited by Artoria2e5 on Thu Dec 18, 2025 5:51 am, edited 2 times in total.
Joe_H
Site Admin
Posts: 8289
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Studio M1 Max 32 GB smp6
Mac Hack i7-7700K 48 GB smp4
Location: W. MA

Re: Apple M1, M2, M3, M4

Post by Joe_H »

Artoria2e5 wrote: Thu Dec 18, 2025 5:48 am I think it would be very reasonable to compile a version of OpenMM with CPU-resident fp64 for macOS arm64 since the vast majority of GPUs attached to such a platform would be Apple Silicon (you *could* attach an AMD GPU to a mac studio, or even do some thunderbolt magic, but that's probably less than 0.1% of potential folders).
It might or might not be reasonable to do so. But that presupposes that the client and server code were ready to use such a folding core. As for attaching an AMD GPU, well, that is just not going to happen anytime soon. There is currently no support in macOS for attaching an external GPU through Thunderbolt on any of the M-series Macs; that ended with the last of the Intel-based Macs. And as far as I understand it, Apple has not even included the necessary hooks to install a driver for an AMD GPU on the current Apple Silicon Macs, even if you could recompile one from the open source drivers.

You are mostly handwaving about detection; that would take a rewrite of the code and testing to make certain it would work. That is not something the people directing the F@h project have put on the one full-time developer's schedule. Currently the v8.5.5 public beta is out; it becomes a release candidate if no major problems are found. After that the developer will be working on something else, not the client or server code, for a while. There are some volunteer developers accepted to work on the client code, but GPU folding cores are developed by some of the researchers.
muziqaz
Posts: 2299
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 9950x, 9950x3d, 7950x3d, 5950x, 5800x3d
7900xtx, RX9070, Radeon 7, 5700xt, 6900xt, RX550, Intel B580
Location: London
Contact:

Re: Apple M1, M2, M3, M4

Post by muziqaz »

Besides, what are the potential performance returns? A couple of OpenCL-powered AMD GPUs? We already have orders of magnitude more such devices from Windows/Linux users, and they are not even scratching the surface of Nvidia user numbers. So Apple AMD users wouldn't even scratch the surface of x86 AMD user numbers. Hardly worth the effort and time to develop something for that.
Or if you want M-chip iGPUs to fold, again the overall performance addition won't even register on the overall scale of FAH. That's like writing your modern app in assembly just for the heck of it. Again, it would be a waste of developer time for such a small performance return.
We have been waiting for 2 years for a HIP fahcore for AMD, which will give a massive boost in overall FAH performance throughout.
In an ideal world we would love to have support for every single device in the world, but time constraints and priorities are the reality :(
FAH Omega tester
calxalot
Site Moderator
Posts: 1743
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA
Contact:

Re: Apple M1, M2, M3, M4

Post by calxalot »

All Apple Silicon has a decent iGPU. There is no need to detect presence, only to classify by M-series generation (for capabilities) and by number of GPU cores for reasonable WU assignment (since the core count can be anywhere from 7 to 80).

Some scheme for gpus.json would have to be dreamed up.
There is a vendor id, just no PCI device id.
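
If the core-count route is ever wanted, the number is also reachable outside OpenCL. Here is a rough, untested sketch via IOKit, assuming the AGXAccelerator registry entry exposes a gpu-core-count property (that is what the usual ioreg-based monitoring tools read):

Code: Select all

// Read the Apple GPU core count from the IORegistry. Untested sketch.
// Build: clang++ cores.cpp -framework IOKit -framework CoreFoundation
// kIOMainPortDefault needs macOS 12+; use kIOMasterPortDefault before that.
#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdio>

int main() {
  io_iterator_t it = IO_OBJECT_NULL;
  if (IOServiceGetMatchingServices(kIOMainPortDefault,
                                   IOServiceMatching("AGXAccelerator"),
                                   &it) != KERN_SUCCESS)
    return 1;

  for (io_object_t dev; (dev = IOIteratorNext(it)); IOObjectRelease(dev)) {
    CFTypeRef prop = IORegistryEntryCreateCFProperty(
        dev, CFSTR("gpu-core-count"), kCFAllocatorDefault, 0);
    if (prop && CFGetTypeID(prop) == CFNumberGetTypeID()) {
      int cores = 0;
      CFNumberGetValue((CFNumberRef)prop, kCFNumberIntType, &cores);
      std::printf("GPU cores: %d\n", cores);  // 7 to 80 depending on the chip
    }
    if (prop) CFRelease(prop);
  }

  IOObjectRelease(it);
  return 0;
}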
calxalot
Site Moderator
Posts: 1743
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA
Contact:

Re: Apple M1, M2, M3, M4

Post by calxalot »

The cores also use cbang, so there does need to be some detection.
And another ComputeDevice type.

Artoria2e5 has read the code and is not hand waving.
Artoria2e5
Posts: 7
Joined: Mon Dec 15, 2025 2:48 pm

Re: Apple M1, M2, M3, M4

Post by Artoria2e5 »

This thread is about introducing Apple iGPUs, not AMD GPUs on macOS, and I intend to keep it that way. If Raspberry Pis have a role in FAH, then iGPUs with similar power consumption and vastly more FLOPS definitely do. Especially when upstream OpenMM has been tested against the iGPU in question.

That said, there's very little I can do with the cores being managed by the researchers themselves. Welp. Most of what I've said here applies to HIP too, except for the runtime needing to be manually installed on Windows, I guess.

In other news, GROMACS now has a native HIP backend (not going through SYCL), though it's just the NBNxM kernels: https://manual.gromacs.org/2025.1/relea ... ility.html. From the official side they still recommend using AdaptiveCpp, though. Really, what's "experimental" about using SYCL now that it's the recommended backend?

* * *
It would be easiest to make a separate index, if you don't mind littering the data directory. Don't use this name (it's terrible) but it's something along the lines of:

Code: Select all

// Untested sketch; adjust the cbang include path to the real layout.
#include <cbang/json/Serializable.h>

#include <cstdint>
#include <set>
#include <string>

using namespace cb;

class NonPciXPU : public JSON::Serializable {
public:
  uint32_t vendorID;   // OpenCL has 32-bit IDs. Apple uses a vendor id > 0xffff!
  std::string model;
  uint16_t type;
  uint16_t species;

  bool operator<(const NonPciXPU &xpu) const;
  // Adapt initializer & other methods from GPU accordingly.
};

// well, strings are comparable too
bool NonPciXPU::operator<(const NonPciXPU &xpu) const {
  if (vendorID != xpu.vendorID) return vendorID < xpu.vendorID;
  return model < xpu.model;
}

class NonPciXPUIndex : public JSON::Serializable {
  typedef std::set<NonPciXPU> xpus_t;
  xpus_t xpus;
  // Adapt initializer & methods from GPUIndex accordingly.
};
With the corresponding JSON structure being:

Code: Select all

[
  {"vendor": 0, "model": "[NOTE Editing the local copy of this file will lead to a FAH error.], "type": 0, "species": 0},
  {"vendor": 16940800, "model": "Apple M3", "type": 4, "species": 3},
  {"vendor": 16940800, "model": "Apple M4", "type": 4, "species": 4}
]
* * *
calxalot
Site Moderator
Posts: 1743
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA
Contact:

Re: Apple M1, M2, M3, M4

Post by calxalot »

If we don’t care about exact core counts, model could be the full CPU name, e.g. “Apple M4 Max”.
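
Getting that string is cheap, one sysctl call. A rough sketch, assuming the usual key (machdep.cpu.brand_string reports the marketing name on Apple Silicon):

Code: Select all

// Print the SoC marketing name, e.g. "Apple M4 Max". Untested sketch.
#include <sys/sysctl.h>
#include <cstdio>

int main() {
  char brand[128] = {};
  size_t len = sizeof(brand);
  if (sysctlbyname("machdep.cpu.brand_string", brand, &len, nullptr, 0) == 0)
    std::printf("%s\n", brand);
  return 0;
}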
Joe_H
Site Admin
Posts: 8289
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Studio M1 Max 32 GB smp6
Mac Hack i7-7700K 48 GB smp4
Location: W. MA

Re: Apple M1, M2, M3, M4

Post by Joe_H »

calxalot wrote: Thu Dec 18, 2025 8:40 am The cores also use cbang, so there does need to be some detection.
And another ComputeDevice type.

Artoria2e5 has read the code and is not hand waving.
I consider it hand waving of the sort that usually comes with the statement "It's just a simple matter of programming". The change cascades at least into the client and server code and would need to be done in a way that doesn't break existing support for GPUs on Windows and Linux.