When running LLMs in Docker on an Apple M3 or M4 chip, they will run in CPU-only mode no matter how high-end the chip is, because Docker can't pass through Apple's GPU — its GPU support is limited to Nvidia and AMD Radeon cards.
If you're developing LLMs in Docker, consider getting a Framework laptop with an Nvidia or Radeon GPU instead.
Source: I develop an AI agent framework that runs LLMs inside Docker on an M3 Max (https://kdeps.com).