GPUs Public Preview: Run AI workloads on H100, A100, L40S, and more
Welcome to day two of Koyeb launch week. Today we're announcing not one, but two major pieces of news: GPUs are now available in public preview, and our lineup now ranges from 20GB to 80GB of VRAM with A100 and H100 cards. You can now run high-precision calculations with FP64 instruction support and a massive 2TB/s of memory bandwidth on the H100. With prices ranging from $0.50/hr to $3.30/hr, always billed by the second, you can run training, fine-tuning, and inference workloads on a card adapted to your needs.
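To make per-second billing concrete, here is a minimal sketch of estimating a job's cost. The `estimate_cost` helper and the rate table are purely illustrative, not a Koyeb API or official price list; the rates simply mirror the $0.50/hr to $3.30/hr range quoted above, and the mapping of specific cards to specific prices is an assumption.

```python
# Illustrative per-second billing estimate; the helper and rate table
# are hypothetical, not a Koyeb API or official price list.

# Example hourly rates mirroring the range quoted above ($0.50-$3.30/hr).
HOURLY_RATES = {
    "l40s": 0.50,  # assumed low end of the range
    "h100": 3.30,  # high end of the range quoted in the post
}

def estimate_cost(gpu: str, seconds: int) -> float:
    """Per-second billing: cost = seconds * hourly_rate / 3600."""
    return seconds * HOURLY_RATES[gpu] / 3600

# A 30-minute job on an H100: 0.5 h * $3.30/hr = $1.65
print(f"${estimate_cost('h100', 30 * 60):.2f}")
```

Because billing is per second, a short fine-tuning or inference run only costs a fraction of the hourly rate, rather than being rounded up to a full hour.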