From 3540e10945d9f77472bce6fc8794eed97a02391e Mon Sep 17 00:00:00 2001
From: Tellsanguis
Date: Sat, 22 Nov 2025 23:59:27 +0100
Subject: [PATCH] =?UTF-8?q?Correction=20pr=C3=A9requis=20n=C5=93uds=20Ceph?=
 =?UTF-8?q?=20:=203=20minimum,=205=20optimal?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add reference to Ceph benchmark article showing scale-out benefits
---
 blog/2025-11-22-stockage-distribue-proxmox.md | 4 ++--
 .../2025-11-22-stockage-distribue-proxmox.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/blog/2025-11-22-stockage-distribue-proxmox.md b/blog/2025-11-22-stockage-distribue-proxmox.md
index d8fe001..b9d0dff 100644
--- a/blog/2025-11-22-stockage-distribue-proxmox.md
+++ b/blog/2025-11-22-stockage-distribue-proxmox.md
@@ -58,7 +58,7 @@ Ceph est la solution de stockage distribué la plus mise en avant par Proxmox. E
 **Inconvénients :**
 
 - **Consommation importante de ressources** : CPU et RAM significatifs pour les processus MON, MGR et OSD
-- **Recommandation officielle de 5 nœuds minimum** pour une configuration optimale (3 MON + répartition des OSD)
+- **3 nœuds minimum, mais [5 recommandés](https://ceph.io/en/news/blog/2019/part-3-rhcs-bluestore-performance-scalability-3-vs-5-nodes/)** pour des performances optimales (scalabilité quasi-linéaire grâce à l'architecture scale-out de Ceph)
 - **Nécessite un réseau 10 Gbps** pour des performances acceptables
 - Complexité opérationnelle élevée malgré la simplification apportée par Proxmox
 
@@ -107,7 +107,7 @@ Au regard des contraintes de mon homelab, Linstor DRBD m'a semblé être le choi
 
 | Critère | Ceph | ZFS Réplication | Linstor DRBD |
 |---------|------|-----------------|--------------|
-| Nombre de nœuds minimum | 5 (optimal) | 2 | 3 (2 + témoin) |
+| Nombre de nœuds minimum | 3 ([5 optimal](https://ceph.io/en/news/blog/2019/part-3-rhcs-bluestore-performance-scalability-3-vs-5-nodes/)) | 2 | 3 (2 + témoin) |
 | Réseau recommandé | 10 Gbps | 1 Gbps | 1 Gbps (optimal 10 Gbps) |
 | Type de réplication | Synchrone | Asynchrone | Synchrone |
 | Live migration | Oui | Non | Oui |
diff --git a/i18n/en/docusaurus-plugin-content-blog/2025-11-22-stockage-distribue-proxmox.md b/i18n/en/docusaurus-plugin-content-blog/2025-11-22-stockage-distribue-proxmox.md
index 0782f42..c6a76ec 100644
--- a/i18n/en/docusaurus-plugin-content-blog/2025-11-22-stockage-distribue-proxmox.md
+++ b/i18n/en/docusaurus-plugin-content-blog/2025-11-22-stockage-distribue-proxmox.md
@@ -58,7 +58,7 @@ Ceph is the distributed storage solution most promoted by Proxmox. It's directly
 **Disadvantages:**
 
 - **High resource consumption**: significant CPU and RAM for MON, MGR, and OSD processes
-- **Official recommendation of 5 nodes minimum** for optimal configuration (3 MON + OSD distribution)
+- **3 nodes minimum, but [5 recommended](https://ceph.io/en/news/blog/2019/part-3-rhcs-bluestore-performance-scalability-3-vs-5-nodes/)** for optimal performance (near-linear scalability thanks to Ceph's scale-out architecture)
 - **Requires 10 Gbps network** for acceptable performance
 - High operational complexity despite Proxmox's simplification efforts
 
@@ -107,7 +107,7 @@ Given my homelab's constraints, Linstor DRBD seemed to be the most suitable choi
 
 | Criterion | Ceph | ZFS Replication | Linstor DRBD |
 |-----------|------|-----------------|--------------|
-| Minimum nodes | 5 (optimal) | 2 | 3 (2 + witness) |
+| Minimum nodes | 3 ([5 optimal](https://ceph.io/en/news/blog/2019/part-3-rhcs-bluestore-performance-scalability-3-vs-5-nodes/)) | 2 | 3 (2 + witness) |
 | Recommended network | 10 Gbps | 1 Gbps | 1 Gbps (optimal 10 Gbps) |
 | Replication type | Synchronous | Asynchronous | Synchronous |
 | Live migration | Yes | No | Yes |