Compare commits

..

14 commits

Author SHA1 Message Date
2d680cec4e Add cloud-init snippets
Some checks failed
CD - Deploy Infrastructure / Terraform Validation (push) Successful in 16s
CD - Deploy Infrastructure / Deploy on pve1 (push) Failing after 7s
CD - Deploy Infrastructure / Deploy on pve2 (push) Failing after 7s
CD - Deploy Infrastructure / Deploy on pve3 (push) Failing after 8s
CD - Deploy Infrastructure / Validate K3s Cluster (push) Has been skipped
CD - Deploy Infrastructure / Deployment Notification (push) Failing after 1s
2025-12-09 13:44:57 +01:00
104df8d174 Add cloud-init disks to the Terraform configuration
Some checks failed
CD - Deploy Infrastructure / Terraform Validation (push) Successful in 17s
CD - Deploy Infrastructure / Deploy on pve1 (push) Failing after 30s
CD - Deploy Infrastructure / Deploy on pve2 (push) Failing after 27s
CD - Deploy Infrastructure / Deploy on pve3 (push) Failing after 42s
CD - Deploy Infrastructure / Validate K3s Cluster (push) Has been skipped
CD - Deploy Infrastructure / Deployment Notification (push) Failing after 1s
2025-12-09 13:15:51 +01:00
3b5f1fc2d2 feat: Local storage configuration and shared K3S token
Some checks failed
CD - Deploy Infrastructure / Terraform Validation (push) Successful in 17s
CD - Deploy Infrastructure / Deploy on pve1 (push) Successful in 2m12s
CD - Deploy Infrastructure / Deploy on pve2 (push) Successful in 2m11s
CD - Deploy Infrastructure / Deploy on pve3 (push) Successful in 2m28s
CD - Deploy Infrastructure / Validate K3s Cluster (push) Successful in 5m3s
CD - Deploy Infrastructure / Deployment Notification (push) Failing after 1s
- Switch acemagician and elitedesk to local-nvme storage (40G)
- Share the K3S token between servers via cloud-init for the HA cluster
- Configure FluxCD with a Forgejo GitRepository
- Deploy Hello World via FluxCD
- Kubernetes manifests for the demo application
2025-12-09 11:55:19 +01:00
a818aab4be fix(terraform): Fixed VMIDs for VMs to avoid duplication
Some checks failed
CD - Deploy Infrastructure / Terraform Validation (push) Successful in 17s
CD - Deploy Infrastructure / Deploy on pve1 (push) Failing after 10s
CD - Deploy Infrastructure / Deploy on pve2 (push) Failing after 7s
CD - Deploy Infrastructure / Deploy on pve3 (push) Failing after 7s
CD - Deploy Infrastructure / Validate K3s Cluster (push) Has been skipped
CD - Deploy Infrastructure / Deployment Notification (push) Failing after 1s
Assign a specific VMID to each VM:
- k3s-server-1: 1000
- k3s-server-2: 1001
- etcd-witness: 1002
2025-11-26 19:41:52 +01:00
5f6df07fbe fix(terraform): Cluster node and storage configuration 2025-11-26 19:33:19 +01:00
155de75fbf feat(terraform): Update Proxmox provider to v3.0.2-rc05
Update the provider version and adjust resource syntax for compatibility.
2025-11-26 19:31:03 +01:00
8c738e9e19 fix(cd): Add OpenTofu setup step to all deployment jobs
Deployment jobs were failing with 'tofu: command not found'. Added a Setup OpenTofu step to the deploy-pve1, deploy-pve2, and deploy-pve3 jobs.
2025-11-13 20:03:49 +01:00
8687665946 fix(cd): Replace reusable workflow with inline CI jobs
Forgejo does not fully support reusable workflows (uses:). The Terraform validation job is duplicated directly in the CD workflow to avoid a blocking state.
2025-11-13 20:00:53 +01:00
83f9b4def8 fix(ci): Add workflow_call trigger for CD integration
The CI workflow needs workflow_call to be callable by the CD workflow. Without it, the CD workflow cannot invoke CI as a reusable workflow.
2025-11-13 19:56:13 +01:00
dc5fc28ff1 fix(ci): Exclude main branch from the CI workflow
The CI workflow now runs only on feature branches and PRs. On main, only the CD workflow runs (which calls CI internally). This avoids duplicate CI runs.
2025-11-13 19:52:52 +01:00
ae0f3754ad fix(ci): Use environment variables instead of a tfvars file
Removed the terraform.tfvars.example copy that was overwriting secret values. Secrets from Forgejo are now injected exclusively through TF_VAR_* environment variables.
2025-11-13 19:47:47 +01:00
c26289c262 fix(terraform): Update token ID in the example file from terraform to opentofu
The example file hardcoded root@pam!terraform, which overrode the secret value. Updated to match the actual token name.
2025-11-13 19:45:17 +01:00
9cb0737560 fix(ci): Rename secrets to avoid the FORGEJO_ prefix restriction
Forgejo does not allow secret names starting with FORGEJO_. Renamed:
- FORGEJO_TOKEN -> GIT_TOKEN
- FORGEJO_REPO_URL -> GIT_REPO_URL
2025-11-13 19:41:46 +01:00
1cdc40f96e fix(ci): Downgrade upload-artifact to v3 for Forgejo compatibility
upload-artifact@v4 is not supported on Forgejo/GHES. Downgraded to v3 so that artifact uploads work correctly.
2025-11-13 19:30:40 +01:00
26 changed files with 591 additions and 156 deletions


@@ -2,8 +2,11 @@ name: CI - Validation
 on:
   push:
-    branches: ['**'] # All branches
+    branches:
+      - '**'
+      - '!main' # Exclude main branch (CD workflow handles it)
   pull_request:
+  workflow_call: # Allow this workflow to be called by other workflows

 jobs:
   ci-terraform:
@@ -42,21 +45,33 @@ jobs:
             echo "--- Planning $dir ---"
             (
               cd "$dir" && \
-              cp ../terraform.tfvars.example terraform.tfvars && \
               tofu init && \
               tofu plan -out="tfplan-$(basename $dir)" || echo "WARNING: Plan failed for $(basename $dir) - node may be unavailable"
             )
           fi
         done
       env:
+        TF_VAR_proxmox_api_url: "https://192.168.100.10:8006/api2/json"
         TF_VAR_proxmox_token_id: ${{ secrets.PROXMOX_TOKEN_ID }}
         TF_VAR_proxmox_token_secret: ${{ secrets.PROXMOX_TOKEN_SECRET }}
+        TF_VAR_proxmox_tls_insecure: "true"
         TF_VAR_ssh_public_key: ${{ secrets.SSH_PUBLIC_KEY }}
-        TF_VAR_forgejo_token: ${{ secrets.FORGEJO_TOKEN }}
+        TF_VAR_forgejo_token: ${{ secrets.GIT_TOKEN }}
+        TF_VAR_forgejo_repo_url: ${{ secrets.GIT_REPO_URL }}
+        TF_VAR_k3s_version: "v1.28.5+k3s1"
+        TF_VAR_ubuntu_template: "ubuntu-2404-cloudinit"
+        TF_VAR_storage_pool: "linstor_storage"
+        TF_VAR_snippets_storage: "local"
+        TF_VAR_k3s_network_bridge: "k3s"
+        TF_VAR_k3s_gateway: "10.100.20.1"
+        TF_VAR_k3s_dns: '["10.100.20.1", "1.1.1.1"]'
+        TF_VAR_k3s_server_1_config: '{ ip = "10.100.20.10/24", cores = 6, memory = 12288, disk_size = "100G" }'
+        TF_VAR_k3s_server_2_config: '{ ip = "10.100.20.20/24", cores = 6, memory = 12288, disk_size = "100G" }'
+        TF_VAR_etcd_witness_config: '{ ip = "10.100.20.30/24", cores = 2, memory = 2048, disk_size = "20G" }'
     - name: Upload Terraform Plan
       if: github.event_name == 'push' && github.ref == 'refs/heads/main'
-      uses: actions/upload-artifact@v4
+      uses: actions/upload-artifact@v3
       with:
         name: tfplans
         path: terraform/pve*/tfplan-*


@@ -4,23 +4,82 @@ on:
   push:
     branches:
       - main
-  workflow_dispatch: # Allow manual trigger
+  workflow_dispatch:

 jobs:
-  # Run CI first
-  ci:
-    uses: ./.forgejo/workflows/ci.yml
-    secrets: inherit
+  ci-terraform:
+    name: Terraform Validation
+    runs-on: self-hosted
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+      - name: Setup OpenTofu
+        run: |
+          if ! command -v tofu &> /dev/null; then
+            curl -fsSL https://get.opentofu.org/install-opentofu.sh | bash -s -- --install-method standalone --opentofu-version 1.10.7
+          fi
+      - name: Terraform Format Check
+        run: |
+          cd terraform
+          tofu fmt -check -recursive
+        continue-on-error: false
+      - name: Terraform Validate
+        run: |
+          for dir in terraform/pve*; do
+            if [ -d "$dir" ]; then
+              echo "--- Validating $dir ---"
+              (cd "$dir" && tofu init -backend=false && tofu validate)
+            fi
+          done
+      - name: Terraform Plan
+        run: |
+          for dir in terraform/pve*; do
+            if [ -d "$dir" ]; then
+              echo "--- Planning $dir ---"
+              (
+                cd "$dir" && \
+                tofu init && \
+                tofu plan || echo "WARNING: Plan failed for $(basename $dir) - node may be unavailable"
+              )
+            fi
+          done
+        env:
+          TF_VAR_proxmox_api_url: "https://192.168.100.10:8006/api2/json"
+          TF_VAR_proxmox_token_id: ${{ secrets.PROXMOX_TOKEN_ID }}
+          TF_VAR_proxmox_token_secret: ${{ secrets.PROXMOX_TOKEN_SECRET }}
+          TF_VAR_proxmox_tls_insecure: "true"
+          TF_VAR_ssh_public_key: ${{ secrets.SSH_PUBLIC_KEY }}
+          TF_VAR_forgejo_token: ${{ secrets.GIT_TOKEN }}
+          TF_VAR_forgejo_repo_url: ${{ secrets.GIT_REPO_URL }}
+          TF_VAR_k3s_version: "v1.28.5+k3s1"
+          TF_VAR_ubuntu_template: "ubuntu-2404-cloudinit"
+          TF_VAR_storage_pool: "linstor_storage"
+          TF_VAR_snippets_storage: "local"
+          TF_VAR_k3s_network_bridge: "k3s"
+          TF_VAR_k3s_gateway: "10.100.20.1"
+          TF_VAR_k3s_dns: '["10.100.20.1", "1.1.1.1"]'
+          TF_VAR_k3s_token: ${{ secrets.K3S_TOKEN }}
+          TF_VAR_k3s_server_1_config: '{ ip = "10.100.20.10/24", cores = 6, memory = 12288, disk_size = "40G" }'
+          TF_VAR_k3s_server_2_config: '{ ip = "10.100.20.20/24", cores = 6, memory = 12288, disk_size = "40G" }'
+          TF_VAR_etcd_witness_config: '{ ip = "10.100.20.30/24", cores = 2, memory = 2048, disk_size = "20G" }'

-  # Deploy infrastructure in parallel
   deploy-pve1:
     name: Deploy on pve1
     runs-on: self-hosted
-    needs: ci
+    needs: ci-terraform
     continue-on-error: true
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+      - name: Setup OpenTofu
+        run: |
+          if ! command -v tofu &> /dev/null; then
+            curl -fsSL https://get.opentofu.org/install-opentofu.sh | bash -s -- --install-method standalone --opentofu-version 1.10.7
+          fi
       - name: Terraform Apply on pve1
         run: |
           cd terraform/pve1
@@ -28,16 +87,18 @@ jobs:
           proxmox_token_id = "${{ secrets.PROXMOX_TOKEN_ID }}"
           proxmox_token_secret = "${{ secrets.PROXMOX_TOKEN_SECRET }}"
           ssh_public_key = "${{ secrets.SSH_PUBLIC_KEY }}"
-          forgejo_token = "${{ secrets.FORGEJO_TOKEN }}"
-          forgejo_repo_url = "${{ secrets.FORGEJO_REPO_URL }}"
+          forgejo_token = "${{ secrets.GIT_TOKEN }}"
+          forgejo_repo_url = "${{ secrets.GIT_REPO_URL }}"
           k3s_version = "v1.28.5+k3s1"
+          k3s_token = "${{ secrets.K3S_TOKEN }}"
           ubuntu_template = "ubuntu-2404-cloudinit"
           storage_pool = "linstor_storage"
+          k3s_server_1_storage_pool = "local-nvme"
           snippets_storage = "local"
           k3s_network_bridge = "k3s"
           k3s_gateway = "10.100.20.1"
           k3s_dns = ["10.100.20.1", "1.1.1.1"]
-          k3s_server_1_config = { ip = "10.100.20.10/24", cores = 6, memory = 12288, disk_size = "100G" }
+          k3s_server_1_config = { ip = "10.100.20.10/24", cores = 6, memory = 12288, disk_size = "40G" }
           EOF
           tofu init
           tofu apply -auto-approve
@@ -45,28 +106,35 @@ jobs:
   deploy-pve2:
     name: Deploy on pve2
     runs-on: self-hosted
-    needs: ci
+    needs: ci-terraform
     continue-on-error: true
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+      - name: Setup OpenTofu
+        run: |
+          if ! command -v tofu &> /dev/null; then
+            curl -fsSL https://get.opentofu.org/install-opentofu.sh | bash -s -- --install-method standalone --opentofu-version 1.10.7
+          fi
       - name: Terraform Apply on pve2
         run: |
           cd terraform/pve2
           cat > terraform.tfvars <<EOF
           proxmox_token_id = "${{ secrets.PROXMOX_TOKEN_ID }}"
           proxmox_token_secret = "${{ secrets.PROXMOX_TOKEN_SECRET }}"
           ssh_public_key = "${{ secrets.SSH_PUBLIC_KEY }}"
-          forgejo_token = "${{ secrets.FORGEJO_TOKEN }}"
-          forgejo_repo_url = "${{ secrets.FORGEJO_REPO_URL }}"
+          forgejo_token = "${{ secrets.GIT_TOKEN }}"
+          forgejo_repo_url = "${{ secrets.GIT_REPO_URL }}"
           k3s_version = "v1.28.5+k3s1"
+          k3s_token = "${{ secrets.K3S_TOKEN }}"
           ubuntu_template = "ubuntu-2404-cloudinit"
           storage_pool = "linstor_storage"
+          k3s_server_2_storage_pool = "local-nvme"
           snippets_storage = "local"
           k3s_network_bridge = "k3s"
           k3s_gateway = "10.100.20.1"
           k3s_dns = ["10.100.20.1", "1.1.1.1"]
-          k3s_server_2_config = { ip = "10.100.20.20/24", cores = 6, memory = 12288, disk_size = "100G" }
+          k3s_server_2_config = { ip = "10.100.20.20/24", cores = 6, memory = 12288, disk_size = "40G" }
           EOF
           tofu init
           tofu apply -auto-approve
@@ -74,28 +142,35 @@ jobs:
   deploy-pve3:
     name: Deploy on pve3
     runs-on: self-hosted
-    needs: ci
+    needs: ci-terraform
     continue-on-error: true
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+      - name: Setup OpenTofu
+        run: |
+          if ! command -v tofu &> /dev/null; then
+            curl -fsSL https://get.opentofu.org/install-opentofu.sh | bash -s -- --install-method standalone --opentofu-version 1.10.7
+          fi
       - name: Terraform Apply on pve3
         run: |
          cd terraform/pve3
           cat > terraform.tfvars <<EOF
           proxmox_token_id = "${{ secrets.PROXMOX_TOKEN_ID }}"
           proxmox_token_secret = "${{ secrets.PROXMOX_TOKEN_SECRET }}"
           ssh_public_key = "${{ secrets.SSH_PUBLIC_KEY }}"
-          forgejo_token = "${{ secrets.FORGEJO_TOKEN }}"
-          forgejo_repo_url = "${{ secrets.FORGEJO_REPO_URL }}"
+          forgejo_token = "${{ secrets.GIT_TOKEN }}"
+          forgejo_repo_url = "${{ secrets.GIT_REPO_URL }}"
           k3s_version = "v1.28.5+k3s1"
+          k3s_token = "${{ secrets.K3S_TOKEN }}"
           ubuntu_template = "ubuntu-2404-cloudinit"
           storage_pool = "linstor_storage"
+          etcd_witness_storage_pool = "local-lvm"
           snippets_storage = "local"
           k3s_network_bridge = "k3s"
           k3s_gateway = "10.100.20.1"
           k3s_dns = ["10.100.20.1", "1.1.1.1"]
           etcd_witness_config = { ip = "10.100.20.30/24", cores = 2, memory = 2048, disk_size = "20G" }
           EOF
           tofu init
           tofu apply -auto-approve
@@ -119,13 +194,12 @@ jobs:
       - name: Wait for K3s cluster
         run: |
           echo "Waiting for K3s cluster to be ready..."
-          sleep 300 # Wait 5 minutes for ansible-pull to configure K3s
-      - name: Check cluster status (optional)
+          sleep 300
+      - name: Check cluster status
         run: |
           echo "Cluster validation completed"
         continue-on-error: true

-  # Notify on completion
   notify:
     name: Deployment Notification
     runs-on: self-hosted


@@ -40,8 +40,8 @@ jobs:
           proxmox_token_id = "${{ secrets.PROXMOX_TOKEN_ID }}"
           proxmox_token_secret = "${{ secrets.PROXMOX_TOKEN_SECRET }}"
           ssh_public_key = "${{ secrets.SSH_PUBLIC_KEY }}"
-          forgejo_token = "${{ secrets.FORGEJO_TOKEN }}"
-          forgejo_repo_url = "${{ secrets.FORGEJO_REPO_URL }}"
+          forgejo_token = "${{ secrets.GIT_TOKEN }}"
+          forgejo_repo_url = "${{ secrets.GIT_REPO_URL }}"
           EOF
           tofu init


@@ -1,44 +1,32 @@
 ---
-# Global variables for all nodes
-
-# K3s Configuration
 k3s_version: "v1.28.5+k3s1"
 k3s_install_url: "https://get.k3s.io"

-# K3s Server Configuration
 k3s_server_1_ip: "10.100.20.10"
 k3s_server_2_ip: "10.100.20.20"
 k3s_witness_ip: "10.100.20.30"

-# K3s token (shared between servers)
-# In production, this should be stored in a vault
 k3s_token_file: "/etc/rancher/k3s/token"

-# Network Configuration
 pod_cidr: "10.42.0.0/16"
 service_cidr: "10.43.0.0/16"
 cluster_dns: "10.43.0.10"

-# System Configuration
 timezone: "Europe/Paris"
 swap_enabled: false

-# Unattended Upgrades Configuration
 unattended_upgrades_enabled: true
 unattended_upgrades_automatic_reboot: true
 unattended_upgrades_automatic_reboot_with_users: false

-# Reboot schedule (staggered to maintain availability)
 reboot_schedule:
   k3s-server-1: "02:00"
   k3s-server-2: "04:00"
   etcd-witness: "06:00"

-# FluxCD Configuration
 flux_version: "v2.2.0"
 flux_namespace: "flux-system"

-# System packages to install on all nodes
 common_packages:
   - curl
   - wget
@@ -52,7 +40,6 @@ common_packages:
   - python3
   - python3-pip

-# Kernel parameters for K3s
 sysctl_config:
   net.bridge.bridge-nf-call-iptables: 1
   net.bridge.bridge-nf-call-ip6tables: 1


@@ -1,19 +1,19 @@
 ---
-# etcd witness node configuration
-# This node participates in etcd quorum but does not run K8s workloads

 - name: Check if K3s is already installed
   stat:
     path: /usr/local/bin/k3s
   register: k3s_binary

-- name: Get K3s token from first server
+- name: Load K3s token from environment
   set_fact:
-    k3s_token: >-
-      {{
-        lookup('file', k3s_token_file, errors='ignore')
-        | default('PLACEHOLDER')
-      }}
+    k3s_token: "{{ lookup('env', 'K3S_TOKEN') }}"
+
+- name: Wait for first server API
+  wait_for:
+    host: "{{ k3s_server_1_ip }}"
+    port: 6443
+    delay: 60
+    timeout: 900

 - name: Install K3s as server (witness mode)
   shell: >


@@ -1,19 +1,13 @@
 #!/bin/bash
-# K3s pre-reboot script
-# Drains the node before system reboot to migrate workloads gracefully

 set -e

-# Only run if k3s is active
 if systemctl is-active --quiet k3s; then
     NODE_NAME=$(hostname)
     echo "$(date): Starting pre-reboot drain for node $NODE_NAME" | logger -t k3s-pre-reboot

-    # Set KUBECONFIG
     export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

-    # Drain the node (migrate pods to other nodes)
     /usr/local/bin/k3s kubectl drain "$NODE_NAME" \
         --ignore-daemonsets \
         --delete-emptydir-data \


@@ -1,6 +1,4 @@
 ---
-# Install and configure FluxCD

 - name: Check if flux is already installed
   command: k3s kubectl get namespace {{ flux_namespace }}
   register: flux_installed
@@ -44,9 +42,73 @@
   changed_when: false
   when: flux_installed.rc != 0

+- name: Load Forgejo token from environment
+  set_fact:
+    forgejo_token: "{{ lookup('env', 'FORGEJO_TOKEN') }}"
+    forgejo_repo_url: "{{ lookup('env', 'REPO_URL') }}"
+
+- name: Create Forgejo secret for FluxCD
+  shell: |
+    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+    k3s kubectl create secret generic forgejo-auth \
+      --namespace={{ flux_namespace }} \
+      --from-literal=username=git \
+      --from-literal=password={{ forgejo_token }} \
+      --dry-run=client -o yaml | k3s kubectl apply -f -
+  when: flux_installed.rc != 0
+
+- name: Create GitRepository manifest
+  copy:
+    dest: /tmp/gitrepository.yaml
+    content: |
+      apiVersion: source.toolkit.fluxcd.io/v1
+      kind: GitRepository
+      metadata:
+        name: infra
+        namespace: {{ flux_namespace }}
+      spec:
+        interval: 1m
+        url: {{ forgejo_repo_url }}
+        ref:
+          branch: main
+        secretRef:
+          name: forgejo-auth
+    mode: '0644'
+  when: flux_installed.rc != 0
+
+- name: Apply GitRepository
+  shell: |
+    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+    k3s kubectl apply -f /tmp/gitrepository.yaml
+  when: flux_installed.rc != 0
+
+- name: Create Kustomization manifest
+  copy:
+    dest: /tmp/kustomization.yaml
+    content: |
+      apiVersion: kustomize.toolkit.fluxcd.io/v1
+      kind: Kustomization
+      metadata:
+        name: apps
+        namespace: {{ flux_namespace }}
+      spec:
+        interval: 1m
+        sourceRef:
+          kind: GitRepository
+          name: infra
+        path: ./k8s
+        prune: true
+        wait: true
+    mode: '0644'
+  when: flux_installed.rc != 0
+
+- name: Apply Kustomization
+  shell: |
+    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+    k3s kubectl apply -f /tmp/kustomization.yaml
+  when: flux_installed.rc != 0
+
 - name: Display FluxCD installation status
   debug:
-    msg: >-
-      FluxCD installed successfully.
-      Configure GitRepository in kubernetes/flux-system/
+    msg: "FluxCD configured to sync from {{ forgejo_repo_url }}"
   when: flux_installed.rc != 0


@@ -1,6 +1,4 @@
 ---
-# K3s server installation and configuration

 - name: Check if K3s is already installed
   stat:
     path: /usr/local/bin/k3s
@@ -17,10 +15,15 @@
   set_fact:
     is_first_server: "{{ ansible_default_ipv4.address == k3s_server_1_ip }}"

+- name: Load K3s token from environment
+  set_fact:
+    k3s_token: "{{ lookup('env', 'K3S_TOKEN') }}"
+
 - name: Install K3s on first server (cluster-init)
   shell: >
     curl -sfL {{ k3s_install_url }} |
     INSTALL_K3S_VERSION="{{ k3s_version }}"
+    K3S_TOKEN="{{ k3s_token }}"
     sh -s - server
     --cluster-init
     --tls-san {{ k3s_server_1_ip }}
@@ -44,17 +47,13 @@
     timeout: 300
   when: is_first_server

-- name: Get K3s token from first server
-  slurp:
-    src: /var/lib/rancher/k3s/server/node-token
-  register: k3s_token_encoded
-  when: is_first_server
-  run_once: true
-
-- name: Save K3s token
-  set_fact:
-    k3s_token: "{{ k3s_token_encoded.content | b64decode | trim }}"
-  when: is_first_server
+- name: Wait for first server API (second server)
+  wait_for:
+    host: "{{ k3s_server_1_ip }}"
+    port: 6443
+    delay: 30
+    timeout: 600
+  when: not is_first_server

 - name: Install K3s on second server (join cluster)
   shell: >
@@ -62,7 +61,7 @@
     INSTALL_K3S_VERSION="{{ k3s_version }}"
     sh -s - server
     --server https://{{ k3s_server_1_ip }}:6443
-    --token {{ k3s_token | default('PLACEHOLDER') }}
+    --token {{ k3s_token }}
     --tls-san {{ k3s_server_2_ip }}
     --write-kubeconfig-mode 644
     --disable traefik


@@ -1,14 +1,10 @@
 ---
-# Main playbook for K3s GitOps infrastructure
-# This playbook is executed by ansible-pull on each VM

 - name: Configure K3s Infrastructure
   hosts: localhost
   connection: local
   become: true

   vars:
-    # Read node role from file created by cloud-init
     node_role: >-
       {{
         lookup('file', '/etc/node-role', errors='ignore')
@@ -34,14 +30,11 @@
       cache_valid_time: 3600

   roles:
-    # Common role applies to all nodes
     - role: common
-    # K3s server role (server + worker)
     - role: k3s-server
       when: node_role == 'server'
-    # etcd witness role (etcd only, no k8s workloads)
     - role: etcd-witness
       when: node_role == 'witness'


@@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: bashofmann/rancher-demo:1.0.0
          imagePullPolicy: Always
          resources:
            requests:
              memory: "12Mi"
              cpu: "2m"
          ports:
            - containerPort: 8080
              name: web
          env:
            - name: COW_COLOR
              value: purple
          readinessProbe:
            httpGet:
              path: /
              port: web
          livenessProbe:
            httpGet:
              path: /
              port: web


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: demo


@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

snippets/README.md (new file)

@@ -0,0 +1,34 @@
# Cloud-Init Snippets for Proxmox

## Before uploading

Replace the placeholders in each YAML file:

- `YOUR_SSH_PUBLIC_KEY`: your public SSH key
- `YOUR_FORGEJO_REPO_URL`: the Forgejo repository URL (e.g. https://forgejo.tellserv.fr/Tellsanguis/Homelab.git)
- `YOUR_FORGEJO_TOKEN`: your Forgejo token
- `YOUR_K3S_TOKEN`: the K3s cluster token
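A scripted alternative to editing each file by hand — a sketch only; the key and token values below are placeholders you must substitute, and it assumes you run it from the `snippets/` directory:

```shell
# Fill in real values before running; these are placeholders.
SSH_KEY="ssh-ed25519 AAAA... user@host"
REPO_URL="https://forgejo.tellserv.fr/Tellsanguis/Homelab.git"
FORGEJO_TOKEN="changeme"
K3S_TOKEN="changeme"

# Rewrite every cloud-init snippet in place.
for f in cloud-init-*.yaml; do
  [ -f "$f" ] || continue   # skip if the glob matched nothing
  sed -i \
    -e "s|YOUR_SSH_PUBLIC_KEY|$SSH_KEY|" \
    -e "s|YOUR_FORGEJO_REPO_URL|$REPO_URL|" \
    -e "s|YOUR_FORGEJO_TOKEN|$FORGEJO_TOKEN|" \
    -e "s|YOUR_K3S_TOKEN|$K3S_TOKEN|" \
    "$f"
done
```

Using `|` as the sed delimiter avoids having to escape the slashes in the repository URL.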
## Upload via the Proxmox UI

### acemagician (k3s-server-1)

1. Proxmox → acemagician → Datacenter → Storage → local
2. Content → Snippets → Upload
3. Upload `cloud-init-k3s-server-1.yaml`

### elitedesk (k3s-server-2)

1. Proxmox → elitedesk → Datacenter → Storage → local
2. Content → Snippets → Upload
3. Upload `cloud-init-k3s-server-2.yaml`

### thinkpad (etcd-witness)

1. Proxmox → thinkpad → Datacenter → Storage → local
2. Content → Snippets → Upload
3. Upload `cloud-init-etcd-witness.yaml`
## Verification

After uploading, the files should be present at:

- `/var/lib/vz/snippets/cloud-init-k3s-server-1.yaml` (acemagician)
- `/var/lib/vz/snippets/cloud-init-k3s-server-2.yaml` (elitedesk)
- `/var/lib/vz/snippets/cloud-init-etcd-witness.yaml` (thinkpad)


@@ -0,0 +1,50 @@
package_upgrade: true

packages:
  - ansible
  - git
  - curl
  - wget
  - ca-certificates
  - gnupg
  - lsb-release

users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - YOUR_SSH_PUBLIC_KEY
    groups: sudo

timezone: Europe/Paris

write_files:
  - path: /etc/node-role
    content: witness
    permissions: "0644"
  - path: /etc/ansible-pull.conf
    content: |
      REPO_URL=YOUR_FORGEJO_REPO_URL
      FORGEJO_TOKEN=YOUR_FORGEJO_TOKEN
      K3S_VERSION=v1.28.5+k3s1
      K3S_TOKEN=YOUR_K3S_TOKEN
    permissions: "0600"
  - path: /usr/local/bin/ansible-pull-wrapper.sh
    content: |
      #!/bin/bash
      set -e
      source /etc/ansible-pull.conf
      export K3S_TOKEN
      export FORGEJO_TOKEN
      export REPO_URL
      WORK_DIR="/var/lib/ansible-local"
      mkdir -p $WORK_DIR
      cd $WORK_DIR
      REPO_WITH_AUTH=$(echo $REPO_URL | sed "s|https://|https://git:$FORGEJO_TOKEN@|")
      if [ -d ".git" ]; then
        git pull origin main 2>&1 | logger -t ansible-pull
      else
        git clone $REPO_WITH_AUTH . 2>&1 | logger -t ansible-pull
      fi
      ansible-playbook ansible/site.yml -i localhost, --connection=local -e "k3s_version=$K3S_VERSION" 2>&1 | logger -t ansible-pull
    permissions: "0755"

runcmd:
  - echo '*/15 * * * * root /usr/local/bin/ansible-pull-wrapper.sh' > /etc/cron.d/ansible-pull
  - sleep 60 && /usr/local/bin/ansible-pull-wrapper.sh &
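The `REPO_WITH_AUTH` rewrite used by the wrapper above can be sanity-checked in isolation; the URL and token here are hypothetical examples, not values from the repository:

```shell
REPO_URL="https://forgejo.example.net/owner/repo.git"   # hypothetical
FORGEJO_TOKEN="tok123"                                  # hypothetical

# Same sed rewrite as in ansible-pull-wrapper.sh: embed basic-auth credentials
# into the HTTPS remote so git clone/pull can authenticate non-interactively.
REPO_WITH_AUTH=$(echo "$REPO_URL" | sed "s|https://|https://git:$FORGEJO_TOKEN@|")
echo "$REPO_WITH_AUTH"   # → https://git:tok123@forgejo.example.net/owner/repo.git
```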


@@ -0,0 +1,50 @@
package_upgrade: true

packages:
  - ansible
  - git
  - curl
  - wget
  - ca-certificates
  - gnupg
  - lsb-release

users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - YOUR_SSH_PUBLIC_KEY
    groups: sudo

timezone: Europe/Paris

write_files:
  - path: /etc/node-role
    content: server
    permissions: "0644"
  - path: /etc/ansible-pull.conf
    content: |
      REPO_URL=YOUR_FORGEJO_REPO_URL
      FORGEJO_TOKEN=YOUR_FORGEJO_TOKEN
      K3S_VERSION=v1.28.5+k3s1
      K3S_TOKEN=YOUR_K3S_TOKEN
    permissions: "0600"
  - path: /usr/local/bin/ansible-pull-wrapper.sh
    content: |
      #!/bin/bash
      set -e
      source /etc/ansible-pull.conf
      export K3S_TOKEN
      export FORGEJO_TOKEN
      export REPO_URL
      WORK_DIR="/var/lib/ansible-local"
      mkdir -p $WORK_DIR
      cd $WORK_DIR
      REPO_WITH_AUTH=$(echo $REPO_URL | sed "s|https://|https://git:$FORGEJO_TOKEN@|")
      if [ -d ".git" ]; then
        git pull origin main 2>&1 | logger -t ansible-pull
      else
        git clone $REPO_WITH_AUTH . 2>&1 | logger -t ansible-pull
      fi
      ansible-playbook ansible/site.yml -i localhost, --connection=local -e "k3s_version=$K3S_VERSION" 2>&1 | logger -t ansible-pull
    permissions: "0755"

runcmd:
  - echo '*/15 * * * * root /usr/local/bin/ansible-pull-wrapper.sh' > /etc/cron.d/ansible-pull
  - sleep 60 && /usr/local/bin/ansible-pull-wrapper.sh &


@@ -0,0 +1,50 @@
package_upgrade: true

packages:
  - ansible
  - git
  - curl
  - wget
  - ca-certificates
  - gnupg
  - lsb-release

users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - YOUR_SSH_PUBLIC_KEY
    groups: sudo

timezone: Europe/Paris

write_files:
  - path: /etc/node-role
    content: server
    permissions: "0644"
  - path: /etc/ansible-pull.conf
    content: |
      REPO_URL=YOUR_FORGEJO_REPO_URL
      FORGEJO_TOKEN=YOUR_FORGEJO_TOKEN
      K3S_VERSION=v1.28.5+k3s1
      K3S_TOKEN=YOUR_K3S_TOKEN
    permissions: "0600"
  - path: /usr/local/bin/ansible-pull-wrapper.sh
    content: |
      #!/bin/bash
      set -e
      source /etc/ansible-pull.conf
      export K3S_TOKEN
      export FORGEJO_TOKEN
      export REPO_URL
      WORK_DIR="/var/lib/ansible-local"
      mkdir -p $WORK_DIR
      cd $WORK_DIR
      REPO_WITH_AUTH=$(echo $REPO_URL | sed "s|https://|https://git:$FORGEJO_TOKEN@|")
      if [ -d ".git" ]; then
        git pull origin main 2>&1 | logger -t ansible-pull
      else
        git clone $REPO_WITH_AUTH . 2>&1 | logger -t ansible-pull
      fi
      ansible-playbook ansible/site.yml -i localhost, --connection=local -e "k3s_version=$K3S_VERSION" 2>&1 | logger -t ansible-pull
    permissions: "0755"

runcmd:
  - echo '*/15 * * * * root /usr/local/bin/ansible-pull-wrapper.sh' > /etc/cron.d/ansible-pull
  - sleep 60 && /usr/local/bin/ansible-pull-wrapper.sh &


@@ -27,6 +27,9 @@ locals {
       #!/bin/bash
       set -e
       source /etc/ansible-pull.conf
+      export K3S_TOKEN
+      export FORGEJO_TOKEN
+      export REPO_URL
       WORK_DIR="/var/lib/ansible-local"
       mkdir -p $WORK_DIR
       cd $WORK_DIR
@@ -48,7 +51,7 @@ locals {
     },
     {
       path        = "/etc/ansible-pull.conf"
-      content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}"
+      content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}\nK3S_TOKEN=${var.k3s_token}"
       permissions = "0600"
     },
     {


@@ -4,7 +4,7 @@ terraform {
   required_providers {
     proxmox = {
       source  = "telmate/proxmox"
-      version = "~> 2.9"
+      version = "3.0.2-rc05"
     }
     local = {
       source = "hashicorp/local"
@@ -20,32 +20,44 @@ provider "proxmox" {
   pm_tls_insecure = var.proxmox_tls_insecure
 }

-# K3s Server VM on pve1
+# K3s Server VM on acemagician
 resource "proxmox_vm_qemu" "k3s_server_1" {
-  name        = "k3s-server-1"
-  target_node = "pve1"
-  clone       = var.ubuntu_template
+  vmid        = 1000
+  name        = "k3s-server-1"
+  target_node = "acemagician"
+  clone       = var.ubuntu_template
+  full_clone  = true

-  cores   = var.k3s_server_1_config.cores
-  sockets = 1
+  cpu {
+    cores   = var.k3s_server_1_config.cores
+    sockets = 1
+  }
   memory = var.k3s_server_1_config.memory
   agent  = 1

   boot   = "order=scsi0"
   scsihw = "virtio-scsi-single"
   onboot = true

   network {
+    id     = 0
     model  = "virtio"
     bridge = var.k3s_network_bridge
   }

   disk {
-    slot     = 0
-    size     = var.k3s_server_1_config.disk_size
-    type     = "scsi"
-    storage  = var.storage_pool
-    iothread = 1
+    slot     = "scsi0"
+    size     = var.k3s_server_1_config.disk_size
+    type     = "disk"
+    storage  = var.k3s_server_1_storage_pool
+    iothread = true
+  }
+
+  disk {
+    slot    = "ide2"
+    type    = "cloudinit"
+    storage = var.k3s_server_1_storage_pool
   }

   ipconfig0 = "ip=${var.k3s_server_1_config.ip},gw=${var.k3s_gateway}"

View file

@@ -53,6 +53,12 @@ variable "storage_pool" {
   type = string
 }
 
+variable "k3s_server_1_storage_pool" {
+  description = "Storage pool for k3s-server-1 disk (local-nvme for acemagician)"
+  type        = string
+  default     = "local-nvme"
+}
+
 variable "snippets_storage" {
   description = "Proxmox storage for cloud-init snippets"
   type        = string
@@ -82,3 +88,9 @@ variable "k3s_server_1_config" {
     disk_size = string
   })
 }
+
+variable "k3s_token" {
+  description = "K3s cluster token"
+  type        = string
+  sensitive   = true
+}
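Marking `k3s_token` as `sensitive` keeps it out of plan output, but a value still has to be supplied (e.g. in `terraform.tfvars` or via `TF_VAR_k3s_token`). K3s accepts any opaque string as the shared cluster token, so one convenient way to generate it — an illustration, not the only option — is:

```shell
# Generate a random 32-hex-char shared token for the K3s cluster.
# openssl is just one handy entropy source; any random secret works.
K3S_TOKEN=$(openssl rand -hex 16)
echo "k3s_token = \"$K3S_TOKEN\""   # paste this line into terraform.tfvars
```

The same value must reach all three VMs, which is exactly what threading `var.k3s_token` through the cloud-init conf file accomplishes.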

View file

@@ -27,6 +27,9 @@ locals {
     #!/bin/bash
     set -e
     source /etc/ansible-pull.conf
+    export K3S_TOKEN
+    export FORGEJO_TOKEN
+    export REPO_URL
     WORK_DIR="/var/lib/ansible-local"
     mkdir -p $WORK_DIR
     cd $WORK_DIR
@@ -48,7 +51,7 @@ locals {
   },
   {
     path        = "/etc/ansible-pull.conf"
-    content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}"
+    content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}\nK3S_TOKEN=${var.k3s_token}"
     permissions = "0600"
   },
   {

View file

@@ -4,7 +4,7 @@ terraform {
   required_providers {
     proxmox = {
       source  = "telmate/proxmox"
-      version = "~> 2.9"
+      version = "3.0.2-rc05"
     }
     local = {
       source = "hashicorp/local"
@@ -20,32 +20,44 @@ provider "proxmox" {
   pm_tls_insecure = var.proxmox_tls_insecure
 }
 
-# K3s Server VM on pve2
+# K3s Server VM on elitedesk
 resource "proxmox_vm_qemu" "k3s_server_2" {
+  vmid        = 1001
   name        = "k3s-server-2"
-  target_node = "pve2"
+  target_node = "elitedesk"
   clone       = var.ubuntu_template
+  full_clone  = true
 
-  cores   = var.k3s_server_2_config.cores
-  sockets = 1
+  cpu {
+    cores   = var.k3s_server_2_config.cores
+    sockets = 1
+  }
   memory = var.k3s_server_2_config.memory
   agent  = 1
 
   boot   = "order=scsi0"
   scsihw = "virtio-scsi-single"
   onboot = true
 
   network {
+    id     = 0
     model  = "virtio"
     bridge = var.k3s_network_bridge
   }
 
   disk {
-    slot     = 0
+    slot     = "scsi0"
     size     = var.k3s_server_2_config.disk_size
-    type     = "scsi"
-    storage  = var.storage_pool
-    iothread = 1
+    type     = "disk"
+    storage  = var.k3s_server_2_storage_pool
+    iothread = true
+  }
+
+  disk {
+    slot    = "ide2"
+    type    = "cloudinit"
+    storage = var.k3s_server_2_storage_pool
   }
 
   ipconfig0 = "ip=${var.k3s_server_2_config.ip},gw=${var.k3s_gateway}"

View file

@@ -53,6 +53,12 @@ variable "storage_pool" {
   type = string
 }
 
+variable "k3s_server_2_storage_pool" {
+  description = "Storage pool for k3s-server-2 disk (local-nvme for elitedesk)"
+  type        = string
+  default     = "local-nvme"
+}
+
 variable "snippets_storage" {
   description = "Proxmox storage for cloud-init snippets"
   type        = string
@@ -82,3 +88,9 @@ variable "k3s_server_2_config" {
     disk_size = string
   })
 }
+
+variable "k3s_token" {
+  description = "K3s cluster token"
+  type        = string
+  sensitive   = true
+}

View file

@@ -27,6 +27,9 @@ locals {
     #!/bin/bash
     set -e
     source /etc/ansible-pull.conf
+    export K3S_TOKEN
+    export FORGEJO_TOKEN
+    export REPO_URL
     WORK_DIR="/var/lib/ansible-local"
     mkdir -p $WORK_DIR
     cd $WORK_DIR
@@ -48,7 +51,7 @@ locals {
   },
   {
     path        = "/etc/ansible-pull.conf"
-    content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}"
+    content     = "REPO_URL=${var.forgejo_repo_url}\nFORGEJO_TOKEN=${var.forgejo_token}\nK3S_VERSION=${var.k3s_version}\nK3S_TOKEN=${var.k3s_token}"
     permissions = "0600"
   },
   {

View file

@@ -4,7 +4,7 @@ terraform {
   required_providers {
     proxmox = {
       source  = "telmate/proxmox"
-      version = "~> 2.9"
+      version = "3.0.2-rc05"
     }
     local = {
       source = "hashicorp/local"
@@ -20,32 +20,44 @@ provider "proxmox" {
   pm_tls_insecure = var.proxmox_tls_insecure
 }
 
-# etcd Witness VM on pve3
+# etcd Witness VM on thinkpad
 resource "proxmox_vm_qemu" "etcd_witness" {
+  vmid        = 1002
   name        = "etcd-witness"
-  target_node = "pve3"
+  target_node = "thinkpad"
   clone       = var.ubuntu_template
+  full_clone  = true
 
-  cores   = var.etcd_witness_config.cores
-  sockets = 1
+  cpu {
+    cores   = var.etcd_witness_config.cores
+    sockets = 1
+  }
   memory = var.etcd_witness_config.memory
   agent  = 1
 
   boot   = "order=scsi0"
   scsihw = "virtio-scsi-single"
   onboot = true
 
   network {
+    id     = 0
     model  = "virtio"
     bridge = var.k3s_network_bridge
   }
 
   disk {
-    slot     = 0
+    slot     = "scsi0"
     size     = var.etcd_witness_config.disk_size
-    type     = "scsi"
-    storage  = var.storage_pool
-    iothread = 1
+    type     = "disk"
+    storage  = var.etcd_witness_storage_pool
+    iothread = true
+  }
+
+  disk {
+    slot    = "ide2"
+    type    = "cloudinit"
+    storage = var.etcd_witness_storage_pool
   }
 
   ipconfig0 = "ip=${var.etcd_witness_config.ip},gw=${var.k3s_gateway}"

View file

@@ -53,6 +53,12 @@ variable "storage_pool" {
   type = string
 }
 
+variable "etcd_witness_storage_pool" {
+  description = "Proxmox storage pool for etcd witness VM disk (thinkpad uses local storage)"
+  type        = string
+  default     = "local-lvm"
+}
+
 variable "snippets_storage" {
   description = "Proxmox storage for cloud-init snippets"
   type        = string
@@ -82,3 +88,9 @@ variable "etcd_witness_config" {
     disk_size = string
   })
 }
+
+variable "k3s_token" {
+  description = "K3s cluster token"
+  type        = string
+  sensitive   = true
+}

View file

@@ -1,44 +1,36 @@
-# Copy this file to terraform.tfvars and fill in your values
-
-# Proxmox Configuration
 proxmox_api_url      = "https://192.168.100.10:8006/api2/json"
-proxmox_token_id     = "root@pam!terraform"
+proxmox_token_id     = "root@pam!opentofu"
 proxmox_token_secret = "your-proxmox-token-secret"
 proxmox_tls_insecure = true
 
-# SSH Access
 ssh_public_key = "ssh-ed25519 AAAAC3... your-email@example.com"
 
-# Forgejo Configuration
 forgejo_token    = "your-forgejo-token"
 forgejo_repo_url = "ssh://git@forgejo.tellserv.fr:222/Tellsanguis/infra.git"
 
-# K3s Version
 k3s_version = "v1.28.5+k3s1"
+k3s_token   = "your-k3s-cluster-token"
 
-# Template and Storage
 ubuntu_template  = "ubuntu-2404-cloudinit"
 storage_pool     = "linstor_storage"
 snippets_storage = "local"
 
-# Network
 k3s_network_bridge = "k3s"
 k3s_gateway        = "10.100.20.1"
 k3s_dns            = ["10.100.20.1", "1.1.1.1"]
 
-# VM Configurations
 k3s_server_1_config = {
   ip        = "10.100.20.10/24"
   cores     = 6
   memory    = 12288
-  disk_size = "100G"
+  disk_size = "40G"
 }
 
 k3s_server_2_config = {
   ip        = "10.100.20.20/24"
   cores     = 6
   memory    = 12288
-  disk_size = "100G"
+  disk_size = "40G"
 }
 
 etcd_witness_config = {