Add OpenClassrooms project documentation (P02-P13) with bilingual support

- Add all project documentation pages in French and English
- Include PDF viewers for presentations and documents (P10, P12)
- Add collapsible sections for scripts and logs (P10)
- Add static assets for all projects
- Update sidebars with new projets-openclassrooms category
- Add npm start:en script for testing English locale
Committed by Tellsanguis on 2025-11-22 16:18:20 +01:00
parent 40a8985942, commit ed989ff004
86 changed files with 24243 additions and 1 deletion

# OpenClassrooms Projects
This section brings together the **12 technical projects** completed as part of my **Systems, Networks and Security Administrator** training at OpenClassrooms (November 2024 - November 2025).
Each project corresponds to a professional simulation with concrete deliverables: technical documentation, configurations, scripts and presentations.
---
## Overview
| Project | Topic | Key Technologies |
|---------|-------|------------------|
| P2 | ITSM Management | GLPI, ITIL |
| P3 | Network Architecture | VLAN, Firewall, Draw.io |
| P4 | N-tier Architecture | Docker, LAMP, DNS |
| P5 | Web Security | Apache, Fail2ban, SSL, vsftpd |
| P6 | Remote Site | VPN IPsec, AD DS, RODC, GPO |
| P7 | Cisco Network | VLAN, ACL, NAT, IPv6, Packet Tracer |
| P8 | Monitoring | Nagios, Rsyslog |
| P9 | Fleet Management | Ansible, GLPI, AGDLP |
| P10 | Backups | Bash, Rsync, Cron |
| P11 | ANSSI Compliance | IS Mapping, Architecture |
| P12 | AD Security Audit | Pentesting, Mimikatz, Kerberoasting |
| P13 | Cloud Migration | AWS, TAD, Gantt |

---
sidebar_position: 2
---
# P2 - Daily Request Management
## Context
Implementation of a request and incident management system following ITIL best practices, using the GLPI tool.
## Objectives
- Configure and use GLPI for ticket management
- Apply ITIL methodology for incident and request handling
- Set up automated IT inventory
- Create processing procedures and flowcharts
## Technologies Used
- **GLPI**: asset management and ticketing
- **GLPI Agent**: automated inventory
- **ITIL**: IT service management methodology
## Deliverables
<details>
<summary>GLPI Database Export (SQL)</summary>
The SQL file is large (complete GLPI database export). Here is an excerpt of its structure:
```sql
-- MariaDB dump 10.19 Distrib 10.11.6-MariaDB, for debian-linux-gnu (x86_64)
--
-- Host: localhost Database: glpi
-- ------------------------------------------------------
-- Server version 10.11.6-MariaDB-0+deb12u1
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
-- Table structure for table `glpi_agents`
CREATE TABLE `glpi_agents` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`deviceid` varchar(255) NOT NULL,
`entities_id` int(10) unsigned NOT NULL DEFAULT 0,
`name` varchar(255) DEFAULT NULL,
`agenttypes_id` int(10) unsigned NOT NULL,
`last_contact` timestamp NULL DEFAULT NULL,
`version` varchar(255) DEFAULT NULL,
-- ... other columns
PRIMARY KEY (`id`),
UNIQUE KEY `deviceid` (`deviceid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
[Download complete SQL file](/assets/projets-oc/p02/bene_mael_1_export_122024.sql)
</details>
<details>
<summary>GLPI Agent Presentation (PDF)</summary>
<iframe src="/assets/projets-oc/p02/bene_mael_3_agent_GLPI_122024.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Flowcharts - Request Processing Workflows (PDF)</summary>
<iframe src="/assets/projets-oc/p02/bene_mael_4_logigramme_122024.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
## Skills Acquired
- ITSM tool configuration
- Application of ITIL processes (incident, request, problem management)
- Technical procedure documentation
- Automated inventory implementation

---
sidebar_position: 3
---
# P3 - Enterprise Network Design
## Context
Complete network architecture design for a startup (Hill Start), including physical and logical diagrams, an IP addressing plan and firewall filtering rules.
## Objectives
- Design a multi-VLAN network architecture adapted to business needs
- Develop IP addressing plans
- Define firewall filtering rules
- Produce complete technical documentation (TAD)
## Technologies Used
- **VLAN**: network segmentation
- **Firewall**: inter-VLAN filtering rules
- **Draw.io**: architecture diagrams
- **Subnetting**: IPv4 addressing plans
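As an illustration of the subnetting work, VLAN subnets like those in the addressing plan can be derived with Python's `ipaddress` module. The site block and VLAN names below are hypothetical; the real plan is in the Excel deliverable.

```python
import ipaddress

# Hypothetical site block; the actual addressing plan is in the Excel deliverable
site = ipaddress.ip_network("10.0.0.0/16")

# Carve one /24 per VLAN (sizes and names are illustrative)
vlans = dict(zip(
    ["users", "servers", "voip", "management"],
    site.subnets(new_prefix=24),
))

for name, net in vlans.items():
    # Usable hosts exclude the network and broadcast addresses
    first_host = next(net.hosts())
    print(f"{name}: {net} gateway={first_host} usable_hosts={net.num_addresses - 2}")
```

Each `/24` yields 254 usable addresses, with the first host conventionally reserved for the VLAN gateway.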
## Deliverables
<details>
<summary>Physical Diagram</summary>
![Physical network architecture diagram](/assets/projets-oc/p03/schemaphysique.jpg)
</details>
<details>
<summary>Logical Diagram</summary>
![Logical network architecture diagram](/assets/projets-oc/p03/schemalogique.jpg)
</details>
<details>
<summary>IP Addressing Plan (Excel)</summary>
The Excel file contains the complete IP addressing plan.
[Download addressing plan](/assets/projets-oc/p03/plan_adressagereseau.xlsx)
</details>
<details>
<summary>Firewall Rules (Excel)</summary>
The Excel file contains inter-VLAN firewall filtering rules.
[Download firewall rules](/assets/projets-oc/p03/regles_firewall.xlsx)
</details>
## Skills Acquired
- Analysis of an organization's network requirements
- Segmented LAN architecture design
- Subnet calculation and addressing plans
- Standardized technical documentation writing
- Network security policy definition

---
sidebar_position: 4
---
# P4 - Docker N-tier Architecture
## Context
Deployment of a containerized n-tier architecture for the company BeeSafe, including a web server, a database and a DNS server.
## Objectives
- Containerize a LAMP web application
- Configure a DNS server with Bind9
- Set up a reverse proxy
- Document the technical architecture
## Technologies Used
- **Docker / Docker Compose**: containerization
- **Apache/PHP**: web server
- **MySQL**: database
- **Bind9**: DNS server
## Deployed Architecture
```
+-------------+
| Client |
+------+------+
|
+------v------+
| DNS Bind9 |
+------+------+
|
+------v------+
| Apache |
| + PHP |
+------+------+
|
+------v------+
| MySQL |
+-------------+
```
## Deliverables
<details>
<summary>Architecture Diagram (PDF)</summary>
<iframe src="/assets/projets-oc/p04/schema_archi_ntiers.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Docker Compose</summary>
```yaml
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: apache_php
ports:
- "80:80"
volumes:
- ./web:/var/www/html
- ./apache/beesafe.conf:/etc/apache2/sites-available/beesafe.conf
depends_on:
- db
- dns
networks:
- backend
restart: unless-stopped
db:
image: mysql:8.0
container_name: mysql
environment:
MYSQL_ROOT_PASSWORD: rootclassroom
MYSQL_DATABASE: beesafe_db
volumes:
- db_data:/var/lib/mysql
- ./sql:/docker-entrypoint-initdb.d
networks:
- backend
restart: unless-stopped
dns:
image: internetsystemsconsortium/bind9:9.18
container_name: bind9
ports:
- "53:53/tcp"
- "53:53/udp"
volumes:
- ./bind9/etc:/etc/bind
- ./bind9/cache:/var/cache/bind
- ./bind9/lib:/var/lib/bind
- ./bind9/log:/var/log
command: ["-g"]
networks:
- backend
restart: unless-stopped
networks:
backend:
driver: bridge
volumes:
db_data:
```
</details>
<details>
<summary>Dockerfile</summary>
```dockerfile
FROM php:8.0-apache
# Update and install dependencies
RUN apt-get update && apt-get install -y \
libzip-dev \
unzip \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable mysqli
# Enable beesafe.conf site and disable default 000-default.conf site
RUN a2ensite beesafe.conf && \
a2dissite 000-default.conf && \
service apache2 reload
# Clean unnecessary files to reduce image size
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Command to keep Apache running
CMD ["apache2-foreground"]
```
</details>
## Skills Acquired
- Multi-tier application containerization
- DNS server configuration
- Orchestration with Docker Compose
- Decoupled application architecture

---
sidebar_position: 5
---
# P5 - Web Services Security
## Context
Securing Rainbow Bank's web infrastructure: HTTPS implementation, attack protection, and encrypted FTP server configuration.
## Objectives
- Configure Apache with SSL/TLS (HTTPS)
- Implement attack protection (Fail2ban, mod_evasive)
- Deploy a secure FTP server (vsftpd)
- Document security configurations
## Technologies Used
- **Apache**: web server with mod_ssl, mod_evasive
- **Let's Encrypt / SSL Certificates**: HTTPS encryption
- **Fail2ban**: brute-force protection
- **vsftpd**: secure FTP server (FTPS)
- **Netplan**: multi-NIC network configuration
## Key Configurations
### HTTPS VirtualHost with HSTS
```apache
<VirtualHost *:443>
ServerName extranet.rainbowbank.com
SSLEngine on
SSLCertificateFile /etc/ssl/certs/extranet.crt
SSLCertificateKeyFile /etc/ssl/private/extranet.key
Header always set Strict-Transport-Security "max-age=31536000"
</VirtualHost>
```
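The `max-age=31536000` directive tells browsers to enforce HTTPS for one year. As a sketch (not part of the project deliverables), a header check like the following can validate that a response carries a sufficiently long HSTS policy; a live check would fetch the headers over HTTPS first.

```python
HSTS_MIN_AGE = 31536000  # one year, matching the VirtualHost above

def hsts_ok(headers: dict) -> bool:
    """Return True if the response headers carry HSTS with max-age >= one year."""
    value = headers.get("Strict-Transport-Security", "")
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1]) >= HSTS_MIN_AGE
    return False

# Example against captured response headers
print(hsts_ok({"Strict-Transport-Security": "max-age=31536000"}))  # True
```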
### Fail2ban Protection
```ini
[apache-auth]
enabled = true
port = http,https
filter = apache-auth
maxretry = 3
bantime = 3600
```
## Deliverables
<details>
<summary>Web Services Configuration (ZIP)</summary>
Archive containing all web configuration files.
[Download configuration archive](/assets/projets-oc/p05/bene_mael_1_config_service_web_022025.zip)
</details>
<details>
<summary>vsftpd Configuration</summary>
```ini
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/ssl/certs/rainbowbank.com.crt
rsa_private_key_file=/etc/ssl/private/rainbowbank.com.key
pasv_enable=YES
pasv_min_port=10000
pasv_max_port=10100
log_ftp_protocol=YES
xferlog_enable=YES
xferlog_std_format=NO
xferlog_file=/var/log/vsftpd.log
dual_log_enable=YES
```
</details>
<details>
<summary>Fail2ban Configuration (jail.local)</summary>
```ini
[DEFAULT]
backend = auto
banaction = iptables-multiport
protocol = tcp
chain = INPUT
action = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
[apache-custom]
enabled = true
port = http,https,5501,5502
filter = apache-custom
logpath = /var/log/apache2/*_access.log
maxretry = 3
findtime = 300
bantime = 300
[nginx-custom]
enabled = true
port = http,https,5501,5502
filter = nginx-custom
logpath = /var/log/nginx/access.log
maxretry = 3
findtime = 300
bantime = 300
[vsftpd-custom]
enabled = true
port = ftp,ftp-data,ftps,ftps-data
filter = vsftpd-custom
logpath = /var/log/vsftpd.log
maxretry = 3
findtime = 300
bantime = 300
```
</details>
<details>
<summary>iptables Rules</summary>
```bash
# Generated by iptables-save v1.8.10 (nf_tables) on Tue Feb 18 18:27:58 2025
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2:240]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i ens33 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i ens34 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i ens34 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i ens35 -p tcp -m tcp --dport 5501 -j ACCEPT
-A INPUT -i ens35 -p tcp -m tcp --dport 5502 -j ACCEPT
-A INPUT -i ens35 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i ens35 -p tcp -m tcp --dport 21 -j ACCEPT
-A INPUT -i ens35 -p tcp -m tcp --dport 10000:10100 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -j LOG --log-prefix "IPTables-Dropped: "
-A FORWARD -i ens34 -o ens33 -j ACCEPT
-A FORWARD -i ens35 -o ens33 -j ACCEPT
-A FORWARD -i ens33 -o ens34 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i ens33 -o ens35 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o ens33 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
COMMIT
# Completed on Tue Feb 18 18:27:58 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Tue Feb 18 18:27:58 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o ens33 -j MASQUERADE
COMMIT
# Completed on Tue Feb 18 18:27:58 2025
```
</details>
## Skills Acquired
- SSL/TLS certificate deployment
- Apache web server hardening
- Protection system configuration (IPS)
- Secure FTP service implementation
- Multi-interface network management

---
sidebar_position: 6
---
# P6 - Remote Site Connection
## Context
Integration of a remote site into the existing information system via site-to-site VPN, with deployment of a Read-Only Domain Controller (RODC) and application of Group Policies.
## Objectives
- Configure a site-to-site IPsec VPN with pfSense
- Deploy a RODC (Read-Only Domain Controller)
- Extend Active Directory to the remote site
- Apply GPOs adapted to the remote context
- Set up automated backups
## Technologies Used
- **pfSense**: firewall and IPsec VPN
- **Windows Server**: AD DS, RODC
- **Active Directory**: centralized identity management
- **GPO**: Group Policies
- **PowerShell**: backup scripts (Robocopy)
- **VMware**: virtualization
## Architecture
```
Main Site Remote Site
+-------------+ +-------------+
| DC | | RODC |
| (AD DS) | | (Read |
+------+------+ | Only) |
| +------+------+
+------v------+ VPN IPsec +------v------+
| pfSense |<--------------->| pfSense |
+-------------+ +-------------+
```
## Deliverables
<details>
<summary>GPO Work Hours Script (PowerShell)</summary>
```powershell
<#
.DESCRIPTION
Script to set login hours from 6am to 8pm every day of the week
.NOTES
Creation date: 17/03/2025
.AUTHOR
BENE Mael
.VERSION
1.0
#>
# Recursive retrieval of users (includes subgroup members)
$users = Get-ADGroupMember -Identity OpenBank -Recursive | Select-Object -ExpandProperty SamAccountName
# Create 21-byte array (168 hours in a week)
$LogonHours = New-Object byte[] 21
# Sunday = index 0, Monday = index 1, ..., Saturday = index 6
# Set login hours (6am to 8pm) for all days of the week
for ($day = 0; $day -le 6; $day++) { # Sunday (0) to Saturday (6)
    for ($hour = 5; $hour -lt 19; $hour++) { # 05:00-19:00 UTC = 6am-8pm in UTC+1 (logonHours is stored in UTC)
$byteIndex = [math]::Floor(($day * 24 + $hour) / 8)
$bitIndex = ($day * 24 + $hour) % 8
$LogonHours[$byteIndex] = $LogonHours[$byteIndex] -bor (1 -shl $bitIndex)
}
}
# Apply restriction to user
foreach ($user in $users)
{
Set-ADUser -Identity $user -Replace @{logonHours=$LogonHours}
}
```
</details>
<details>
<summary>GPO Work Hours Screenshot</summary>
![GPO work hours](/assets/projets-oc/p06/BENE_Mael_gpo_horairesdetravail.png)
</details>
<details>
<summary>GPO Flux Installation Script (Batch)</summary>
```batch
@echo off
REM User verification
if "%username%"=="agarcia" (
echo Installing flux-setup.exe for %username%
winget install -e --id flux.flux --silent --accept-package-agreements --accept-source-agreements
) else (
echo Installation not applicable for this user.
exit /b
)
```
</details>
<details>
<summary>GPO Flux Installation Screenshot</summary>
![GPO Flux installation](/assets/projets-oc/p06/BENE_Mael_gpo_installflux.png)
</details>
<details>
<summary>GPO Removable Disk Restriction Screenshot</summary>
![GPO removable disk restriction](/assets/projets-oc/p06/BENE_Mael_gpo_restrictiondisqueamovible.png)
</details>
<details>
<summary>pfSense Nantes VPN Configuration (XML)</summary>
```xml
<ipsec>
<client></client>
<phase1>
<ikeid>1</ikeid>
<iketype>ikev2</iketype>
<interface>opt1</interface>
<remote-gateway>194.0.0.1</remote-gateway>
<protocol>inet</protocol>
<myid_type>address</myid_type>
<myid_data>194.0.0.2</myid_data>
<peerid_type>address</peerid_type>
<peerid_data>194.0.0.1</peerid_data>
<encryption>
<item>
<encryption-algorithm>
<name>aes</name>
<keylen>256</keylen>
</encryption-algorithm>
<hash-algorithm>sha256</hash-algorithm>
<prf-algorithm>sha256</prf-algorithm>
<dhgroup>14</dhgroup>
</item>
</encryption>
<lifetime>28800</lifetime>
<pre-shared-key>bc4b31bbe6ac6eba857a44b8941ed31389cdb6c678635384b676ae34</pre-shared-key>
<authentication_method>pre_shared_key</authentication_method>
<descr><![CDATA[Tunnel to Paris]]></descr>
<nat_traversal>on</nat_traversal>
<mobike>off</mobike>
<dpd_delay>10</dpd_delay>
<dpd_maxfail>5</dpd_maxfail>
</phase1>
<phase2>
<ikeid>1</ikeid>
<uniqid>67cf001195fba</uniqid>
<mode>tunnel</mode>
<reqid>1</reqid>
<localid>
<type>network</type>
<address>10.0.2.0</address>
<netbits>24</netbits>
</localid>
<remoteid>
<type>network</type>
<address>10.0.1.0</address>
<netbits>24</netbits>
</remoteid>
<protocol>esp</protocol>
<encryption-algorithm-option>
<name>aes</name>
<keylen>256</keylen>
</encryption-algorithm-option>
<hash-algorithm-option>hmac_sha256</hash-algorithm-option>
<pfsgroup>14</pfsgroup>
<lifetime>3600</lifetime>
<pinghost>10.0.1.1</pinghost>
<keepalive>disabled</keepalive>
<descr><![CDATA[LAN Paris-Nantes traffic]]></descr>
</phase2>
</ipsec>
```
</details>
<details>
<summary>pfSense Paris VPN Configuration (XML)</summary>
```xml
<ipsec>
<client></client>
<phase1>
<ikeid>1</ikeid>
<iketype>ikev2</iketype>
<interface>opt1</interface>
<remote-gateway>194.0.0.2</remote-gateway>
<protocol>inet</protocol>
<myid_type>address</myid_type>
<myid_data>194.0.0.1</myid_data>
<peerid_type>address</peerid_type>
<peerid_data>194.0.0.2</peerid_data>
<encryption>
<item>
<encryption-algorithm>
<name>aes</name>
<keylen>256</keylen>
</encryption-algorithm>
<hash-algorithm>sha256</hash-algorithm>
<prf-algorithm>sha256</prf-algorithm>
<dhgroup>14</dhgroup>
</item>
</encryption>
<lifetime>28800</lifetime>
<pre-shared-key>bc4b31bbe6ac6eba857a44b8941ed31389cdb6c678635384b676ae34</pre-shared-key>
<authentication_method>pre_shared_key</authentication_method>
<descr><![CDATA[Tunnel to Nantes]]></descr>
<nat_traversal>on</nat_traversal>
<mobike>off</mobike>
<dpd_delay>10</dpd_delay>
<dpd_maxfail>5</dpd_maxfail>
</phase1>
<phase2>
<ikeid>1</ikeid>
<uniqid>67ceff22aa6e4</uniqid>
<mode>tunnel</mode>
<reqid>1</reqid>
<localid>
<type>network</type>
<address>10.0.1.0</address>
<netbits>24</netbits>
</localid>
<remoteid>
<type>network</type>
<address>10.0.2.0</address>
<netbits>24</netbits>
</remoteid>
<protocol>esp</protocol>
<encryption-algorithm-option>
<name>aes</name>
<keylen>256</keylen>
</encryption-algorithm-option>
<hash-algorithm-option>hmac_sha256</hash-algorithm-option>
<pfsgroup>14</pfsgroup>
<lifetime>3600</lifetime>
<pinghost>10.0.2.1</pinghost>
<keepalive>disabled</keepalive>
<descr><![CDATA[LAN Paris-Nantes traffic]]></descr>
</phase2>
</ipsec>
```
</details>
<details>
<summary>PowerShell Backup Script (Robocopy)</summary>
```powershell
<#
.DESCRIPTION
Script to copy data from drive D to G:\Mon Drive\projet6
.NOTES
Creation date: 17/03/2025
.AUTHOR
BENE Mael
.VERSION
1.1
#>
# Source and destination paths
$SourcePath = "D:\"
$DestinationPath = "G:\Mon Drive\projet6"
# Copy files with Robocopy
Write-Host "Copying data from $SourcePath to $DestinationPath..." -ForegroundColor Cyan
try {
    Robocopy.exe "$SourcePath" "$DestinationPath" /E /COPY:DAT /R:2 /W:5 /MT:8 /XD "System Volume Information" '$RECYCLE.BIN' "Recovery" # System folders excluded; '$RECYCLE.BIN' is single-quoted so PowerShell keeps the literal $
# Detailed result display
switch ($LASTEXITCODE) {
0 { Write-Host "No files copied - All files were already synchronized." -ForegroundColor Green }
1 { Write-Host "Files copied successfully." -ForegroundColor Green }
2 { Write-Host "Additional files detected." -ForegroundColor Yellow }
4 { Write-Host "Mismatched files detected." -ForegroundColor Yellow }
8 { Write-Host "Copy errors detected." -ForegroundColor Red }
16 { Write-Host "Serious copy error." -ForegroundColor Red }
default { Write-Host "Robocopy exit code: $LASTEXITCODE" -ForegroundColor Magenta }
}
} catch {
Write-Host "Error executing Robocopy: $_" -ForegroundColor Red
}
Write-Host "Operation completed." -ForegroundColor Cyan
```
</details>
## Skills Acquired
- Site-to-site IPsec VPN tunnel configuration
- RODC deployment and management
- Active Directory infrastructure extension
- GPO design for remote sites
- Backup automation with PowerShell

---
sidebar_position: 7
---
# P7 - Cisco Equipment Configuration
## Context
Complete configuration of a Cisco network infrastructure: VLANs, ACLs, link aggregation, NAT/PAT and IPv6 addressing.
## Objectives
- Configure VLANs and inter-VLAN routing
- Implement ACLs for traffic filtering
- Configure link aggregation (EtherChannel)
- Implement NAT/PAT for Internet access
- Deploy dual-stack IPv6 addressing
## Technologies Used
- **Cisco IOS**: equipment operating system
- **VLAN / Trunk**: network segmentation
- **ACL**: Access Control Lists
- **EtherChannel (LACP)**: link aggregation
- **NAT/PAT**: address translation
- **IPv6**: next-generation addressing
- **Packet Tracer**: network simulation
## Configuration Example - ACL
```cisco
ip access-list extended VLAN10_TO_SERVERS
permit tcp 10.0.10.0 0.0.0.255 host 10.0.20.10 eq 80
permit tcp 10.0.10.0 0.0.0.255 host 10.0.20.10 eq 443
permit icmp 10.0.10.0 0.0.0.255 10.0.20.0 0.0.0.255
deny ip any any log
```
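The `0.0.0.255` patterns above are Cisco wildcard masks, i.e. the bitwise inverse of the subnet mask. A quick sketch of the conversion (not a project deliverable):

```python
import ipaddress

def wildcard_mask(prefix_len: int) -> str:
    """Cisco wildcard mask for a given prefix length: the inverse of the subnet mask."""
    net = ipaddress.ip_network(f"0.0.0.0/{prefix_len}")
    return str(net.hostmask)

print(wildcard_mask(24))  # 0.0.0.255, as used in the ACL above
print(wildcard_mask(30))  # 0.0.0.3
```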
## Deliverables
<details>
<summary>Configuration Documentation (PDF)</summary>
<iframe src="/assets/projets-oc/p07/bene_mael_1_config_equipements_052025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Packet Tracer Lab</summary>
Cisco Packet Tracer network simulation file (.pkt).
[Download Packet Tracer lab](/assets/projets-oc/p07/bene_mael_2_maquette_packet_tracer_052025.pkt)
</details>
<details>
<summary>Recommendations (PDF)</summary>
<iframe src="/assets/projets-oc/p07/bene_mael_3_preconisations_052025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
## Skills Acquired
- Advanced Cisco equipment configuration
- VLAN design and implementation
- ACL writing and application
- Link aggregation configuration
- NAT/PAT and IPv6 mastery

---
sidebar_position: 8
---
# P8 - Monitoring with Nagios
## Context
Implementation of a monitoring solution for MediaSante: Nagios deployment with custom probes and log centralization with Rsyslog.
## Objectives
- Install and configure Nagios Core
- Create custom monitoring probes
- Centralize logs with Rsyslog
- Define SLA indicators and produce reports
## Technologies Used
- **Nagios Core**: infrastructure monitoring
- **NRPE**: remote probe execution
- **Rsyslog**: log centralization
- **SNMP**: network monitoring
## Configured Probes
| Service | Warning Threshold | Critical Threshold | Operator Action |
|---------|-------------------|-------------------|-----------------|
| CPU | > 80% | > 95% | Identify consuming processes |
| RAM | > 85% | > 95% | Check memory leaks |
| Disk | > 80% | > 90% | Cleanup or extension |
| HTTP | latency > 2s | unavailable | Service restart |
| MySQL | connections > 80% | > 95% | Query analysis |
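Nagios probes report their verdict through exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). A minimal sketch of the threshold logic behind the table above, using the CPU thresholds as an example (the actual probes are in the deliverables):

```python
def check_threshold(value: float, warn: float, crit: float) -> int:
    """Nagios-style probe verdict: 0 = OK, 1 = WARNING, 2 = CRITICAL."""
    if value > crit:
        print(f"CRITICAL - usage {value}% > {crit}%")
        return 2
    if value > warn:
        print(f"WARNING - usage {value}% > {warn}%")
        return 1
    print(f"OK - usage {value}%")
    return 0

# CPU thresholds from the table above: warning above 80%, critical above 95%
status = check_threshold(87.0, warn=80, crit=95)
# a real NRPE plugin would end with sys.exit(status) so Nagios reads the verdict
```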
## Deliverables
<details>
<summary>Nagios Configuration (screenshot)</summary>
![Nagios Configuration](/assets/projets-oc/p08/BENE_Mael_1_config_nagios_062025.png)
</details>
<details>
<summary>Rsyslog Configuration (archive)</summary>
Archive containing Rsyslog configuration files for log centralization.
[Download Rsyslog configuration archive](/assets/projets-oc/p08/BENE_Mael_2_config_Rsyslog_062025.tar.gz)
</details>
<details>
<summary>SLA Indicators (PDF)</summary>
<iframe src="/assets/projets-oc/p08/BENE_Mael_3_indicateurs_062025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Probes Documentation (PDF)</summary>
<iframe src="/assets/projets-oc/p08/BENE_Mael_4_documentation_062025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
## Skills Acquired
- Monitoring solution deployment
- Custom probe creation
- Log centralization and analysis
- Performance indicator definition (KPI/SLA)
- Availability report production

---
sidebar_position: 9
---
# P9 - Fleet Management with Ansible
## Context
Automation of IT fleet management for the company Barzini: multi-OS deployment with Ansible, GLPI integration and AGDLP architecture implementation.
## Objectives
- Automate administration tasks with Ansible
- Manage a heterogeneous fleet (Windows/Linux)
- Integrate inventory with GLPI
- Implement an AGDLP permissions architecture
## Technologies Used
- **Ansible**: multi-OS automation
- **GLPI**: fleet management and inventory
- **Active Directory**: identity management (AGDLP)
- **PowerShell / Bash**: complementary scripts
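In the AGDLP model, Accounts go into Global groups, Global groups are nested in Domain Local groups, and permissions are granted to the Domain Local groups only. A toy sketch of that resolution chain (group and share names below are hypothetical, not the project's actual structure):

```python
# AGDLP chain: Accounts -> Global groups -> Domain Local groups <- Permissions
global_groups = {"G_Developpeurs": {"mbene"}}          # accounts per global group
local_groups = {"DL_Dev_RW": {"G_Developpeurs"}}       # global groups nested in DL groups
acls = {r"\\SRV-AD\Developpeurs": {"DL_Dev_RW": "Modify"}}  # rights granted to DL groups only

def effective_access(user: str, resource: str) -> set:
    """Resolve a user's rights on a resource by walking the AGDLP chain."""
    rights = set()
    for dl_group, right in acls.get(resource, {}).items():
        for g_group in local_groups.get(dl_group, set()):
            if user in global_groups.get(g_group, set()):
                rights.add(right)
    return rights

print(effective_access("mbene", r"\\SRV-AD\Developpeurs"))  # {'Modify'}
```

Granting rights only at the Domain Local level keeps resource ACLs stable when people change teams: only global group membership moves.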
## Playbook Examples
### Multi-OS Update
```yaml
- name: Linux Update
hosts: linux
become: yes
tasks:
- name: Update apt cache and upgrade
apt:
update_cache: yes
upgrade: dist
- name: Windows Update
hosts: windows
tasks:
- name: Install Windows updates
win_updates:
category_names:
- SecurityUpdates
- CriticalUpdates
```
### CIFS Share Mount
```yaml
- name: Mount Windows Share
ansible.posix.mount:
path: /mnt/share
src: "//server/share"
fstype: cifs
opts: "credentials=/root/.smbcredentials,uid=1000"
state: mounted
```
## Deliverables
<details>
<summary>Ansible Report (PDF)</summary>
<iframe src="/assets/projets-oc/p09/rapport_ansible.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Ansible Playbooks (ZIP)</summary>
Archive containing all Ansible playbooks for the project.
[Download Ansible playbooks](/assets/projets-oc/p09/ansible.zip)
</details>
<details>
<summary>Linux Share Mount Script (Bash)</summary>
```bash
#!/bin/bash
# ============================================================================
# Script : mount_shares.sh
# Version : 1.0
# Date : 14/07/2025
# Author : BENE Mael
# Description: Automatic mounting of personal and group CIFS shares
# ============================================================================
DOMAIN="BARZINI.INTERNAL"
SERVER="SRV-AD"
user="$(id -un)"
uid="$(id -u)"
gid="$(id -g)"
groups="$(id -Gn)"
user_home="$HOME"  # base path for the mount points created below
# Fixed list of available group shares
share_names=("Admins" "Audio" "Commercial" "Direction" "Developpeurs" "Graphisme" "Responsables" "Tests")
# Personal share mount
home_share="//${SERVER}/${user}\$"
home_mount="${user_home}/Dossier_perso"
echo "Mounting personal folder: $home_share"
if [ ! -d "$home_mount" ]; then
mkdir -p "$home_mount"
chown "$uid:$gid" "$home_mount"
fi
if ! mountpoint -q "$home_mount"; then
sudo mount -t cifs -o "sec=krb5,cruid=${user},uid=${uid},gid=${gid},nofail" "$home_share" "$home_mount" && \
echo "Personal share mounted on $home_mount" || \
echo "Failed to mount personal share"
else
echo "Already mounted: $home_mount"
fi
# Group share mounting
for share in "${share_names[@]}"; do
for grp in $groups; do
clean_grp=$(echo "$grp" | tr '[:upper:]' '[:lower:]')
clean_share=$(echo "$share" | tr '[:upper:]' '[:lower:]')
if [[ "$clean_grp" == *"$clean_share"* ]]; then
share_path="//${SERVER}/${share}"
mount_point="${user_home}/${share}"
echo "Attempting to mount $share_path"
if [ ! -d "$mount_point" ]; then
mkdir -p "$mount_point"
chown "$uid:$gid" "$mount_point"
fi
if ! mountpoint -q "$mount_point"; then
sudo mount -t cifs -o "sec=krb5,cruid=${user},uid=${uid},gid=${gid},nofail" "$share_path" "$mount_point" && \
echo "Share mounted: $mount_point" || \
echo "Failed to mount: $share_path"
else
echo "Already mounted: $mount_point"
fi
break
fi
done
done
```
</details>
<details>
<summary>Windows Share Mount Script (PowerShell)</summary>
```powershell
# ============================================================================
# Script : MapDrives.ps1
# Version : 1.1
# Date : 29/07/2025
# Author : BENE Mael
# Description: Automatic mounting of personal and group network shares
# ============================================================================
# Function to remove accents (normalization)
function Remove-Accents($text) {
$normalized = [System.Text.NormalizationForm]::FormD
$string = [System.String]::new($text).Normalize($normalized)
$sb = New-Object System.Text.StringBuilder
foreach ($c in $string.ToCharArray()) {
if (-not [Globalization.CharUnicodeInfo]::GetUnicodeCategory($c).ToString().StartsWith("NonSpacingMark")) {
[void]$sb.Append($c)
}
}
return $sb.ToString().Normalize([System.Text.NormalizationForm]::FormC)
}
# Mapping table without accents in keys
$groupShareMap = @{
"G_Admins" = "Admins"
"G_Audio" = "Audio"
"G_Commercial" = "Commercial"
"G_Direction" = "Direction"
"G_Developpeurs" = "Developpeurs"
"G_Graphisme" = "Graphisme"
"G_Responsables" = "Responsables"
"G_Testeurs" = "Tests"
}
# Get user and AD groups
$user = $env:USERNAME
$userGroupsRaw = ([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups | ForEach-Object {
$_.Translate([System.Security.Principal.NTAccount]).Value.Split('\')[-1]
}
# Normalize group names
$userGroups = @()
foreach ($grp in $userGroupsRaw) {
$grpNorm = Remove-Accents $grp
$userGroups += $grpNorm
}
# Personal share mount
$homeShare = "\\SRV-AD\$user`$"
Write-Host "Attempting to mount: $homeShare"
net use * $homeShare /persistent:no
if ($LASTEXITCODE -eq 0) {
Write-Host "Personal share mounted successfully."
} else {
Write-Host "Failed to mount personal share."
}
# Group share mounting
foreach ($group in $userGroups) {
if ($groupShareMap.ContainsKey($group)) {
$shareName = $groupShareMap[$group]
$sharePath = "\\SRV-AD\$shareName"
Write-Host "Attempting to mount: $sharePath (via group $group)"
net use * $sharePath /persistent:no
if ($LASTEXITCODE -eq 0) {
Write-Host "Share $shareName mounted successfully."
} else {
Write-Host "Failed to mount $shareName."
}
}
}
```
</details>
## Skills Acquired
- Cross-platform automation with Ansible
- Centralized IT fleet management
- AGDLP permissions architecture
- Management tool integration (GLPI)
- Using Ansible Vault for secrets

---
sidebar_position: 10
---
# P10 - Robust Backup Solution
## Context
Design and implementation of a complete backup solution for a city hall: Bash scripts with rsync supporting FULL, incremental and differential modes.
## Objectives
- Develop parameterizable backup scripts
- Implement the three backup modes (FULL/INC/DIFF)
- Set up backup rotation and retention
- Create restoration scripts
- Automate via cron
## Technologies Used
- **Bash**: scripting
- **Rsync**: file synchronization
- **SSH**: secure remote transfer
- **Cron**: task scheduling
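The cron automation can be sketched as a crontab: a weekly FULL on Sunday night and incrementals the other nights. Paths, folder list and a `sauvegarde_full.sh` counterpart are hypothetical here; the incremental script itself is shown in the deliverables, and takes a quoted folder list plus a retention in days.

```
# m  h  dom mon dow  command                                  (hypothetical paths)
0    1  *   *   0    /home/admin/backup/sauvegarde_full.sh "documents compta" 30
0    1  *   *   1-6  /home/admin/backup/sauvegarde_inc.sh  "documents compta" 30
```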
## Backup Types Comparison
### FULL Backup (Complete)
Complete copy of all data at each execution.
| Advantages | Disadvantages |
|------------|---------------|
| Simple and fast restoration (single set) | Consumes a lot of disk space |
| Independent of previous backups | Long execution time |
| Maximum reliability | High bandwidth if remote |
### Incremental Backup (INC)
Copies only files modified since the **last backup** (FULL or INC).
| Advantages | Disadvantages |
|------------|---------------|
| Very fast to execute | Complex restoration (FULL + all INCs) |
| Minimal disk space | Dependency on complete chain |
| Low bandwidth | If one INC is corrupted, following ones are unusable |
### Differential Backup (DIFF)
Copies only files modified since the **last FULL**.
| Advantages | Disadvantages |
|------------|---------------|
| Simple restoration (FULL + last DIFF) | Size grows over time |
| Faster than FULL | Slower than INC |
| Fewer dependencies than INC | Requires more space than INC |
### Comparison Table
| Criteria | FULL | INC | DIFF |
|----------|------|-----|------|
| Backup time | Long | Short | Medium |
| Space used | Large | Minimal | Growing |
| Restoration time | Short | Long | Medium |
| Restoration complexity | Low | High | Medium |
| Fault tolerance | Excellent | Low | Good |
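The restoration-complexity rows can be made concrete: given an ordered backup chain, the sets needed to restore the latest state differ by mode. An illustrative sketch (indices into the chain, newest last):

```python
def restore_chain(backups: list, mode: str) -> list:
    """Indices of the backup sets needed to restore the latest state.

    backups: ordered list such as ["FULL", "INC", "INC"] or ["FULL", "DIFF", "DIFF"].
    """
    last_full = max(i for i, b in enumerate(backups) if b == "FULL")
    if mode == "FULL":
        return [last_full]                           # the last FULL alone
    if mode == "INC":
        return list(range(last_full, len(backups)))  # FULL + every INC after it
    if mode == "DIFF":
        return [last_full, len(backups) - 1]         # FULL + the last DIFF only
    raise ValueError(mode)

print(restore_chain(["FULL", "INC", "INC", "INC"], "INC"))  # [0, 1, 2, 3]
print(restore_chain(["FULL", "DIFF", "DIFF"], "DIFF"))      # [0, 2]
```

This is why a corrupted INC invalidates everything after it, while a DIFF restore only ever depends on two sets.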
## Script Architecture
```
backup/
├── backup.sh # Main script
├── restore.sh # Restoration script
├── config/
│ └── backup.conf # Configuration
├── logs/
│ └── backup_YYYYMMDD.log
└── data/
├── FULL_20250801/
├── INC_20250802/
└── latest -> INC_20250802/
```
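The rotation and `latest` pointer from this layout can be reproduced locally; the sketch below applies the same `ls -1dt | tail -n +N` rule as the scripts (the dated directory names and mtimes are fabricated for the example):

```bash
#!/bin/bash
# Local sketch of the rotation rule: keep the RETENTION most recent dated
# directories, delete the rest, and repoint the 'latest' symlink.
set -euo pipefail

WORK=$(mktemp -d)
cd "$WORK"
RETENTION=2

mkdir 2025-08-01_FULL 2025-08-02_INC 2025-08-03_INC
touch -t 202508010000 2025-08-01_FULL
touch -t 202508020000 2025-08-02_INC
touch -t 202508030000 2025-08-03_INC

# Newest first; everything past the retention window is removed
ls -1dt 20* | tail -n +$((RETENTION + 1)) | xargs -r rm -rf
ln -sfn 2025-08-03_INC latest

KEPT=$(ls -1d 20* | sort | paste -sd' ')
echo "$KEPT"

cd / && rm -rf "$WORK"
```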
## Deliverables
### Presentation
<details>
<summary>Presentation Slides (PDF)</summary>
<iframe src="/assets/projets-oc/p10/Bene_Mael_1_support_presentation_082025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
### Backup Scripts
<details>
<summary>sauvegarde_inc.sh - Incremental Backup</summary>
```bash
#!/bin/bash
# Author: BENE Mael
# Version: 1.2
# Description: Incremental backup with rotation, latest link, and automatic FULL management via folder name
set -euo pipefail
# Check parameters
if [ "$#" -lt 2 ]; then
echo "Usage: $0 \"FOLDER1 FOLDER2 ...\" RETENTION_DAYS"
exit 1
fi
# Parameters
DOSSIERS="$1"
RETENTION_JOURS="$2"
# Configuration
SOURCE_DIR="$HOME/mairie"
DEST_USER="backup-user"
DEST_HOST="stockage"
DEST_BASE="/home/$DEST_USER/backup"
LOG_DIR="$HOME/backup-logs"
DATE="$(date '+%Y-%m-%d_%H-%M-%S')"
CUMULATIVE_LOG="$LOG_DIR/sauvegardes_inc.log"
mkdir -p "$LOG_DIR"
# Log header
{
echo "====================================================="
echo "[$(date '+%F %T')] > START INCREMENTAL BACKUP"
echo "Backed up folders: $DOSSIERS"
echo "Planned retention: $RETENTION_JOURS day(s)"
echo "Start timestamp: $DATE"
echo "====================================================="
} >> "$CUMULATIVE_LOG"
# SSH connection check
if ! ssh -q "$DEST_USER@$DEST_HOST" exit; then
echo "Error: unable to connect to $DEST_USER@$DEST_HOST"
exit 2
fi
for dossier in $DOSSIERS; do
echo "-----------------------------------------------------" >> "$CUMULATIVE_LOG"
echo "[$(date '+%F %T')] > Processing folder: $dossier" >> "$CUMULATIVE_LOG"
# Detect last FULL within retention period
LAST_FULL=$(ssh "$DEST_USER@$DEST_HOST" "find '$DEST_BASE/$dossier' -maxdepth 1 -type d -name '*_FULL' -mtime -$RETENTION_JOURS 2>/dev/null" | sort -r | head -n 1)
FORCE_FULL=0
TYPE_SUFFIX=""
if [ -z "$LAST_FULL" ]; then
FORCE_FULL=1
TYPE_SUFFIX="_FULL"
echo "[$(date '+%F %T')] > No recent FULL found -> BACKUP TYPE: FULL" >> "$CUMULATIVE_LOG"
else
TYPE_SUFFIX="_INC"
echo "[$(date '+%F %T')] > Backup TYPE: INCREMENTAL (base: $LAST_FULL)" >> "$CUMULATIVE_LOG"
fi
BACKUP_ID="${DATE}${TYPE_SUFFIX}"
DEST_PATH="$DEST_BASE/$dossier/$BACKUP_ID"
# Create destination folder
ssh "$DEST_USER@$DEST_HOST" "mkdir -p '$DEST_PATH'" >> "$CUMULATIVE_LOG" 2>&1
# rsync with or without link-dest
if [ "$FORCE_FULL" -eq 1 ]; then
rsync -av --delete -e ssh "$SOURCE_DIR/$dossier/" "$DEST_USER@$DEST_HOST:$DEST_PATH/" \
>> "$CUMULATIVE_LOG" 2>&1
else
rsync -av --delete --link-dest="$LAST_FULL" -e ssh "$SOURCE_DIR/$dossier/" "$DEST_USER@$DEST_HOST:$DEST_PATH/" \
>> "$CUMULATIVE_LOG" 2>&1
fi
echo "[$(date '+%F %T')] > End of backup for $dossier" >> "$CUMULATIVE_LOG"
# Update latest symbolic link
ssh "$DEST_USER@$DEST_HOST" bash -c "'
cd \"$DEST_BASE/$dossier\"
ln -sfn \"$BACKUP_ID\" latest
'" >> "$CUMULATIVE_LOG" 2>&1
# Rotation: keep $RETENTION_JOURS most recent (all types)
ssh "$DEST_USER@$DEST_HOST" bash -c "'
cd \"$DEST_BASE/$dossier\"
ls -1dt 20* | tail -n +$((RETENTION_JOURS + 1)) | xargs -r rm -rf
'" >> "$CUMULATIVE_LOG" 2>&1
done
echo "[$(date '+%F %T')] DAILY BACKUP COMPLETED" >> "$CUMULATIVE_LOG"
echo >> "$CUMULATIVE_LOG"
```
</details>
<details>
<summary>sauvegarde_dif.sh - Differential Backup</summary>
```bash
#!/bin/bash
# Author: BENE Mael
# Version: 1.1
# Description: Differential backup with execution time in logs
set -euo pipefail
# Configuration
DOSSIER="MACHINES"
SOURCE_DIR="$HOME/mairie/$DOSSIER"
DEST_USER="backup-user"
DEST_HOST="stockage"
DEST_PATH="/home/$DEST_USER/backup/$DOSSIER"
LOG_DIR="$HOME/backup-logs"
DATE="$(date '+%Y-%m-%d_%H-%M-%S')"
CUMULATIVE_LOG="$LOG_DIR/sauvegardes_dif.log"
mkdir -p "$LOG_DIR"
start=0
rsync_started=false
# Function executed even on crash or interruption
on_exit() {
if $rsync_started; then
local end=$(date +%s)
local duration=$((end - start))
echo "[$(date '+%F %T')] > Backup duration: ${duration} seconds" >> "$CUMULATIVE_LOG"
fi
}
trap on_exit EXIT
# Start log
{
echo "====================================================="
echo "[$(date '+%F %T')] > START DIFFERENTIAL BACKUP"
echo "Folder : $DOSSIER"
echo "Source : $SOURCE_DIR"
echo "Destination : $DEST_USER@$DEST_HOST:$DEST_PATH"
echo "Timestamp : $DATE"
echo "====================================================="
} >> "$CUMULATIVE_LOG"
# Prepare remote folder
echo "[$(date '+%F %T')] > Checking remote folder..." >> "$CUMULATIVE_LOG"
ssh "$DEST_USER@$DEST_HOST" "mkdir -p '$DEST_PATH'" >> "$CUMULATIVE_LOG" 2>&1
echo "[$(date '+%F %T')] > Remote folder ready." >> "$CUMULATIVE_LOG"
# Time measurement
start=$(date +%s)
rsync_started=true
# Launch rsync
echo "[$(date '+%F %T')] > Launching rsync..." >> "$CUMULATIVE_LOG"
rsync -av --inplace --partial --append -e ssh "$SOURCE_DIR/" "$DEST_USER@$DEST_HOST:$DEST_PATH/" \
>> "$CUMULATIVE_LOG" 2>&1
# If rsync finished normally, continue logging
echo "[$(date '+%F %T')] DIFFERENTIAL BACKUP COMPLETED" >> "$CUMULATIVE_LOG"
echo >> "$CUMULATIVE_LOG"
```
</details>
### Restoration Scripts
<details>
<summary>restore_inc.sh - Incremental Restoration</summary>
```bash
#!/bin/bash
# Author: BENE Mael
# Version: 1.1
# Description: Interactive restoration of a folder or individual file (improved version with logging)
set -euo pipefail
# Configuration
DEST_USER="backup-user"
DEST_HOST="stockage"
DEST_BASE="/home/$DEST_USER/backup"
BASE_RESTORE_DIR="/home/oclassroom/mairie"
LOG_FILE="/home/oclassroom/backup-logs/restores_inc.log"
# Log function
log_header() {
local type="$1" # "Complete folder" or "Specific file"
{
echo "====================================================="
echo "[$START_DATE] > START INCREMENTAL RESTORATION"
echo "Restored folder: $DOSSIER"
echo "Type: $type"
echo "Backup timestamp: $BACKUP_TIMESTAMP"
echo "====================================================="
} >> "$LOG_FILE"
}
# List available folders (excluding MACHINES)
DIR_LIST=$(ssh "$DEST_USER@$DEST_HOST" "ls -1 $DEST_BASE" | grep -v '^MACHINES$')
if [ -z "$DIR_LIST" ]; then
echo "No backup folder found."
exit 1
fi
echo "Folders available for restoration:"
DIR_ARRAY=()
i=1
while read -r line; do
echo " $i) $line"
DIR_ARRAY+=("$line")
((i++))
done <<< "$DIR_LIST"
read -rp "Folder number to restore: " DIR_NUM
DOSSIER="${DIR_ARRAY[$((DIR_NUM - 1))]}"
# List available backups
BACKUP_LIST=$(ssh "$DEST_USER@$DEST_HOST" "ls -1dt $DEST_BASE/$DOSSIER/20*_* 2>/dev/null")
if [ -z "$BACKUP_LIST" ]; then
echo "No backup found for $DOSSIER."
exit 1
fi
echo "Available backups for '$DOSSIER':"
BACKUP_ARRAY=()
i=1
while read -r line; do
SHORT=$(echo "$line" | sed "s|$DEST_BASE/||")
echo " $i) $SHORT"
BACKUP_ARRAY+=("$line")
((i++))
done <<< "$BACKUP_LIST"
read -rp "Backup number to restore (Enter = latest): " BACKUP_NUM
if [ -z "$BACKUP_NUM" ]; then
SELECTED_BACKUP=$(ssh "$DEST_USER@$DEST_HOST" "readlink -f '$DEST_BASE/$DOSSIER/latest'" || true)
if [ -z "$SELECTED_BACKUP" ]; then
echo "No 'latest' link found for this folder."
exit 1
fi
else
SELECTED_BACKUP="${BACKUP_ARRAY[$((BACKUP_NUM - 1))]}"
fi
echo "Selected backup: $(echo "$SELECTED_BACKUP" | sed "s|$DEST_BASE/||")"
# Timestamp for logs
START_DATE=$(date '+%Y-%m-%d %H:%M:%S')
BACKUP_TIMESTAMP=$(basename "$SELECTED_BACKUP")
# Choose between complete restoration or specific file
echo "What do you want to restore?"
select CHOIX in "Complete folder" "Specific file"; do
case $REPLY in
1)
RESTORE_PATH="$BASE_RESTORE_DIR/$DOSSIER"
echo "> Complete restoration to: $RESTORE_PATH"
mkdir -p "$RESTORE_PATH"
log_header "Complete folder"
rsync -av -e ssh "$DEST_USER@$DEST_HOST:$SELECTED_BACKUP/" "$RESTORE_PATH/" >> "$LOG_FILE" 2>&1
echo "Folder restored successfully."
break
;;
2)
echo "List of available files:"
FILE_LIST=$(ssh "$DEST_USER@$DEST_HOST" "cd '$SELECTED_BACKUP' && find . -type f" | sed 's|^\./||')
if [ -z "$FILE_LIST" ]; then
echo "No file found in backup."
exit 1
fi
FILE_ARRAY=()
i=1
while read -r file; do
echo " $i) $file"
FILE_ARRAY+=("$file")
((i++))
done <<< "$FILE_LIST"
read -rp "File number to restore: " FILE_NUM
FILE_TO_RESTORE="${FILE_ARRAY[$((FILE_NUM - 1))]}"
DEST_PATH="$BASE_RESTORE_DIR/$DOSSIER/$(dirname "$FILE_TO_RESTORE")"
mkdir -p "$DEST_PATH"
log_header "Specific file"
echo "> Restoring '$FILE_TO_RESTORE' to '$DEST_PATH'" >> "$LOG_FILE"
rsync -av -e ssh "$DEST_USER@$DEST_HOST:$SELECTED_BACKUP/$FILE_TO_RESTORE" "$DEST_PATH/" >> "$LOG_FILE" 2>&1
echo "File restored successfully."
break
;;
*)
echo "Invalid choice."
;;
esac
done
```
</details>
<details>
<summary>restore_dif.sh - Differential Restoration</summary>
```bash
#!/bin/bash
# Author: BENE Mael
# Version: 1.1
# Description: Manual differential backup restoration (VMs) with cumulative logging
set -euo pipefail
# Configuration
DOSSIER="MACHINES"
DEST_USER="backup-user"
DEST_HOST="stockage"
DEST_PATH="/home/$DEST_USER/backup/$DOSSIER"
RESTORE_DIR="$HOME/mairie/$DOSSIER"
LOG_FILE="$HOME/backup-logs/restores_dif.log"
mkdir -p "$HOME/backup-logs"
mkdir -p "$RESTORE_DIR"
START_DATE=$(date '+%Y-%m-%d %H:%M:%S')
{
echo "====================================================="
echo "[$START_DATE] > START DIFFERENTIAL RESTORATION"
echo "Restored folder: $DOSSIER"
echo "Local destination: $RESTORE_DIR"
echo "Remote source: $DEST_USER@$DEST_HOST:$DEST_PATH"
echo "====================================================="
} >> "$LOG_FILE"
# Restoration with rsync (differential)
rsync -av -e ssh "$DEST_USER@$DEST_HOST:$DEST_PATH/" "$RESTORE_DIR/" >> "$LOG_FILE" 2>&1
{
echo "[$(date '+%Y-%m-%d %H:%M:%S')] > END OF RESTORATION"
echo
} >> "$LOG_FILE"
```
</details>
### Cron Configuration
<details>
<summary>crontab - Backup Scheduling</summary>
```bash
# Differential backup of the VMs; timeout kills the job after 3h (i.e. by 4am)
0 1 * * * timeout 3h /home/oclassroom/backup_script/backup/differentielle.sh
# Daily backups with 7-day retention
0 4 * * * /home/oclassroom/backup_script/backup/incrementale.sh "FICHIERS" 7
0 5 * * * /home/oclassroom/backup_script/backup/incrementale.sh "MAILS" 7
0 6 * * * /home/oclassroom/backup_script/backup/incrementale.sh "RH" 7
30 6 * * * /home/oclassroom/backup_script/backup/incrementale.sh "TICKETS" 7
# SITE backup every 3 days at 7am, with 15-day retention
0 7 */3 * * /home/oclassroom/backup_script/backup/incrementale.sh "SITE" 15
```
</details>
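The `timeout 3h` wrapper in the first cron entry is what enforces the 4am cutoff: when the backup overruns, `timeout` sends SIGTERM and reports exit status 124, which a quick sketch confirms:

```bash
#!/bin/bash
# Sketch of the timeout behavior the crontab relies on: a command killed by
# timeout yields exit status 124 (here with a tiny duration for illustration).
rc=0
timeout 0.1 sleep 1 || rc=$?
echo "$rc"
```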
### Execution Logs
<details>
<summary>sauvegardes_inc.log - Incremental Backup Logs</summary>
```log
=====================================================
[2025-08-12 12:00:00] > START INCREMENTAL BACKUP
Backed up folders: FICHIERS
Planned retention: 7 day(s)
Start timestamp: 2025-08-12_12-00-00
=====================================================
-----------------------------------------------------
[2025-08-12 12:00:00] > Processing folder: FICHIERS
[2025-08-12 12:00:00] > No recent FULL found -> BACKUP TYPE: FULL
sending incremental file list
./
doc1.txt
doc2.txt
fichier_2025-08-12_1.txt
fichier_2025-08-12_2.txt
sent 449 bytes received 95 bytes 1.088,00 bytes/sec
total size is 94 speedup is 0,17
[2025-08-12 12:00:01] > End of backup for FICHIERS
[2025-08-12 12:00:01] DAILY BACKUP COMPLETED
=====================================================
[2025-08-13 12:00:00] > START INCREMENTAL BACKUP
Backed up folders: FICHIERS
Planned retention: 7 day(s)
Start timestamp: 2025-08-13_12-00-00
=====================================================
-----------------------------------------------------
[2025-08-13 12:00:00] > Processing folder: FICHIERS
[2025-08-13 12:00:00] > Backup TYPE: INCREMENTAL (base: /home/backup-user/backup/FICHIERS/2025-08-12_12-00-00_FULL)
sending incremental file list
./
fichier_2025-08-13_1.txt
fichier_2025-08-13_2.txt
sent 361 bytes received 57 bytes 836,00 bytes/sec
total size is 154 speedup is 0,37
[2025-08-13 12:00:01] > End of backup for FICHIERS
[2025-08-13 12:00:01] DAILY BACKUP COMPLETED
=====================================================
[2025-08-20 12:00:00] > START INCREMENTAL BACKUP
Backed up folders: FICHIERS
Planned retention: 7 day(s)
Start timestamp: 2025-08-20_12-00-00
=====================================================
-----------------------------------------------------
[2025-08-20 12:00:00] > Processing folder: FICHIERS
[2025-08-20 12:00:00] > No recent FULL found -> BACKUP TYPE: FULL
sending incremental file list
[...]
[2025-08-20 12:00:01] > End of backup for FICHIERS
[2025-08-20 12:00:01] DAILY BACKUP COMPLETED
```
</details>
<details>
<summary>sauvegardes_dif.log - Differential Backup Logs</summary>
```log
=====================================================
[2025-08-12 17:26:10] > START DIFFERENTIAL BACKUP
Folder : MACHINES
Source : /home/oclassroom/mairie/MACHINES
Destination : backup-user@stockage:/home/backup-user/backup/MACHINES
Timestamp : 2025-08-12_17-26-10
=====================================================
[2025-08-12 17:26:10] > Checking remote folder...
[2025-08-12 17:26:10] > Remote folder ready.
[2025-08-12 17:26:10] > Launching rsync...
sending incremental file list
./
fichier_gros.test
rsync error: unexplained error (code 255) at rsync.c(716) [sender=3.2.7]
[2025-08-12 17:26:35] > Backup duration: 25 seconds
=====================================================
[2025-08-12 17:26:42] > START DIFFERENTIAL BACKUP
Folder : MACHINES
Source : /home/oclassroom/mairie/MACHINES
Destination : backup-user@stockage:/home/backup-user/backup/MACHINES
Timestamp : 2025-08-12_17-26-42
=====================================================
[2025-08-12 17:26:42] > Checking remote folder...
[2025-08-12 17:26:42] > Remote folder ready.
[2025-08-12 17:26:42] > Launching rsync...
sending incremental file list
./
fichier_gros.test
sent 668.597.769 bytes received 38 bytes 148.577.290,44 bytes/sec
total size is 5.368.709.120 speedup is 8,03
[2025-08-12 17:26:46] DIFFERENTIAL BACKUP COMPLETED
[2025-08-12 17:26:46] > Backup duration: 4 seconds
```
</details>
<details>
<summary>restores_inc.log - Incremental Restoration Logs</summary>
```log
=====================================================
[2025-08-12 17:23:56] > START INCREMENTAL RESTORATION
Restored folder: FICHIERS
Type: Specific file
Backup timestamp: 2025-08-25_12-00-00_INC
=====================================================
> Restoring 'doc1.txt' to '/home/oclassroom/mairie/FICHIERS/.'
receiving incremental file list
doc1.txt
sent 43 bytes received 139 bytes 121,33 bytes/sec
total size is 18 speedup is 0,10
=====================================================
[2025-08-12 17:24:13] > START INCREMENTAL RESTORATION
Restored folder: FICHIERS
Type: Complete folder
Backup timestamp: 2025-08-25_12-00-00_INC
=====================================================
receiving incremental file list
./
doc2.txt
fichier_2025-08-12_1.txt
[...]
fichier_2025-08-25_2.txt
sent 578 bytes received 2.750 bytes 6.656,00 bytes/sec
total size is 862 speedup is 0,26
```
</details>
<details>
<summary>restores_dif.log - Differential Restoration Logs</summary>
```log
=====================================================
[2025-08-12 17:29:42] > START DIFFERENTIAL RESTORATION
Restored folder: MACHINES
Local destination: /home/oclassroom/mairie/MACHINES
Remote source: backup-user@stockage:/home/backup-user/backup/MACHINES
=====================================================
receiving incremental file list
./
fichier_1Go.bin
fichier_gros.test
sent 65 bytes received 6.444.024.019 bytes 186.783.306,78 bytes/sec
total size is 6.442.450.944 speedup is 1,00
[2025-08-12 17:30:16] > END OF RESTORATION
```
</details>
## Skills Acquired
- Advanced Bash script development
- Mastery of rsync and its options
- Backup strategy design (3-2-1)
- Retention and rotation management
- Automation with cron
- Restoration procedure documentation
---
sidebar_position: 11
---
# P11 - ANSSI Compliance for Healthcare IS
## Context
Application of ANSSI (French National Cybersecurity Agency) recommendations for securing OpenPharma's information system: IS mapping, secure administration, and a budget for the planned evolution.
## Objectives
- Analyze and synthesize applicable ANSSI guidelines
- Produce the existing IS mapping
- Propose a compliant target architecture
- Establish a hardware and software budget
- Plan the compliance project
## Applied ANSSI Guidelines
- **Information System Mapping** (v1b, 2018)
- **Secure IS Administration** (v3.0)
## Proposed Technologies and Solutions
| Need | Solution | Justification |
|------|----------|---------------|
| Administration bastion | Teleport | Open source, built-in audit |
| SIEM | Wazuh | Detection, compliance, free |
| Firewall | FortiGate 60F | UTM, manufacturer support |
| Backup | Synology RS822+ | Rack NAS, snapshots, replication |
## Deliverables
<details>
<summary>View deliverables</summary>
- [IS Mapping](/assets/projets-oc/p11/BENE_Mael_1_cartographie_092025.pdf)
- [Project Plan](/assets/projets-oc/p11/BENE_Mael_2_plan_projet_092025.pdf)
- [User and Administrator Documentation](/assets/projets-oc/p11/BENE_Mael_3_documentation_092025.pdf)
</details>
## Skills Acquired
- ANSSI framework analysis and application
- Information system mapping
- Secure architecture design
- IT budget development
- Compliance project management
- Sector-specific constraints consideration (healthcare)
---
sidebar_position: 12
---
# P12 - Active Directory Security Audit
## Context
Offensive security audit of a clinic's Windows domain and Active Directory: penetration testing, vulnerability identification and remediation plan.
## Objectives
- Perform a complete AD security audit
- Identify exploitable vulnerabilities
- Demonstrate risks through proof of concepts
- Propose a corrective action plan aligned with ANSSI/NIST
## Methodology
1. **Reconnaissance**: domain enumeration
2. **Exploitation**: controlled penetration tests
3. **Post-exploitation**: privilege escalation
4. **Report**: vulnerabilities and remediations
## Tools Used
| Tool | Usage |
|------|-------|
| **nmap** | Network and service scanning |
| **enum4linux** | SMB/AD enumeration |
| **Kerberoasting** | Kerberos ticket extraction |
| **Mimikatz** | Credential extraction |
| **BloodHound** | AD attack path analysis |
## Identified Vulnerabilities (Examples)
| Vulnerability | Criticality | Risk |
|---------------|-------------|------|
| Accounts with SPN and weak password | Critical | Kerberoasting -> privileged access |
| NTLM enabled | High | Pass-the-Hash |
| Unconstrained delegation | High | Identity impersonation |
| Cleartext passwords (GPP) | Critical | Immediate compromise |
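Why "SPN + weak password" is rated critical: Kerberoasting yields a service ticket encrypted with a key derived from the account's password, which can then be brute-forced offline with no lockout or logging. The toy sketch below stands in for that offline loop with SHA-256; real attacks crack RC4/AES ticket material with tools like hashcat, and the password and wordlist here are invented:

```bash
#!/bin/bash
# Toy stand-in for the offline cracking phase of Kerberoasting: once the
# ticket is captured, guesses are tested locally at full speed.
# SHA-256 replaces the real Kerberos key derivation; all values are invented.
set -euo pipefail

TARGET=$(printf '%s' 'Winter2024!' | sha256sum | cut -d' ' -f1)

FOUND=""
for guess in password admin123 'Winter2024!'; do
  if [ "$(printf '%s' "$guess" | sha256sum | cut -d' ' -f1)" = "$TARGET" ]; then
    FOUND="$guess"
    break
  fi
done
echo "$FOUND"
```

This is also why the remediation plan favors long random passwords (or gMSA accounts) for anything carrying an SPN: offline guessing only succeeds when the password is weak.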
## Deliverables
<details>
<summary>Pentest Report (PDF)</summary>
Detailed document of penetration tests performed and identified vulnerabilities.
<iframe src="/assets/projets-oc/p12/BENE_Mael_1_rapport_pentest_102025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Corrective Action Plan (PDF)</summary>
Remediation plan with action prioritization according to criticality level.
<iframe src="/assets/projets-oc/p12/BENE_Mael_2_plan_action_102025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
<details>
<summary>Presentation (PDF)</summary>
Presentation slides for stakeholder reporting.
<iframe src="/assets/projets-oc/p12/BENE_Mael_3_restitution_102025.pdf" width="100%" height="600px" style={{border: 'none'}}></iframe>
</details>
## Skills Acquired
- Security audit methodology
- Pentesting tools usage
- Active Directory vulnerability analysis
- Audit report writing
- Remediation plan development
- Results presentation to stakeholders
---
sidebar_position: 13
---
# P13 - Cloud Migration to AWS
## Context
Supporting the company Patronus in its migration to AWS: technical architecture document, technology watch, scheduling and cost estimation.
## Objectives
- Conduct technology watch on Cloud services
- Produce a Technical Architecture Document (TAD)
- Compare on-premise, IaaS and PaaS models
- Establish a migration schedule (Gantt)
- Estimate human and financial costs
## Evaluated AWS Services
| Service | On-prem Equivalent | Usage |
|---------|-------------------|-------|
| **EC2** | Physical servers | Compute |
| **RDS** | MySQL/PostgreSQL | Managed database |
| **S3** | NAS/SAN | Object storage |
| **CloudFront** | CDN | Content distribution |
| **VPC** | Local network | Network isolation |
| **IAM** | Active Directory | Access management |
## Model Comparison
| Criteria | On-premise | IaaS (EC2) | PaaS (Elastic Beanstalk) |
|----------|------------|------------|--------------------------|
| Control | Total | High | Limited |
| Maintenance | Internal | Shared | AWS |
| Scalability | Limited | Good | Excellent |
| Initial cost | High | Low | Low |
| Recurring cost | Low | Variable | Variable |
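The initial-versus-recurring-cost rows translate into a simple break-even calculation; the figures below are invented purely to illustrate the trade-off and are not taken from the Patronus estimate:

```bash
#!/bin/bash
# Hypothetical 3-year total cost of ownership: on-premise pays hardware up
# front with lower monthly costs, cloud pays nothing up front but more per
# month. Every number here is invented for the illustration.
YEARS=3
ONPREM_TCO=$(( 20000 + 500 * 12 * YEARS ))   # capex + monthly operations
CLOUD_TCO=$((      0 + 900 * 12 * YEARS ))   # no capex, higher monthly bill
echo "$ONPREM_TCO $CLOUD_TCO"
```

With these made-up numbers the cloud is cheaper over three years, but the comparison flips as the horizon lengthens or the monthly bill grows, which is exactly what the schedule and cost deliverables had to quantify.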
## Deliverables
<details>
<summary>View deliverables</summary>
- [Technology Watch](/assets/projets-oc/p13/bene_mael__1_resultat-veille_112025.pdf)
- [Migration Plan](/assets/projets-oc/p13/bene_mael_2_migration_Patronus_112025.pdf)
- [Presentation](/assets/projets-oc/p13/bene_mael_3_diaporama_112025.pdf)
</details>
## Skills Acquired
- Structured technology watch
- Understanding of Cloud models (IaaS/PaaS/SaaS)
- Technical architecture document writing
- Project cost and effort estimation
- Migration planning (Gantt)
- Stakeholder communication (kickoff)