Projects
Three missions; each includes focused Operations with telemetry.
Mission I — Airbus
Data Engineer • Jan 2023 – Jun 2024 • Toulouse
Minimise deployment time for analytics and move high-throughput data reliably for the OPTIMATE autonomous-taxiing programme.
A1 — OPTIMATE Streams
Feed autonomous taxiing with reliable, low-latency streams under high load.
- Implemented C++ stream components sustaining ~2 GB/s under test.
- Designed predictable backpressure with bounded queues (sketch below).
- Added scriptable replay paths; wrote integration notes for downstream teams.

Before: Ad-hoc ingestion with uncertain behaviour under load
After: A single, documented ingestion path with known limits
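A minimal Python sketch of the bounded-queue backpressure idea behind A1. The production components were C++; the buffer size, frame shape, and helper names below are illustrative assumptions only.

```python
# Bounded-queue backpressure sketch (Python stand-in for the C++ components).
import queue
import threading

BUFFER = queue.Queue(maxsize=1024)  # bounded: a full queue pushes back on the producer

def producer(frames):
    for frame in frames:
        # put() blocks while the queue is full, so ingest slows down instead of
        # growing memory without bound -- the "predictable backpressure" behaviour.
        BUFFER.put(frame)
    BUFFER.put(None)  # sentinel: no more frames

def consumer(sink):
    while True:
        frame = BUFFER.get()
        if frame is None:
            break
        sink(frame)

if __name__ == "__main__":
    frames = (f"frame-{i}".encode() for i in range(10_000))
    received = []
    worker = threading.Thread(target=consumer, args=(received.append,))
    worker.start()
    producer(frames)
    worker.join()
    print(f"consumed {len(received)} frames")
```

The key property is that the producer stalls when the buffer is full, so overload shows up as bounded latency rather than unbounded memory use.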
A2 — Dashboard Deployment Acceleration
Reduce time-to-deploy dashboards by eliminating manual ETL steps.
- Built Python ETL automation and standardised data contracts (sketch below).
- Packaged deployment scripts for consistent releases across teams.

Before: One-off ("snowflake") releases with regional variation
After: Reproducible, scripted deployments
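An illustrative sketch of the data-contract gate idea from A2; the contract fields and the check_contract() helper are hypothetical, not the actual Airbus contracts.

```python
# Hypothetical data-contract gate run before loading dashboard tables.
from datetime import date

CONTRACT = {  # agreed schema: column name -> expected Python type
    "site": str,
    "kpi": str,
    "value": float,
    "as_of": date,
}

def check_contract(rows, contract=CONTRACT):
    """Fail fast if extracted rows drift from the agreed schema."""
    errors = []
    for i, row in enumerate(rows):
        missing = contract.keys() - row.keys()
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
            continue
        for col, expected in contract.items():
            if not isinstance(row[col], expected):
                errors.append(f"row {i}: {col} should be {expected.__name__}")
    if errors:
        raise ValueError("contract violations:\n" + "\n".join(errors))
    return rows

rows = [{"site": "Toulouse", "kpi": "on_time_delivery", "value": 0.97, "as_of": date(2024, 5, 1)}]
check_contract(rows)  # raises ValueError on any schema drift
```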
A3 — HR Data Unification
Improve HR data accuracy across 5 regions for downstream dashboards.
- Integrated datasets, reconciled schemas, and added idempotent transforms (sketch below).
- Surfaced diffs for quick fixes; documented schema versions and ownership.

Before: Fragmented sources and inconsistent KPIs
After: Unified feeds with clearer ownership
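A small sketch of what "idempotent transform" means here: regional extracts are normalised and upserted by a stable key, so re-running a load changes nothing. The field names and FIELD_MAP are invented for illustration.

```python
# Idempotent merge of regional HR extracts keyed by a stable identifier.
FIELD_MAP = {  # per-region column names -> unified schema (invented examples)
    "fr": {"matricule": "employee_id", "site": "region"},
    "de": {"personalnummer": "employee_id", "standort": "region"},
}

def normalise(row, region):
    mapping = FIELD_MAP[region]
    return {mapping.get(k, k): v for k, v in row.items()}

def merge(target, regional_rows, region):
    """Upsert rows into `target` keyed by employee_id; safe to re-run."""
    for row in regional_rows:
        unified = normalise(row, region)
        target[unified["employee_id"]] = unified  # overwrite, never append
    return target

unified = {}
fr_rows = [{"matricule": "E42", "site": "Toulouse"}]
merge(unified, fr_rows, "fr")
merge(unified, fr_rows, "fr")  # re-running the same load changes nothing
assert len(unified) == 1
```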
Mission II — Green Praxis
Cloud & Data Engineer • Jan 2025 – Present • Aix-en-Provence
Build reliable data infrastructure and internal tooling for environmental & geospatial products.
G1 — Geo Pipelines & Dynamic Tiles
Automate multi-band satellite imagery ingest and serve dynamic, reliable map tiles.
- Designed Airflow DAGs for ingest and transform; versioned the data contracts (sketch below).
- Shifted serving to a dynamic Google Earth Engine backend; added guardrail checks.

Before: Manual or semi-manual tiling with drift between runs
After: Airflow-managed processing with dynamic serving
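A minimal sketch of the ingest-then-transform DAG shape, assuming a recent Airflow 2.x with the TaskFlow API; the DAG id, schedule, and storage paths are placeholders rather than the production pipeline.

```python
# Ingest -> transform DAG sketch using the Airflow 2.x TaskFlow API.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["geo"])
def imagery_ingest():

    @task
    def ingest() -> list[str]:
        # Pull the day's multi-band scenes and return their storage paths.
        return ["s3://raw/scene-001.tif"]

    @task
    def transform(paths: list[str]) -> list[str]:
        # Reproject/resample and register outputs against the versioned data contract.
        return [p.replace("raw", "processed") for p in paths]

    transform(ingest())

imagery_ingest()
```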
G2 — Observability Rollout
Provide end-to-end visibility across DAGs, APIs, and the Kubernetes cluster.
- Deployed Prometheus and Grafana; standardised labels and owners; pruned noisy alerts.
- Integrated Airflow task metrics and API latency/error budgets into a single view (sketch below).

Before: Fragmented monitoring that missed key failure modes
After: Unified dashboards and actionable alerts
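A sketch of how an API process can expose the latency and error counters that such a view aggregates, using the standard prometheus_client library; the endpoint name, port, and simulated failure are invented for illustration.

```python
# Exposing API latency and error metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram("api_request_seconds", "Request latency", ["endpoint"])
REQUEST_ERRORS = Counter("api_request_errors_total", "Failed requests", ["endpoint"])

def handle_tiles_request():
    with REQUEST_LATENCY.labels(endpoint="/tiles").time():
        try:
            time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
            if random.random() < 0.05:
                raise RuntimeError("upstream timeout")
        except RuntimeError:
            REQUEST_ERRORS.labels(endpoint="/tiles").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_tiles_request()
```

Consistent label names (endpoint, owner, team) across services are what make the single Grafana view possible.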
G3 — Internal Tooling & Platform Hygiene
Reduce developer toil; make pipelines easier to evolve.
- Templated common DAG patterns; wrote small CLI helpers for routine operations (sketch below).
- Tightened Terraform modules for repeatable infrastructure.

Before: Ad-hoc scripts
After: Reusable templates and helpers
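One way to read "templated DAG patterns": a factory function that stamps out the same DAG shape per data source. A hedged sketch, again assuming Airflow 2.x; the source names and callables are placeholders.

```python
# DAG factory: one function stamps out the same ingest DAG per data source.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def make_ingest_dag(source: str) -> DAG:
    with DAG(
        dag_id=f"ingest_{source}",
        schedule="@daily",
        start_date=datetime(2025, 1, 1),
        catchup=False,
    ) as dag:
        pull = PythonOperator(task_id="pull", python_callable=lambda: print(f"pulling {source}"))
        load = PythonOperator(task_id="load", python_callable=lambda: print(f"loading {source}"))
        pull >> load
    return dag

# One line per new source instead of a copy-pasted DAG file.
for src in ("sentinel", "weather", "soil"):
    globals()[f"ingest_{src}"] = make_ingest_dag(src)
```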
Mission III — Murex Systems
Project Manager • May 2021 – Jan 2022 • Beirut
Deliver a log-analysis demonstrator for a high-frequency trading platform, shipping an MVP on time with a small team.
M1 — Log-Analysis PoC (HFT)
Detect unseen error patterns quickly to accelerate root-cause analysis.
- Led a 3-developer Scrum team; built a Python pipeline to parse and cluster log anomalies (sketch below).
- Iterated with users to keep the output actionable.

Before: Long hunts through raw logs
After: Ranked signals with clear context
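A toy sketch of the parse-cluster-rank idea: volatile tokens are masked so similar lines collapse into templates, and the rarest templates surface first. The masking rules and sample logs are invented, not the Murex pipeline.

```python
# Parse -> cluster -> rank: mask volatile tokens, group lines into templates,
# surface the rarest templates first.
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse numbers and hex IDs so similar lines group together."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

def rank_anomalies(lines, top=5):
    counts = Counter(template(ln) for ln in lines)
    return sorted(counts.items(), key=lambda kv: kv[1])[:top]  # rarest first

logs = [
    "order 123 executed in 42us",
    "order 456 executed in 39us",
    "order 789 executed in 41us",
    "risk check failed for order 456: limit 0x1f exceeded",
]
for tpl, n in rank_anomalies(logs):
    print(f"{n:>3}x  {tpl}")
```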
M2 — MVP Delivery in K8s
Package and deploy the MVP in a modern, reproducible environment.
- Containerised services with Docker; orchestrated them on Kubernetes; set up CI on Jenkins (see the smoke-check sketch below).
- Wrote a pragmatic runbook.

Before: One-off ("snowflake") environments
After: Standardised containers and CI
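As an example of the kind of check a Jenkins stage could run after deploying to the cluster, here is a minimal post-deploy smoke test; the service URL, path, and timeout are hypothetical.

```python
# Post-deploy smoke check a Jenkins stage could run against the K8s service.
import sys
import urllib.request

SERVICE_URL = "http://log-analysis.mvp.svc.cluster.local:8080/healthz"  # hypothetical

def smoke_check(url: str = SERVICE_URL, timeout: float = 5.0) -> int:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                print("smoke check passed")
                return 0
            print(f"unexpected status: {resp.status}")
    except OSError as exc:  # connection refused, DNS failure, timeout
        print(f"smoke check failed: {exc}")
    return 1

if __name__ == "__main__":
    sys.exit(smoke_check())
```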
