We develop autonomous, interpretable AI systems for space missions — from onboard inference on resource-constrained CubeSats to formal verification of mission-critical decision-making. Pre-deployment validation, not post-incident response.
Our methodology centers on interpretable AI validated before launch. We engineer systems that explain their decision pathways — not after something goes wrong, but before it ever could.
From ML model architecture through deployment pipelines — covering onboard inference, satellite health monitoring, and Earth observation analytics across hardware constraints and radiation-hardened computing.
Formal verification for AI in life-critical systems. Mars rover decision-making, satellite collision avoidance — mathematical proofs of behavior bounds, not just security hardening.
XAI in space isn't academic — when a satellite autonomously changes orbit, operators need to understand why. We're building the language for human-AI communication in space operations.
Our QML division explores quantum algorithms for remote sensing and optimization in orbital mechanics — at the intersection of theoretical computer science and practical space engineering.
Real hardware, real launches, real data. Debugging edge computing on satellites measuring 10 cm per side, optimizing power budgets for neural networks, handling intermittent ground-station contacts.
Reliability. Explainability. Accountability. We are not building AI that impresses — we are building AI that you can trust when the margin for error is zero.
We don't build AI to replace humans in space exploration. We build AI to help them operate in the most unforgiving environment humanity has ever entered — where there is no margin, no fallback, and no second chance.
A modular AI-driven framework integrating failure prediction, anomaly detection, and a decision support system for CubeSat platforms (1U–6U). Combines XGBoost, Time-Series Transformers, LSTM Autoencoders, and a PPO reinforcement learning agent with SHAP/LIME explainability throughout. Achieves 89.5–94.1% accuracy across platform classes. Aligned with UNOOSA and COSPAR guidelines.
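To illustrate the kind of decision-fusion step such a modular pipeline needs, here is a minimal sketch. It assumes two upstream detectors: an autoencoder-style model producing a reconstruction error (anomaly signal) and a classifier producing a failure probability. The function name, thresholds, and alert levels are illustrative, not the framework's actual interface.

```python
# Hypothetical decision-fusion step for a CubeSat health-monitoring
# pipeline. Upstream, an LSTM-autoencoder-style detector yields a
# reconstruction error and a failure classifier yields a probability;
# this stage fuses the two signals into a single alert level.
# All thresholds below are illustrative placeholders.

def fuse_alerts(reconstruction_error: float,
                failure_probability: float,
                error_threshold: float = 0.05,
                prob_threshold: float = 0.8) -> str:
    """Map two independent detector outputs to one alert level."""
    anomaly = reconstruction_error > error_threshold
    failure = failure_probability > prob_threshold
    if anomaly and failure:
        return "critical"   # both detectors agree: escalate to operators
    if anomaly or failure:
        return "warning"    # one detector fired: flag for review
    return "nominal"        # neither detector fired
```

In a real onboard system each branch would also carry its SHAP/LIME attribution alongside the alert, so operators see not only the level but which telemetry channels drove it.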
An ongoing initiative developing a digital twin framework for precision agriculture, leveraging satellite remote sensing and AI-based environmental modelling to support real-time crop monitoring, resource optimization, and climate-resilient farming decisions.
Ongoing research into fixed-wing UAV autonomous systems, focusing on onboard AI decision-making, flight control, and mission planning under resource constraints — extending our core expertise in edge AI inference to aerial platforms.
Each track operates independently with a dedicated lead, contributing to our shared research output. Tap any track to see its members.
We're a distributed team across 15+ countries. Whether your background is ML, aerospace, policy, or you just build things — there's likely a place here.
Choose how you want to engage:
Reviewed by the PG Lead. We'll follow up via Discord or email.
Project proposals are reviewed by the leadership board.
Reviewed by the Secretariat Lead. We'll follow up via Discord or email.
Collaboration inquiries go directly to the PG Lead and Co-Lead.
Your next step is below.