Inside Hybrid Intelligence
Explore the architecture, reasoning, and real-world implementation of Hybrid Intelligence.
We unpack the theory, models, and methods behind the technology, offering a rigorous look at how it works and how transparency, auditability, and human-aligned reasoning are engineered into every decision.
The Framework
Introduction to the core principles of Hybrid Intelligence and Neuro-Symbolic AI
What is Neurosymbolic AI?
An overview of neuro-symbolic AI and how it combines machine learning with symbolic reasoning and hypergraphs for interpretable models. Read >>
Hybrid Intelligence Benefits
A summary of the key advantages Hybrid Intelligence delivers, including transparency, auditability, and data efficiency. Read >>
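To make the combination described above concrete, here is a minimal sketch of the general neuro-symbolic pattern: a learned scoring function supplies numeric evidence, while explicit symbolic rules make the final decision and leave a readable trace. Everything in it (the function, the rules, the thresholds) is a hypothetical illustration, not part of the Hybrid Intelligence framework itself.

```python
# Hypothetical neuro-symbolic pattern, for illustration only.
# A learned component supplies a score; explicit symbolic rules make the decision
# and record a human-readable trace, so every output is inspectable.

def learned_risk_score(income: float, debt: float) -> float:
    """Stand-in for a trained ML model that maps raw features to a risk score."""
    return min(1.0, max(0.0, 0.6 * debt / max(income, 1.0)))

# Symbolic layer: each rule is (name, condition, outcome) and is readable on its own.
RULES = [
    ("high_risk",  lambda score, income: score > 0.7,     "reject"),
    ("thin_file",  lambda score, income: income < 10_000, "refer_to_human"),
    ("acceptable", lambda score, income: True,            "approve"),
]

def decide(income: float, debt: float) -> tuple[str, list[str]]:
    score = learned_risk_score(income, debt)
    trace = [f"learned risk score = {score:.2f}"]
    for name, condition, outcome in RULES:
        if condition(score, income):
            trace.append(f"rule '{name}' fired -> {outcome}")
            return outcome, trace  # the trace doubles as the explanation
    return "refer_to_human", trace + ["no rule fired"]

print(decide(income=40_000, debt=35_000))
```

The point of the pattern is that the decision path stays inspectable: each output traces back to a named rule and a numeric score rather than to an opaque network alone.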
The Architecture
The architecture and main components of Hybrid Intelligence
Explainable Neural Networks (XNNs)
Explanation Structure Models (ESMs)
What is an Explanation Structure Model?
An in-depth look at Explanation Structure Models (ESMs) and how they organize, package, and deliver structured model explanations. Read >>
What is an Explainable Neural Network?
An introduction to Explainable Neural Networks (XNNs), how they work and how they generate transparent, rule-based predictions. Read >>
XNN Fundamentals
A detailed look at the core design principles, modular structure and reasoning capabilities of Explainable Neural Networks (XNNs). Read >>
XNN Architecture
An in-depth breakdown of the internal architecture of XNNs, including modules, partitions, rules and symbolic graphs. Read >>
XNN Modules
A focused look at how XNN modules are constructed and how they encode symbolic reasoning within the model. Read >>
XNNs: Human Knowledge Injection
An exploration of how human rules and domain knowledge can be injected into XNNs to guide model behaviour and improve generalisation. Read >>
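As a rough illustration of the partitions-and-rules idea that runs through the XNN articles above, the sketch below produces a prediction and its explanation together: the input space is split into partitions, each partition owns a simple linear rule, and the fired rule plus its term-by-term contributions form the explanation. The partition names, features, and coefficients are assumptions made up for this example and do not reflect the actual XNN implementation.

```python
# Hypothetical "partitions + local linear rules" sketch, not the actual XNN design.
# Each partition covers a region of the input space and owns a linear rule;
# the prediction and its explanation come from the same structure.

PARTITIONS = [
    # (name, condition on the input, {feature: coefficient}, intercept)
    ("young_applicants", lambda x: x["age"] < 30,  {"income": 0.4, "debt": -0.8}, 0.2),
    ("older_applicants", lambda x: x["age"] >= 30, {"income": 0.6, "debt": -0.5}, 0.1),
]

def predict_with_explanation(x: dict) -> dict:
    for name, condition, coefficients, intercept in PARTITIONS:
        if condition(x):
            contributions = {f: c * x[f] for f, c in coefficients.items()}
            return {
                "partition": name,                                    # which local rule fired
                "prediction": intercept + sum(contributions.values()),
                "contributions": contributions,                       # per-feature breakdown
                "intercept": intercept,
            }
    raise ValueError("no partition matched the input")

print(predict_with_explanation({"age": 27, "income": 3.2, "debt": 1.1}))
```

Because the rule that fired and its coefficients are part of the output, the prediction can be read, checked, and audited without a separate post-hoc explanation step.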
The Training Method
How Hybrid Intelligence models are built, trained, and maintained using symbolic induction, efficient retraining, and interpretable data compression.
Hybrid Intelligence Induction
An explanation of the Hybrid Intelligence induction process and how it generates symbolic, interpretable models from data and knowledge. Read >>
Training and Retraining
A guide to how Hybrid Intelligence models are trained, monitored, and retrained to stay aligned with evolving data and business needs. Read >>
Information Bottlenecks
A deep dive into how XNNs use explainable information bottlenecks to compress data without losing interpretability. Read >>
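For orientation, the Information Bottlenecks article builds on the classical information-bottleneck objective, shown below in its standard form; how XNNs make the bottleneck explainable is covered in the article, not by this formula.

```latex
% Classical information-bottleneck objective (standard formulation, shown for reference).
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

Here T is the compressed representation of the input X, Y is the prediction target, I denotes mutual information, and beta trades the degree of compression against the predictive information that is retained.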
The Model Outputs
How XNNs produce and present their outputs, including predictions, detailed explanations, and audit mechanisms.
Characteristics of Good Explanations
An outline of the key qualities that make model explanations clear, trustworthy, and useful for different stakeholders. Read >>
Different Types of Explanations
A classification of explanation types used in Hybrid Intelligence, from how and why to what-if and contrastive reasoning. Read >>
XNN Predictions
An overview of how XNNs generate predictions across different problem types, with built-in explanations for each output. Read >>
XNN Attributions
An explanation of how XNNs compute attributions to show which inputs influenced each prediction and by how much. Read >>
Interpreting XNN Results
A guide to interpreting XNN outputs through feature attributions, rule activations, and layered explanation views for deeper model insight. Read >>
XNN Querying and Explanations
An overview of how XNNs handle queries and return structured, traceable explanations alongside predictions. Read >>
An explanation of how XNNs support auditability using explanations, unique verification codes and traceable model outputs. Read >>
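To show how attributions and auditability can meet in a single output, here is a hypothetical sketch of an auditable prediction record: per-feature attributions that sum (with a baseline) to the prediction, plus a verification code computed over the whole record. The record format and the use of a SHA-256 hash are assumptions for illustration, not the actual XNN output schema.

```python
# Hypothetical auditable output record, for illustration only (not the XNN schema).
# Each prediction ships with its attributions and a verification code computed
# over the whole record, so the output can later be checked for tampering.

import hashlib
import json

def audit_record(inputs: dict, attributions: dict, baseline: float) -> dict:
    prediction = baseline + sum(attributions.values())  # attributions account for the output
    record = {
        "inputs": inputs,
        "baseline": baseline,
        "attributions": attributions,  # which inputs influenced the prediction, and by how much
        "prediction": prediction,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["verification_code"] = hashlib.sha256(payload).hexdigest()
    return record

record = audit_record(
    inputs={"income": 3.2, "debt": 1.1},
    attributions={"income": 1.28, "debt": -0.88},
    baseline=0.2,
)
print(record["prediction"], record["verification_code"][:12])
```

Recomputing the hash from the stored record and comparing it with the verification code detects any later change to the inputs, attributions, or prediction, which is the essence of a traceable, auditable output.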