Hybrid Intelligence Platform

Use Hybrid Intelligence through a single, unified platform that exposes the full neuro-symbolic development workflow via robust APIs. The platform provides all the tools needed to train, optimise, inspect, deploy, and run inference with explainable, adaptive, intelligent systems.

Build predictive models that combine structured reasoning with adaptive learning in a cohesive environment. The result is systems that are accurate, transparent, auditable, and aligned with domain-specific logic.

Data Workflow

Connect Data Sources

The platform automatically handles analysis, normalisation, and encoding during onboarding and training.

Once validated, your data is ready to use for model training.

  • Connectors at Beta: AWS S3
    Coming Soon: Snowflake, BigQuery, Azure Blob Storage, REST APIs, and JDBC-compatible sources.

  • Formats at Beta: CSV, Parquet
    Coming Soon: ??

  • Support at Beta: Structured Tabular Data
    Coming Soon: Vector Embedding, Images, ??

  • Read More: Data support in the relevant Framework Docs; API Docs
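As a rough sketch of what registering an S3-hosted source could look like, the snippet below assembles a registration payload. The field names (`connector`, `location`, `format`, `data_kind`) and the helper itself are illustrative assumptions, not the platform's documented API schema.

```python
import json

def build_source_registration(bucket: str, key: str, fmt: str) -> str:
    """Assemble a hypothetical data-source registration payload.

    Field names are illustrative assumptions, not the platform's
    documented schema.
    """
    payload = {
        "connector": "aws_s3",            # only AWS S3 is available at Beta
        "location": f"s3://{bucket}/{key}",
        "format": fmt,                    # "csv" or "parquet" at Beta
        "data_kind": "structured_tabular" # structured tabular data at Beta
    }
    return json.dumps(payload)

# Example: a CSV of loan applications stored in S3.
print(build_source_registration("my-bucket", "loans/applications.csv", "csv"))
```

Once a payload like this is accepted, the platform takes over analysis, normalisation, and encoding, as described above.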

Data Annotation and Enhancement

Add structure and context to your data through targeted annotations.

Use tagging, groupings, constraints, and feature metadata to embed human knowledge directly into the modelling process.

Enhancements improve explainability, guide model structure, and enable reasoning aligned with domain logic.

This step supports both manual input and automated suggestions from the platform.

  • Friendly Names: Add human-readable labels at varying levels of verbosity. These labels are carried through to model logic, insights, and explanation layers.

  • Units and Dimensions: Define units (e.g. days, dollars) and dimensions (e.g. time, volume) to improve interpretability and support dimensional reasoning.

  • Protected Features: Flag attributes such as age, gender, ethnicity, or other legally or ethically sensitive variables. The platform uses this tagging to enable fairness checks, bias mitigation strategies, and compliance with regulatory frameworks (e.g. GDPR, EEOC, EU AI Act).

  • Feature Groups: Organise features into logical groupings to support structured modelling and explanation across related variables.

  • Feature Value Groups: Cluster specific values within a feature (e.g. product categories, risk tiers) to enable abstraction and simplify reasoning. API Guide >>

  • High Cardinality Treatments: Apply compression, grouping, or encoding strategies to manage features with large, sparse value sets.

  • Missing Value Treatments: Set rules or preferences for imputing, excluding, or flagging missing values during training and inference.

  • Value Types and Roles: Specify whether features represent identifiers, flags, continuous measures, temporal data, or reference variables. This helps the platform apply appropriate transformations and reasoning patterns.

  • Causal Tags: Annotate features with causal roles (e.g. predictor, mediator, outcome) to support causal inference and scenario simulation.

  • Data Provenance: Track the source and lineage of each feature to support traceability and compliance.

  • Policy Constraints: Define limits, thresholds, or guardrails that reflect business rules or regulatory requirements. These can be carried into model logic and validation.

  • Temporal Relevance: Mark features with time sensitivity or decay characteristics (e.g. last updated, expiry) to support behavioural reasoning and time-based partitioning.

  • Default Treatments: Set default behaviour for unseen values, rare categories, or ambiguous cases during inference.

  • Semantic Types / Ontology Links: Map features to standard taxonomies or domain ontologies to support semantic consistency and integration across models.
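To make the enhancement types above concrete, here is a minimal sketch of what a feature annotation record might hold. The class, attribute names, and sample features are assumptions chosen to mirror the list above; they are not a documented SDK type.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureAnnotation:
    """Hypothetical annotation record; attribute names mirror the
    enhancement types above and are illustrative only."""
    name: str
    friendly_name: str                     # Friendly Names
    unit: Optional[str] = None             # Units and Dimensions
    protected: bool = False                # Protected Features
    feature_group: Optional[str] = None    # Feature Groups
    missing_value_treatment: str = "flag"  # impute | exclude | flag
    value_role: str = "continuous"         # Value Types and Roles
    causal_role: Optional[str] = None      # Causal Tags

# Two illustrative features from a hypothetical lending dataset.
annotations = [
    FeatureAnnotation("appl_age", "Applicant age", unit="years",
                      protected=True, feature_group="demographics",
                      causal_role="predictor"),
    FeatureAnnotation("bal_30d", "Average balance, last 30 days",
                      unit="dollars", feature_group="account_activity",
                      missing_value_treatment="impute"),
]

# Protected-feature tags can then be surfaced for fairness checks.
protected = [a.name for a in annotations if a.protected]
print(protected)  # prints ['appl_age']
```

Keeping annotations as structured records like this is what lets them carry through to model logic, explanations, and compliance checks rather than living in a separate spreadsheet.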

Onboard Dataset - TBD


Training and Evaluation Workflow

TBD


Inference Workflow

Retraining and Optimisation Workflow