UMNAI’s approach to Neuro-symbolic AI

Neuro-symbolic AI is a class of Artificial Intelligence systems that combines two different approaches to AI: neural networks and symbolic reasoning. Such systems pair the learning capabilities of neural networks with the formalism and interpretability of symbolic reasoning.

UMNAI’s Hybrid Intelligence is based on a neuro-symbolic AI system, drawing on over two decades of practical AI deployment across various industries by the UMNAI team.

The power of combining two paradigms to create a third

Neural networks, including architectures such as UMNAI’s Explainable Neural Networks (XNNs), excel at identifying statistical regularities in high-dimensional data. Symbolic reasoning, by contrast, involves the manipulation of discrete representations, such as rules, concepts and logic structures, to perform deductive, causal or taxonomic reasoning.

Neuro-symbolic AI unites these two paradigms into a single architecture. The resulting systems offer both adaptability and structure; they can learn from data and simultaneously encode, manipulate and reason over symbolic knowledge.

A core advantage of such systems is their capacity for introspection. Users can inspect, edit and prioritise the information that the system has learnt automatically, and can interpret and even intervene in that learned knowledge, making the systems transparent, adaptable and trustworthy.

Popular analogies from psychology are sometimes invoked to characterise this division, most notably Daniel Kahneman’s System 1 and System 2, popularised in his 2011 book “Thinking, Fast and Slow”: fast, associative processing corresponds to the neural side (System 1), while slow, deliberative, explicit step-by-step reasoning corresponds to the symbolic side (System 2). Such analogies are not strictly necessary to appreciate the functional distinction in AI systems: neural components are used for pattern recognition and approximation tasks, symbolic components for explicit reasoning and structured representation. Neural networks and related techniques such as Deep Learning handle System 1 tasks best, while symbolic reasoning handles System 2 tasks better.

Hybrid Intelligence: A Neuro-symbolic Framework

Hybrid Intelligence is the first practical neuro-symbolic framework developed for real-world, large-scale applications. It provides a robust, flexible and reliable system that can learn, reason, collaborate and incorporate human feedback and expertise.

Hybrid Intelligence employs three core components:

  • Explainable Neural Networks (XNNs)

XNNs are transparent neural architectures structured to facilitate interaction with symbolic systems. Unlike traditional neural networks, they are modular, decomposable and interpretable, enabling symbolic references to be assigned to specific partitions and components.

  • Explanation Structure Models (ESMs)

ESMs define the structure and logic of explanations, specifying how results should be interpreted, summarised and communicated to different stakeholders. They encode the procedural, logical and representational aspects of explanation, allowing differentiated outputs for distinct user groups (e.g. technical users, end-users, auditors). In other words, ESMs are models of how an explanation should be structured, including any step-by-step reasoning goals, aggregations, data analytics and summarisation that should be applied to the raw results.

  • Neuro-symbolic Hypergraph

A neuro-symbolic hypergraph is a structured knowledge representation that integrates learned neural patterns with symbolic relationships, allowing concepts, features and rules to be expressed as interconnected nodes and hyperedges. These nodes and hyperedges support logical inference, abstraction and multi-modal reasoning within a unified framework.

Together, these components provide the infrastructure that allows Hybrid Intelligence systems to operate across the symbolic-statistical boundary, synthesising learned representations with human-derived rules, taxonomies and causal structures.

UMNAI’s Hybrid Intelligence integrates two primary knowledge sources:

  1. Knowledge learnt from data, using statistical techniques such as Deep Learning neural networks and Information Theory-based data compression approaches. This knowledge is represented within the XNN infrastructure.

  2. Symbolic knowledge, automatically learnt and/or human-derived from cause-and-effect information, causal diagrams, logical rules, source code, procedures, workflows, domain ontologies and other symbolic information. These elements are encoded symbolically and referenced within ESMs and the broader symbolic infrastructure.

Within Hybrid Intelligence models, symbols are treated as unique identifiers of real-world physical, digital or abstract objects. A symbol is a uniquely named reference to a concept or entity: it might refer to a physical object, a concept, a statistical partition, or a logical expression. The symbolic infrastructure ensures these references are persistent, identifiable and operable within logical, taxonomic or causal constructs.

At the neural network level, symbols are associated with a statistical pattern, or combination of patterns, that can be uniquely referenced. XNN partitions are examples of such unique statistical patterns, each automatically assigned its own symbolic reference.
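As a minimal sketch of this idea, the following Python fragment models symbols as persistent, uniquely named references; the Symbol and SymbolRegistry classes and the partition label are hypothetical illustrations, not UMNAI’s actual API:

from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    name: str       # unique, human-readable reference, e.g. "Age"
    referent: str   # what the symbol points at: a feature, a partition, a rule, ...

class SymbolRegistry:
    def __init__(self):
        self._symbols = {}

    def register(self, name, referent):
        if name in self._symbols:
            raise ValueError(f"Symbol '{name}' already exists")  # enforce uniqueness
        self._symbols[name] = Symbol(name, referent)
        return self._symbols[name]

    def lookup(self, name):
        return self._symbols[name]  # persistent, identifiable reference

registry = SymbolRegistry()
registry.register("Age", "input feature")
registry.register("Partition: Age >= 65", "XNN partition")  # a learned pattern gets its own symbol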

Symbolic structuring in ESMs and XNNs

Hybrid Intelligence models extend traditional machine learning pipelines by producing not only predictions but also structured explanations. These explanations are constructed using symbolic representations layered onto the neural outputs. Model outputs are augmented with explanation information and justification metadata. Justification metadata shows how the explanation itself was built up and what the inner workings of the model were, giving a precise, certain and complete understanding of how the model operates.
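As a hedged illustration of what such an augmented output might look like, the following sketch uses a made-up schema; the field names and values are illustrative assumptions, not UMNAI’s actual format:

# Illustrative structure of an augmented model output (hypothetical schema):
# the prediction travels together with its explanation and with justification
# metadata describing how that explanation was built from the model's inner workings.
output = {
    "prediction": 0.87,
    "explanation": {
        "Age": 0.30,                   # symbolic feature attributions
        "Income": 0.45,
        "Age by Income Level": 0.12,   # compound (interaction) symbol
    },
    "justification": {
        "partition": "Partition: Age >= 65",  # which XNN partition was used
        "rules_applied": ["IF Age >= 65 THEN use Partition coefficients"],
        "esm": "end-user-view",               # which ESM shaped this output
    },
}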

Feature-level symbol assignment

Each input feature is assigned a symbol. Inputs can be of any type that a neural network can process. For example:

Age → Symbol: Age 

Gender → Symbol: Gender 

Income → Symbol: Income

Image → Segmenter → Symbol: Right Lung
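A minimal sketch of this mapping in Python, assuming a simple dictionary-based assignment and a hypothetical segmenter function:

# Feature-level symbol assignment (illustrative sketch).
# Raw inputs are mapped to named symbols; derived inputs, such as a
# segmenter's output on an image, receive symbols of their own.
feature_symbols = {
    "age": "Age",
    "gender": "Gender",
    "income": "Income",
}

def segmenter(image):
    # Hypothetical stand-in for an image segmentation model that isolates
    # a region of interest and returns its symbolic label.
    return "Right Lung"

feature_symbols["lung_region"] = segmenter(image=None)  # -> "Right Lung"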

Feature interactions and compound symbols

When two or more features interact, they are represented within XNNs as interaction modules. An XNN module that represents a feature interaction has two or more symbols associated with it. The module itself can also be given a compound symbol, in this case representing the interaction of multiple symbols.

For example, if there is an XNN module representing the interaction between the features Age and Income Level, that module can be named symbolically as “Age by Income Level” and defined as Age x Income Level. More formally this would be written as:

Age by Income Level = Age x Income Level
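Illustratively, an interaction module carrying both its member symbols and its compound symbol could be modelled as follows; the InteractionModule class is a hypothetical sketch, not UMNAI’s implementation:

from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionModule:
    members: tuple   # the symbols taking part in the interaction
    compound: str    # the symbolic name of the interaction itself

age_by_income = InteractionModule(
    members=("Age", "Income Level"),
    compound="Age by Income Level",   # defined as Age x Income Level
)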

Categorical values

Categorical features and their values are explicitly symbolised. For example, a symbol is created for the “Gender” feature, along with an associated symbol for each of its unique categorical values, such as “Gender[Male]” and “Gender[Female]”.

This is typically denoted symbolically as:

Gender → [Male, Female]

Gender[Male], Gender[Female]

In this case, the “Gender” symbol represents a group (or set) of symbols. This type of relationship is known as a PART-WHOLE relationship, where the group is the WHOLE and the unique categorical values are the PARTS of the group (sometimes called “members” of that group).
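The PART-WHOLE relationship can be sketched in a few lines of Python; this is an illustrative fragment, not UMNAI’s implementation:

# PART-WHOLE relationship for categorical features (illustrative sketch).
# "Gender" is the WHOLE; its categorical values are the PARTS (members).
categorical = {
    "Gender": ["Gender[Male]", "Gender[Female]"],
}

def parts_of(whole):
    return categorical[whole]

def whole_of(part):
    return next(w for w, parts in categorical.items() if part in parts)

assert parts_of("Gender") == ["Gender[Male]", "Gender[Female]"]
assert whole_of("Gender[Male]") == "Gender"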

Sub-feature interactions

Interactions can be specified at the level of specific values:

Age x Gender[Male]

Age x Gender[Female]

Once symbolic references are assigned, the full apparatus of symbolic reasoning becomes available: rules, logic expressions, taxonomies, object-oriented structures and hierarchies.

For example, a symbolic expression for identifying gender bias might be:

Gender Bias = (Age x Gender[Male]) – (Age x Gender[Female])

Rules can then be automatically or manually defined (or tweaked) on top of these symbolic statements, for example:

IF Gender Bias > Threshold THEN Gender Bias Flag = TRUE

Such rules can be generated automatically, refined manually or updated iteratively.
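To make this concrete, the following sketch evaluates the bias expression and rule over made-up attribution values; the numbers and the threshold are illustrative assumptions, not real model output:

# Evaluating a symbolic bias rule over learned attributions (illustrative sketch).
attributions = {
    "Age x Gender[Male]": 0.42,    # value-level interaction strengths
    "Age x Gender[Female]": 0.31,
}

THRESHOLD = 0.05   # hypothetical tolerance chosen by a domain expert

gender_bias = attributions["Age x Gender[Male]"] - attributions["Age x Gender[Female]"]

# IF Gender Bias > Threshold THEN Gender Bias Flag = TRUE
gender_bias_flag = gender_bias > THRESHOLD
print(round(gender_bias, 2), gender_bias_flag)   # 0.11 True -> flagged for review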

Hypergraphs: A foundation for reasoning

The integration of symbolic and statistical knowledge is formalised within a hypergraph structure. Hypergraphs are a more general and more powerful version of the graphs commonly used in large-scale AI systems. Unlike conventional graph edges, which connect exactly two nodes, hyperedges can relate any number of nodes or sets (groups of nodes), enabling expressive representation of group-wise, compositional and hierarchical structures.
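The difference can be sketched in a few lines of Python; the node and relation names below are illustrative assumptions:

# Hyperedges vs ordinary edges (illustrative sketch).
# An ordinary graph edge relates exactly two nodes; a hyperedge can
# relate any number of nodes at once, here stored as frozensets.
edge = frozenset({"Age", "Income"})                      # binary relation

hyperedge = frozenset({"Age", "Gender[Male]", "Loan"})   # three-way relation

hypergraph = {
    "nodes": {"Age", "Income", "Gender[Male]", "Gender[Female]", "Loan"},
    "hyperedges": {
        "age-income": edge,
        "affects-risk": hyperedge,   # named group-wise relationship
    },
}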

In Hybrid Intelligence, the hypergraph serves as the core knowledge representation layer. It supports:

  • Symbolic embedding of learned neural patterns

  • Encoding of group relationships and taxonomic structures

  • Logical inference and rule evaluation

  • Contextual filtering, summarisation and ranking

  • Seamless fusion of vector embeddings and knowledge graph structures

The hypergraph also stores transformation metadata, which governs how information is aggregated, composed or filtered for explanation and reasoning purposes. ESMs use hypergraphs to:

  • Summarise and filter information selectively

  • Group related information

  • Rank objects
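A hedged sketch of these three operations over raw results, using made-up symbols and importance scores:

# ESM-style post-processing over hypergraph contents (illustrative sketch):
# filter, group and rank raw results before presenting them to a user.
results = [
    {"symbol": "Age", "group": "demographics", "importance": 0.30},
    {"symbol": "Income", "group": "financials", "importance": 0.45},
    {"symbol": "Gender", "group": "demographics", "importance": 0.02},
]

# Filter: drop items below a relevance cut-off chosen by the ESM.
relevant = [r for r in results if r["importance"] >= 0.05]

# Group: collect related information under shared labels.
groups = {}
for r in relevant:
    groups.setdefault(r["group"], []).append(r["symbol"])

# Rank: order what remains by importance for presentation.
ranked = sorted(relevant, key=lambda r: r["importance"], reverse=True)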

Key concepts and terminology

The following table defines the key concepts within UMNAI’s neuro-symbolic Hybrid Intelligence and relates them to other familiar systems:

Named Reference
Description: Unique label representing an object, feature, interaction or model component.
Related concepts: Feature names (datasets), variable labels (code), field names (databases).

Symbolic Kernel
Description: Statistical distribution segments with symbolic labels, often associated with interpretable partitions or kernel functions. One of the foundational building blocks of symbolic representation.
Related concepts: SVM kernels and CNN filter functions.

Symbolic Variable
Description: An identifiable object with a named reference that can participate in relationships or groupings, often within groups or sets in a hierarchy.
Related concepts: Analogous to a variable or object in programming; variables are units of data storage that can be assigned a value that can be read back.

Symbolic Concept
Description: The description that defines a class or group of symbols, as opposed to the group itself; concepts are not instances. For example, there may be variables representing various loans, but the concept of a “Loan”, which describes the characteristics of a loan, is not itself a loan. Concepts can have relationships with each other and, like symbolic variables, can be grouped into groups or sets.
Related concepts: Loosely corresponds to a Class in software development; segments or groups in data analytics are also similar. Symbolic concepts correspond to places or nodes within a taxonomy.

Non-Symbolic Variable
Description: Variables with no associated semantics, i.e. lacking named references or contextual meaning. For example, “Occupation x Age” has meaning and can be a named symbolic variable, while “Weight 12334” does not. Deep Learning models are typically implemented using non-symbolic variables.
Related concepts: Raw vectors, unnamed tensors.

Data types, semantics and human-centric design

Symbolic systems incorporate native representations of data modalities, recognising the values of data objects and also their types, structures and semantic roles. This contrasts with neural networks, which typically treat all inputs as undifferentiated numerical tensors, lacking inherent awareness of data type or modality.

In symbolic systems, a named reference corresponds not just to a value, but to an object whose structure reflects its modality. For example:

  • A scalar is modelled as a single numeric value.

  • An array corresponds to a one- or two-dimensional vector or matrix.

  • A colour image is typically represented as three two-dimensional arrays of integers, one each for the red, green and blue channels.

  • A 3D structure, such as a point cloud, is encoded as a three-dimensional depth map capturing spatial relationships in the physical world.

Hybrid Intelligence incorporates explicit support for data types, units and dimensions, much like modern programming languages. This facilitates structured reasoning over diverse objects and enables the definition of goals and objectives that span multiple modalities. By embedding modality-aware representations into its symbolic and learning components, Hybrid Intelligence supports richer, more semantically coherent reasoning and decision-making across complex, mixed-modal tasks.
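As an illustrative sketch, a modality-aware symbolic variable could carry its type, units and dimensions alongside its value; the TypedSymbol class and its fields are hypothetical, not UMNAI’s actual representation:

# Modality-aware symbolic variables (illustrative sketch).
# Each named reference carries its data type, units and dimensions,
# not just a raw numeric value.
from dataclasses import dataclass
from typing import Any

@dataclass
class TypedSymbol:
    name: str
    value: Any
    dtype: str        # e.g. "scalar", "array", "image", "point_cloud"
    units: str = ""   # e.g. "years", "EUR"
    shape: tuple = ()

age = TypedSymbol("Age", 42, dtype="scalar", units="years")
# A colour image: three 2-D channels (red, green, blue).
scan = TypedSymbol("ChestImage", None, dtype="image", shape=(3, 512, 512))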

Conclusion: closer to human reasoning

Neuro-symbolic AI, as implemented in Hybrid Intelligence, represents a substantive advancement in AI system design. By integrating explainable neural learning with structured symbolic reasoning, it provides a framework that is both adaptive and interpretable. The system supports introspection, human collaboration, symbolic rule enforcement and structured explanation. This enables AI systems that are not only performant but also traceable, editable and semantically coherent.

Part of the intuitive appeal of this approach lies in its alignment with how humans reason, explain and represent knowledge. Hybrid Intelligence systems reflect key characteristics of human cognition including compositionality, symbolic abstraction and explanatory structure—bringing them one step closer to the way people think and communicate. This is in contrast to purely statistical or opaque machine learning models.

By bridging data-driven learning with structured reasoning, Hybrid Intelligence sets the stage for AI systems that are not only more intelligent, but also more accountable, adaptable and aligned with real-world understanding. It moves beyond prediction to comprehension and in doing so, points towards a more usable and responsible future for artificial intelligence. 
