Neuro-Symbolic AI Framework

DeepPSL

Differentiable Probabilistic Soft Logic bridging Perception and Reasoning.

Scenarios

First-Order Logic Rules

\(\forall x, \text{Bird}(x) \rightarrow \text{Wings}(x)\)
\(\forall x, \text{Mammal}(x) \rightarrow \text{Fur}(x)\)
\(\forall x, \text{Fish}(x) \rightarrow \text{Aquatic}(x)\)
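Universally quantified rules like these are grounded by substituting concrete entities for \(x\). A minimal sketch of that step, using a hypothetical `(body, head)` pair encoding rather than DeepPSL's actual rule API:

```python
# Hypothetical encoding of the rules as (body_predicate, head_predicate) pairs.
RULES = [
    ("Bird",   "Wings"),
    ("Mammal", "Fur"),
    ("Fish",   "Aquatic"),
]

def ground(rules, entities):
    """Instantiate each universally quantified rule for every entity,
    producing grounded (body_atom, head_atom) implications."""
    return [(f"{body}({e})", f"{head}({e})")
            for body, head in rules
            for e in entities]

print(ground(RULES, ["tweety"]))
```

Each grounded pair later becomes one hinge-loss potential in the HL-MRF.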

1. Neural Perception

Soft truth values \(\mathbf{p} \in [0, 1]^k\)

\(\mathbf{p} = \sigma(\text{NN}_\theta(\mathbf{x}))\)

2. PSL Reasoning

Inferred atoms \(\mathbf{y}^* = \arg\min_{\mathbf{y}} E(\mathbf{y})\)

\(\phi(\mathbf{y}, \mathbf{p}) = \max(0, \ell(\mathbf{y}, \mathbf{p}))^2\)

Mathematical Foundation

Mapping symbolic rules to Hinge-Loss Markov Random Fields (HL-MRFs).

1. Neural Grounding

The input features \(\mathbf{x}\) are mapped to a set of observed atoms \(\mathbf{p}\) via a parameterized neural network \(f_\theta\).

$$ \mathbf{p} = \sigma(\text{NN}_\theta(\mathbf{x})) $$

Here \(\theta\) are the learned weights, typically optimized end-to-end through the logic solver.
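The perception step can be sketched in a few lines. Assume, for illustration, a single linear layer standing in for \(\text{NN}_\theta\); real perception models are of course deeper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neural_grounding(x, W, b):
    """Map input features x to soft truth values p = sigma(NN_theta(x)).
    A single linear layer stands in for NN_theta in this sketch."""
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input features
W = rng.normal(size=(3, 4))   # theta: weights for k = 3 predicates
b = np.zeros(3)

p = neural_grounding(x, W, b)  # soft truth values in [0, 1]^3
print(p)
```

The sigmoid guarantees \(\mathbf{p} \in [0, 1]^k\), so the outputs can be consumed directly as observed atoms by the PSL layer.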

2. Łukasiewicz Relaxation

Logical rules are relaxed into continuous, differentiable constraints. Under the Łukasiewicz relaxation, an implication \(A \rightarrow B\) is satisfied exactly when \(1 - p_{\text{body}} + y_{\text{head}} \ge 1\), i.e. when the head is at least as true as the body. Its distance to satisfaction is the linear function:

$$ \ell_i(\mathbf{y}, \mathbf{p}) = p_{\text{body}} - y_{\text{head}} $$

The violation \(\phi_i\) is then the squared distance from the satisfied manifold.
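The distance to satisfaction and its squared hinge are simple enough to write out directly; the scalar arguments here are an illustrative simplification of the vector form above:

```python
def implication_distance(p_body, y_head):
    """Lukasiewicz distance to satisfaction for body -> head:
    zero when y_head >= p_body, linear in the shortfall otherwise."""
    return max(0.0, p_body - y_head)

def phi(p_body, y_head):
    """Squared hinge potential: phi_i = max(0, l_i)^2."""
    return implication_distance(p_body, y_head) ** 2

print(phi(0.9, 0.4))  # body mostly true, head weak: positive penalty
print(phi(0.3, 0.8))  # head exceeds body: rule satisfied, zero penalty
```

Squaring the hinge keeps the potential differentiable at the satisfaction boundary, which is what lets gradients flow from the solver back into \(\theta\).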

Global MAP Inference

DeepPSL finds the Maximum A Posteriori (MAP) state of the joint distribution by solving a constrained Quadratic Program.

$$ \mathbf{y}^* = \arg\min_{\mathbf{y} \in [0,1]^n} \sum_{i=1}^m w_i \max(0, A_y^{(i)} \mathbf{y} + A_p^{(i)} \mathbf{p} + b^{(i)})^2 + \frac{\lambda}{2} \|\mathbf{y}\|^2 $$
The three terms of the objective are:

Weighted violations: each squared hinge is scaled by its rule weight \(w_i\).
Linear grounding: \(A_y^{(i)} \mathbf{y} + A_p^{(i)} \mathbf{p} + b^{(i)}\) encodes relaxed rule \(i\) over the inferred atoms \(\mathbf{y}\) and perceived atoms \(\mathbf{p}\).
Regularization: \(\frac{\lambda}{2} \|\mathbf{y}\|^2\) keeps the objective strongly convex.
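Because the objective is a convex QP over a box, it can be minimized by projected gradient descent. Production PSL solvers use ADMM; the toy one-rule problem, step size, and iteration count below are assumptions for illustration:

```python
import numpy as np

def map_inference(A_y, A_p, b, w, p, lam=0.1, lr=0.05, steps=500):
    """Minimize sum_i w_i * max(0, A_y[i] @ y + A_p[i] @ p + b[i])^2
    + (lam/2) * ||y||^2 over y in [0, 1]^n by projected gradient descent."""
    n = A_y.shape[1]
    y = np.full(n, 0.5)                        # start at maximum uncertainty
    for _ in range(steps):
        r = A_y @ y + A_p @ p + b              # linear groundings, one per rule
        viol = np.maximum(0.0, r)              # hinge: only active violations
        grad = 2.0 * A_y.T @ (w * viol) + lam * y
        y = np.clip(y - lr * grad, 0.0, 1.0)   # project back onto the box
    return y

# One grounded rule Bird(x) -> Wings(x): violation = p_bird - y_wings
A_y = np.array([[-1.0]])   # coefficient on the inferred atom y_wings
A_p = np.array([[1.0]])    # coefficient on the observed atom p_bird
b   = np.array([0.0])
w   = np.array([1.0])

y_star = map_inference(A_y, A_p, b, w, p=np.array([0.9]))
print(y_star)  # approaches 6/7 ~ 0.857: rule satisfaction vs. the regularizer
```

With \(p_{\text{bird}} = 0.9\), the solver pulls \(y_{\text{wings}}\) up toward the body's truth value, stopping slightly short where the hinge gradient balances the \(\ell_2\) penalty.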