Differentiable Probabilistic Soft Logic: Bridging Perception and Reasoning.
Soft truth values \(\mathbf{p} \in [0, 1]^k\) replace discrete Boolean truth assignments.
Inferred atoms \(\mathbf{y}^* = \arg\min_{\mathbf{y}} E(\mathbf{y})\)
Mapping symbolic rules to Hinge-Loss Markov Random Fields (HL-MRFs).
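Concretely, an HL-MRF defines a density over the unobserved atoms \(\mathbf{y}\) given observations \(\mathbf{p}\) as a weighted sum of hinge-loss potentials (standard HL-MRF form; here \(\ell_i\) denotes a linear function of the atoms and \(w_i\) a non-negative rule weight):

\[
P(\mathbf{y} \mid \mathbf{p}) \propto \exp\big(-E(\mathbf{y})\big), \qquad E(\mathbf{y}) = \sum_i w_i\, \phi_i(\mathbf{y}, \mathbf{p}), \qquad \phi_i = \max\big(0, \ell_i(\mathbf{y}, \mathbf{p})\big)^{2}.
\]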
The input features \(\mathbf{x}\) are mapped to the soft truth values \(\mathbf{p} = f_\theta(\mathbf{x})\) of the observed atoms via a parameterized neural network \(f_\theta\), where \(\theta\) are weights learned end-to-end by backpropagating through the logic solver.
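A minimal sketch of the perception mapping, assuming a single affine layer with a sigmoid (the function name `f_theta` and all shapes are illustrative; in DeepPSL this would be a deep network):

```python
import numpy as np

def sigmoid(z):
    """Squash logits into [0, 1] so outputs are valid soft truth values."""
    return 1.0 / (1.0 + np.exp(-z))

def f_theta(x, W, b):
    """Hypothetical one-layer perception network: features x -> soft truth values p."""
    return sigmoid(x @ W + b)

# Toy example: 3 input features, 2 observed atoms.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(3, 2))   # the weights "theta", learned end-to-end
b = np.zeros(2)

p = f_theta(x, W, b)
assert p.shape == (2,) and np.all((p >= 0) & (p <= 1))
```

The only structural requirement is that the output lands in \([0, 1]^k\); any squashing nonlinearity on the final layer achieves this.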
Logical rules are relaxed into continuous, differentiable constraints via the Łukasiewicz logic. An implication \(A \rightarrow B\) is satisfied exactly when the linear inequality \(p_A - p_B \le 0\) holds.
The violation \(\phi_i = \max(0, p_A - p_B)^2\) is then the squared hinge distance from the satisfied region.
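The squared hinge violation of an implication is a one-liner; this sketch assumes the standard Łukasiewicz relaxation, under which \(A \rightarrow B\) is satisfied when \(p_A \le p_B\):

```python
def implication_violation(p_a, p_b):
    """Squared hinge distance to satisfaction of the rule A -> B.

    Under the Lukasiewicz relaxation the rule holds when p_a <= p_b,
    so the violation is max(0, p_a - p_b) ** 2.
    """
    return max(0.0, p_a - p_b) ** 2

# Rule satisfied: no penalty.
assert implication_violation(0.3, 0.9) == 0.0
# A is truer than B: quadratic penalty on the gap.
assert abs(implication_violation(0.8, 0.5) - 0.09) < 1e-9
```

The squaring keeps the potential differentiable at the boundary \(p_A = p_B\), which is what makes end-to-end gradient training through the solver possible.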
Because the squared hinge potentials make the energy piecewise quadratic, DeepPSL finds the Maximum A Posteriori (MAP) state of the joint distribution by solving a constrained Quadratic Program over \(\mathbf{y} \in [0, 1]^n\).
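Production PSL systems solve this QP with specialized methods such as ADMM; as an illustrative stand-in only, projected gradient descent on a toy one-rule energy shows the shape of MAP inference (the function `map_inference` and its hyperparameters are hypothetical):

```python
def map_inference(p_a, weight=1.0, lr=0.1, steps=500):
    """Toy MAP sketch: infer the single free atom y for the rule A -> Y.

    Energy E(y) = weight * max(0, p_a - y)^2, minimized over y in [0, 1]
    by projected gradient descent (not the solver real PSL uses).
    """
    y = 0.0
    for _ in range(steps):
        grad = -2.0 * weight * max(0.0, p_a - y)   # dE/dy
        y = min(1.0, max(0.0, y - lr * grad))      # project back onto [0, 1]
    return y

# With A observed near-true, inference pushes Y up until A -> Y is satisfied.
y_star = map_inference(p_a=0.9)
assert y_star >= 0.9 - 1e-6
```

The projection step enforces the box constraint \(y \in [0, 1]\); with multiple rules the gradient is simply the weighted sum of the individual hinge gradients.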