
Project

Artificial neural networks for sequential thinking

Despite impressive successes in pattern recognition, today's deep neural networks fall fundamentally short of biological brains: they cannot engage in explicit sequences of understandable logical thoughts. That ability is a key property of symbolic AI models, which in turn lack the strong pattern recognition capabilities of neural networks. The neural networks of the future need to combine both properties. Ultimately, humans will want to understand and steer such models' artificial thoughts, in particular through natural language. Yet there is a long way to go before this becomes possible. This project's goal is to design a new class of artificial neural network models that learn to make predictions by conducting a series of logical artificial thoughts, which are human-understandable and explicitly trainable. To design these models, we will capitalize on recent progress in the energy-based training of feedback neural networks. As a middle ground between the exact symbolic representations intended for reasoning and fully distributed neural representations, we will investigate the creation of locally distributed representations. The underlying idea is to induce a mapping between human-understandable concepts and local clusters of neural activity in the model. Activation trajectories over these meaningful areas within the neural network can then be probed and interpreted as the model's artificial thoughts.
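To make the notion of an energy-based feedback neural network concrete, the sketch below shows a continuous Hopfield-style network whose state relaxes by gradient descent on a scalar energy function. This is purely illustrative: the energy form, the tanh nonlinearity, and all parameter choices are assumptions for the example, not the project's actual model.

```python
import numpy as np

# Illustrative energy-based feedback network (assumed, not the project's method):
# symmetric recurrent weights W define the energy
#   E(s) = 0.5 * s.s - 0.5 * rho(s)^T W rho(s) - b.rho(s),
# and the network state s relaxes by descending the gradient of E.

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))
W = 0.5 * (W + W.T)          # symmetric weights give a well-defined energy
np.fill_diagonal(W, 0.0)     # no self-connections
b = np.zeros(n)              # biases

rho = np.tanh                                    # neuron nonlinearity
drho = lambda s: 1.0 - np.tanh(s) ** 2           # its derivative

def energy(s):
    r = rho(s)
    return 0.5 * s @ s - 0.5 * r @ W @ r - b @ r

def grad_energy(s):
    # dE/ds for symmetric W
    return s - drho(s) * (W @ rho(s) + b)

s = rng.normal(size=n)                 # random initial state
energies = [energy(s)]
for _ in range(200):                   # relaxation toward an energy minimum
    s -= 0.05 * grad_energy(s)
    energies.append(energy(s))

assert energies[-1] < energies[0]      # energy decreases during relaxation
```

In such models, the relaxation trajectory itself (the sequence of states `s`) is what could, in principle, be probed and interpreted, which is the kind of structure the project aims to align with human-understandable concepts.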

Date: 1 Jan 2023 → Today
Keywords: Energy-based neural networks training, Feedback neural networks, Neuro-symbolic AI
Disciplines: Knowledge representation and reasoning, Natural language processing, Neural, evolutionary and fuzzy computation