Research Interests
I am internationally recognized as one of the experts on the computational complexity aspects of Bayesian networks, which was also the topic of my PhD research. After my PhD I specialized in the complexity of approximating the most probable explanation for observed variables in such networks. I proposed a new heuristic, the so-called Most Frugal Explanation, for this computationally intractable problem. In addition, I proposed new parameterized complexity classes that capture the notion of efficient randomized computation. In contrast to the traditional parameterized complexity class FPT, these new classes parameterize the probability of answering correctly, rather than computation time.
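
To give a flavour of the problem, the sketch below computes a most probable explanation by brute-force enumeration in a toy two-variable network; the network, its variable names, and its probabilities are made up for illustration and do not come from my published work.

```python
# Toy Bayesian network Disease -> Symptom (illustrative numbers only).
p_disease = {"present": 0.1, "absent": 0.9}
p_symptom_given_disease = {
    ("yes", "present"): 0.8, ("no", "present"): 0.2,
    ("yes", "absent"): 0.1, ("no", "absent"): 0.9,
}

def joint(disease, symptom):
    """Joint probability of one full assignment of the network."""
    return p_disease[disease] * p_symptom_given_disease[(symptom, disease)]

def most_probable_explanation(observed_symptom):
    """Brute-force MPE: enumerate all assignments to the unobserved
    variable(s) and keep the one with the highest joint probability.
    The enumeration is exponential in the number of unobserved
    variables, which is why the problem is intractable in general."""
    best, best_p = None, -1.0
    for disease in p_disease:
        p = joint(disease, observed_symptom)
        if p > best_p:
            best, best_p = disease, p
    return best, best_p

print(most_probable_explanation("yes"))  # best explanation: 'absent', joint probability ~0.09
```

Brute force obviously does not scale; heuristics such as the Most Frugal Explanation are designed to avoid exhaustive enumeration over all unobserved variables.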

In recent years I have become interested in (resource-bounded) probabilistic inference in the brain, in particular in the context of the "Predictive Processing" account in neuroscience. Together with researchers at the Donders Institute for Brain, Cognition, and Behaviour, I contributed computational models and mathematical formalizations to this account. In particular, I identified the level of detail of predictions, and the trade-off between making precise predictions and making predictions that are likely to be correct, as crucial aspects of the theory. I also contributed computational models of various mechanisms of prediction error minimization and studied the computational complexity of sub-processes in the Predictive Processing account.
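
One way to make this trade-off concrete is sketched below, with made-up numbers and under the assumption that prediction error is quantified as the Kullback-Leibler divergence between the observed and the predicted distribution: the more fine-grained the state space over which a prediction is made, the more precise the prediction can be, but the easier it is to be wrong.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    given as lists of probabilities over the same outcomes."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Coarse prediction: two broad outcome categories.
predicted_coarse = [0.7, 0.3]
observed_coarse  = [0.6, 0.4]

# Fine-grained prediction: the same outcomes split into four sub-categories.
# More detail allows more precise predictions, but also more ways to be wrong.
predicted_fine = [0.5, 0.2, 0.2, 0.1]
observed_fine  = [0.2, 0.4, 0.3, 0.1]

print(kl_divergence(observed_coarse, predicted_coarse))  # small prediction error (~0.02)
print(kl_divergence(observed_fine, predicted_fine))      # larger prediction error (~0.22)
```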

My current research focuses both on conceptual aspects of Predictive Processing, such as the question of how generative models are developed in infancy, and on computational aspects, such as investigating how the proposed algorithms for these computations can be rendered tractable. I am also more generally interested in resource-bounded approximate computation in the brain, in particular in computational (in-)tractability relative to the brain's resources, such as in neuromorphic hardware. I use a variety of research methods, ranging from computational formal modeling and mathematical analysis to (developmental) robotics.

NALCA project (2018-2021; master internships and research projects in collaboration with Intel)
In traditional (Von Neumann) computer architectures, computation and memory are physically separated by a (relatively slow) data bus, requiring excessive data transfer with every instruction cycle and, as a consequence, high energy consumption. This has accelerated the development of so-called neuromorphic architectures that, inspired by the structure of the brain, co-locate computation and memory in artificial (spiking) neural networks. This allows for a new way of thinking about algorithms and data, and for designing energy-efficient algorithms that encode information in time-dependent spiking behaviour. Theory, however, is seriously lagging behind these hardware innovations: we do not yet fully understand and utilize the potential and limitations of these new architectures. In this research project we lay the foundations for an urgently needed computational complexity framework for neuromorphic hardware, focusing on energy as a vital resource. Foundational complexity-theoretic work is augmented by a parallel project, supported by Intel, in which we translate abstract algorithms on generic computing devices to actual implementations on Intel’s new Loihi architecture, allowing us to validate and refine the theoretical models. Together, the theoretical and practical research will lay a firm foundation for the further development of energy-efficient computing and help reduce carbon footprints.
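
As a minimal sketch of the kind of abstraction this calls for (illustrative only; not the formal model developed in the project, and not code for Loihi), consider a single leaky integrate-and-fire neuron in which the number of spikes serves as a crude proxy for energy use:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, steps=50):
    """Simulate a single leaky integrate-and-fire neuron.
    Returns the spike times and the spike count, which is used here
    as a crude stand-in for energy consumption: in neuromorphic
    hardware, energy is spent mainly when spikes are generated and
    communicated, not per clock cycle."""
    potential = 0.0
    spikes = []
    for t in range(steps):
        potential = leak * potential + input_current
        if potential >= threshold:
            spikes.append(t)    # emit a spike
            potential = 0.0     # reset the membrane potential
    return spikes, len(spikes)

spikes, energy_proxy = simulate_lif(input_current=0.3)
print(f"{energy_proxy} spikes in 50 time steps, first spikes at {spikes[:3]}")
```

In such a setting, algorithm design shifts from counting instructions to reasoning about how much spiking (and hence energy) a computation requires.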

This project is funded by an Intel Neuromorphic Research Community grant awarded to Johan Kwisthout (applicant).

PP-Dev project (2017-2021; PhD candidate: Danaja Rutar; co-supervision with Sabine Hunnius)
The Predictive Processing account in neuroscience is becoming increasingly popular as a unifying explanation of the brain's modus operandi. Despite its empirical and theoretical successes, the Predictive Processing account is still lacking one key ingredient: a computational explanation of how the generative models (that allow for making predictions) are formed, shaped, and adapted to reflect new experiences in the world. What is studied in Predictive Processing is typically the adult brain, not the brain under development in infancy. How prediction errors shape the generative models that are assumed in this account is currently a major gap in our knowledge.

In this project we will address this 'developability' issue and provide insight into the computational mechanisms that allow for learning and development in Predictive Processing, using a combination of computational formal modeling, behavioral infant research, and developmental robotics. We will study when and how generative models are updated, revised, and refined; how these distinct mechanisms interact in development; and what mechanistic explanations can account for individual differences in infant learning strategies. This 'enrichment' of Predictive Processing will pave the way for a solid theoretical framework for infant research and will provide valuable insights into how learning might be facilitated.
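
As a purely illustrative sketch of the simplest sense in which a generative model can be "updated" (an assumed example; the project's distinction between updating, revising, and refining models goes well beyond this), consider a categorical prediction maintained as Dirichlet pseudo-counts:

```python
def update_model(pseudo_counts, observation):
    """Update a categorical generative model represented by Dirichlet
    pseudo-counts: the observed outcome simply gains one count.
    Model *revision* (changing which outcomes exist at all) would
    instead change the keys of this dictionary."""
    counts = dict(pseudo_counts)
    counts[observation] = counts.get(observation, 0) + 1
    return counts

def predict(pseudo_counts):
    """Predictive distribution implied by the pseudo-counts."""
    total = sum(pseudo_counts.values())
    return {outcome: count / total for outcome, count in pseudo_counts.items()}

model = {"toy moves": 2, "toy stays": 2}        # uninformed prior
for _ in range(8):
    model = update_model(model, "toy moves")    # repeated observations
print(predict(model))  # prediction has shifted towards 'toy moves'
```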

This project is funded by an internal DCC PhD-grant awarded to Johan Kwisthout and Sabine Hunnius (joint applicants).

p-CABI project (2017-2021; PhD candidate: Nils Donselaar)
Computing posterior and marginal probabilities constitutes the backbone of almost all inferences in Bayesian networks. These computations are known to be intractable in general; moreover, it is known that approximating them is also NP-hard. These theoretical results are in stark contrast with practical experience: randomized approximation algorithms often (but not always!) manage to approximate large-scale inferences. Our understanding of when, why, and how randomized approximate inference can be tractable in Bayesian networks is currently still limited.
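
The contrast can be illustrated with a minimal sketch (the network and its numbers are made up for illustration): a marginal probability computed exactly by summing out the parent variable, next to a forward-sampling estimate of the same quantity.

```python
import random

# Toy network Rain -> WetGrass (illustrative numbers only).
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def exact_marginal_wet():
    """Exact marginal P(WetGrass) obtained by summing out Rain."""
    return P_RAIN * P_WET_GIVEN_RAIN[True] + (1 - P_RAIN) * P_WET_GIVEN_RAIN[False]

def sampled_marginal_wet(n_samples=10_000, seed=42):
    """Estimate P(WetGrass) by forward sampling. With enough samples the
    estimate is close to the exact value with high probability, but the
    number of samples needed can blow up when the probabilities of
    interest are very small."""
    rng = random.Random(seed)
    wet = 0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN
        wet += rng.random() < P_WET_GIVEN_RAIN[rain]
    return wet / n_samples

print(exact_marginal_wet())    # 0.26
print(sampled_marginal_wet())  # approximately 0.26
```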

This is all the more problematic because it hinders progress in other scientific disciplines, in particular cognitive science and neuroscience. The influential "Predictive Processing" account of cortical processes in neuroscience is explicitly built on the formal machinery of approximate Bayesian computations. There is an urgent need for a deeper theoretical understanding of these computations and of the constraints that need to be imposed on their inputs so as to ensure tractability, a crucial prerequisite for neuronal implementation. Recent advances in parameterized complexity theory, where a randomized analogue of fixed-parameter tractability has been proposed, now allow us to tackle these questions.

In this project our goal is to thoroughly study these computations from different perspectives: studying structural properties of fixed-error randomized tractability ("Fundamentals"); designing and analysing efficient and predictable approximation algorithms that allow trading off sampling and marginalizing in inference ("Engineering"); and studying the neuroscientific claim that predicting inputs based on generative models allows for efficient approximate computations by the brain ("Reverse Engineering").

This project is funded by an NWO exact sciences TOP grant awarded to Johan Kwisthout (612.001.601).
See also this article on the Radboud website.

Turing's razor project (2013-2017; PhD candidate: Maria Otworowska; co-supervision with Iris van Rooij)
According to a prominent hypothesis in cognitive neuroscience, known as the Bayesian Brain hypothesis, the brain's cognitive prowess can be explained by assuming that it can perform a variety of probabilistic (a.k.a. Bayesian) inferences. The Bayesian Brain hypothesis seems descriptively powerful enough to characterize the cognitive competence of the human brain. Yet it raises a theoretical challenge: explaining the brain's computational efficiency. The reason is that, according to current theory, even approximately computing Bayesian inferences can take years or centuries for the non-trivial situations that humans effortlessly deal with in their everyday life. In contrast, cognitive brain processing in practice occurs on a timescale of seconds or milliseconds. How can this paradox between theory and practice be resolved?
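
A back-of-the-envelope calculation, with made-up but representative numbers, shows where the "years or centuries" come from: brute-force enumeration over n binary variables requires 2^n evaluations, and worst-case approximation fares no better.

```python
# Illustrative arithmetic only: enumerating all joint assignments of
# 100 binary variables at a (generous) billion assignments per second.
assignments = 2 ** 100
per_second = 1e9
seconds_per_year = 60 * 60 * 24 * 365

years = assignments / per_second / seconds_per_year
print(f"{years:.2e} years")  # roughly 4e13 years, far beyond the age of the universe
```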

In this project, we set out to answer this question by developing a new theory that can explain how a brain can efficiently approximate Bayesian inference for a wide range of situations. Using an innovative approach that combines formal modeling with complexity analysis, we will identify parameters of a computational architecture that can make a Bayesian brain computationally efficient. The newly developed theory will be an important contribution to cognitive neuroscience as it will (a) deepen our understanding of the brain's remarkable cognitive inferential abilities as well as its intrinsic computational limitations; (b) serve to bring more computational rigor to philosophical discussions about the viability of Bayesian, optimal and rational explanations; and (c) yield novel predictions about the brain's architecture and cognitive performance that can guide future empirical research in cognitive neuroscience.

This project was funded by an internal DCC PhD-grant awarded to Iris van Rooij (first applicant) and Johan Kwisthout (co-applicant).