Thesis projects

I welcome students who want to do a thesis project with me as internal supervisor. Below you can find a number of tentative master projects that are closely related to the work in my research group and that can be tailored to your own interests. They range from conceptual to formal and from programming to experiments; some projects are suitable for students in Cognitive Neuroscience or Computer Science as well as Artificial Intelligence. Feel free to contact me if you want to brainstorm.

Open master projects

Neuromorphic Algorithm Design (AI or CS project)

Neuromorphic chips, such as Intel's new Loihi chip, are not based on the traditional von Neumann architecture but represent both data and algorithm in the spiking behavior of spiking neural networks. These new architectures allow for a totally different view on how to process information, e.g., encoding data in the temporal differences between spikes or in synaptic delays. We have remote access to Intel's Loihi chip to investigate the runtime behavior of such algorithms on this neuromorphic architecture. Keywords: neuromorphic computing, algorithm design and analysis, complexity. Extensions towards hardware design (e.g., FPGA) are possible. Prerequisite course: Neuromorphic Computing.

Neuromorphic Models of Computation (CS or possibly AI project)

In our recent work (Kwisthout and Donselaar, 2020) we have laid some first foundations for a computational complexity framework for neuromorphic architectures. Many open questions remain; for example, it has been claimed in the literature that neuromorphic architectures are Turing-complete, yet this appears to depend on assumptions about infinite spike delays or access to internal neuron states. In this project we would like to study such claims and, more generally, find out to what extent variations on the formal machine model determine them. Keywords: theoretical computer science, neuromorphic computing, computability and complexity. Prerequisite course: Neuromorphic Computing. A background in computability/complexity theory is highly desired.

Neuronally plausible implementation of level-of-detail modulation (AI or CNS project)

Recent conceptual studies of the predictive processing framework distinguished the precision (uncertainty) and the level-of-detail (granularity) of generative models and predictions (Kwisthout et al., 2017). It has been previously proposed that dopamine plays an important role in precision-weighting of prediction errors (Friston et al., 2012). Insights from the psychedelics literature suggest that serotonin might play a role in the modulation of the level-of-detail of predictions. We postulated (Haskes et al., 2017) that psychedelics (being partial serotonin agonists) lead to overly detailed predictions, i.e., a breaking up of established categories of prediction, by de-synchronizing ensembles of neurons. In this project we want to further investigate this idea and establish a neuronally plausible explanation of how level-of-detail modulation is implemented in the brain. Keywords: predictive processing, computational neuroscience, conceptual analysis and computer simulation.
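The distinction between precision and level-of-detail can be made concrete with a small sketch: think of level-of-detail as the granularity of the partition of the outcome space over which predictions are formed. The distributions and category names below are illustrative assumptions, not a neural model.

```python
# Toy illustration: a generative model predicts over a partition of the
# outcome space; a coarser partition (lower level of detail) merges
# probability mass into fewer, broader categories.
# All values and labels are illustrative assumptions.

def predict(dist, partition):
    """Aggregate a fine-grained distribution over the blocks of a partition."""
    return {block: sum(dist[s] for s in block) for block in partition}

fine = {"red-square": 0.5, "red-circle": 0.25,
        "blue-square": 0.125, "blue-circle": 0.125}

coarse = (("red-square", "red-circle"),      # one "red" category
          ("blue-square", "blue-circle"))    # one "blue" category
detailed = tuple((s,) for s in fine)         # every outcome its own category

assert predict(fine, coarse) == {coarse[0]: 0.75, coarse[1]: 0.25}
```

In these terms, the hypothesized effect of psychedelics (overly detailed predictions, broken-up categories) corresponds to shifting predictions from the `coarse` partition toward the `detailed` one; the project asks how such a shift could be implemented neuronally.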

Cognitive aspects of Most Frugal Explanations (AI or Psy project)

Most Frugal Explanation (MFE) is a heuristic approach to the computationally intractable Most Probable Explanation problem in Bayesian networks. In this project we want to compare this theory with cognitive theories on stereotyping, exemplars, and other cognitive approaches towards efficient decision making. In particular we want to study whether MFE can act as the underlying computational framework that supports cognitive theories based on such heuristics. Keywords: cognitive modeling, Bayesian networks, conceptual and computational analysis. Extensions towards experimentation, complexity analysis, etc. possible.
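The core idea behind MFE (designating only some intermediate variables as relevant, guessing the rest, and taking a majority vote) can be sketched in a few lines. The tiny unnormalized joint distribution and the relevant/irrelevant split below are illustrative assumptions, and this is a simplified caricature of the heuristic rather than a faithful implementation.

```python
# Toy sketch of the Most Frugal Explanation idea: marginalize only over
# the variables deemed relevant, fix the irrelevant ones to a sampled
# value, and let repeated votes decide the explanation.
# The joint distribution below is an illustrative assumption.

import random
from collections import Counter

def mfe(joint, hypothesis_vals, relevant_vals, irrelevant_vals, samples=101):
    """Majority vote: per vote, the irrelevant variable is guessed at
    random (not reasoned over); the relevant variable is summed out."""
    votes = Counter()
    for _ in range(samples):
        i = random.choice(irrelevant_vals)  # guess instead of marginalizing
        score = {h: sum(joint[(h, r, i)] for r in relevant_vals)
                 for h in hypothesis_vals}
        votes[max(score, key=score.get)] += 1
    return votes.most_common(1)[0][0]

# joint[(H, R, I)]: unnormalized weight over hypothesis H,
# relevant variable R, and irrelevant variable I.
joint = {(h, r, i): (2.0 if h == "h1" else 1.0) * (1.0 + 0.1 * r) * (1.0 + 0.01 * i)
         for h in ("h1", "h2") for r in (0, 1) for i in (0, 1)}

assert mfe(joint, ("h1", "h2"), (0, 1), (0, 1)) == "h1"
```

The cognitive question of this project is whether mechanisms like stereotyping or exemplar-based judgment can be understood as exactly this kind of move: ignoring (guessing) variables that rarely change the winning explanation.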

Benchmark experiments on Most Frugal Explanations (AI or CS project)

Most Frugal Explanation (MFE) is a heuristic approach to the computationally intractable Most Probable Explanation problem in Bayesian networks. In this project we experimentally compare this heuristic with state-of-the-art approximation algorithms on a number of benchmark problems to see whether insight into which variables are relevant can speed up MAP computations. Keywords: Bayesian networks, programming, experimental algorithm comparison. Prerequisite course: Theoretical Foundations for Cognitive Agents and/or Bayesian Networks. Knowledge of C++ is necessary!

Complexity of Bayesian inferences (AI or CS project)

I did my PhD research on the computational complexity of various problems in Bayesian networks, such as monotonicity, sensitivity and parameter tuning, and finding the k-th best explanation. In the p-CABI project we study the complexity of approximate inferences, in particular with respect to the predictive processing account. If you are interested in collaborating within this project and/or in studying the (parameterized) complexity of some computational problem in Bayesian networks, this might be an interesting thesis project for you. Keywords: Bayesian networks, approximate inference, (parameterized) computational complexity.