Home
I created this website to give a simple overview of my research projects.

Contact: You may reach me by emailing lbogaardt@gmail.com.
Research
The following are my articles, published or currently in preparation for publication.
A Model of Individual BMI Trajectories
L. Bogaardt, A. van Giessen, H.S.J. Picavet and H.C. Boshuizen. "A Model of Individual BMI Trajectories". Mathematical Medicine and Biology, 2024-01-02.

Abstract: A risk factor model of BMI is an important building block of health simulations aimed at estimating government policy effects with regard to overweight and obesity. We created a model which generates representative population level distributions and which also mimics realistic BMI trajectories at an individual level so that policies aimed at individuals can be simulated. The model is constructed by combining several datasets. First, the population level distribution is extracted from a large, cross-sectional dataset. The trend in this distribution is estimated from historical data. In addition, longitudinal data is used to model how individuals move along typical trajectories over time. The model faithfully describes the population level distribution of BMI, stratified by sex, level of education and age. It is able to generate life course trajectories for individuals which seem plausible but it does not capture extreme fluctuations, such as rapid weight loss.
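
As a rough sketch of the general idea behind such a risk factor model (not the method implemented in the article), the Python snippet below lets each individual follow an autocorrelated process on the standard-normal scale and maps it through a hypothetical age-specific quantile function, so that trajectories are smooth at the individual level while a chosen population-level distribution is reproduced. The log-normal marginal and all parameter values are illustrative assumptions.

# Toy sketch (not the paper's model): individuals follow an AR(1) process on the
# standard-normal scale, which is mapped through an age-specific quantile function
# so that the assumed population-level BMI distribution is reproduced at every age.
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(0)

def bmi_quantile(z, age):
    """Hypothetical age-specific marginal: log-normal BMI whose median rises with age."""
    mu = np.log(22 + 0.08 * age)   # median BMI drifting upward with age (assumption)
    sigma = 0.15                   # spread on the log scale (assumption)
    return lognorm.ppf(norm.cdf(z), s=sigma, scale=np.exp(mu))

def simulate_trajectory(ages, rho=0.95):
    """AR(1) on the z-scale keeps individuals on smooth, rank-preserving tracks."""
    z = rng.standard_normal()
    trajectory = []
    for age in ages:
        trajectory.append(bmi_quantile(z, age))
        z = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal()
    return np.array(trajectory)

ages = np.arange(20, 70)
print(np.round(simulate_trajectory(ages)[:5], 1))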

Link to document: PDF
Link to supplementary material: Directory
Dataset Reduction Techniques to Speed Up SVD Analyses on Big Geo-Datasets
L. Bogaardt, R. Goncalves, R. Zurita-Milla and E. Izquierdo-Verdiguier. "Dataset Reduction Techniques to Speed Up SVD Analyses on Big Geo-Datasets". International Journal of Geo-Information, 2019-01-26; 8:1--13. doi:10.3390/ijgi8020055.

Abstract: The Singular Value Decomposition (SVD) is a mathematical procedure with multiple applications in the geosciences. For instance, it is used in dimensionality reduction and as a support operator for various analytical tasks applicable to spatio-temporal data. Performing SVD analyses on large datasets, however, can be computationally costly, time-consuming, and sometimes practically infeasible. Fortunately, techniques exist to arrive at the same output, or at a close approximation, with far less effort. This article examines several such techniques in relation to the inherent scale of the structure within the data. When the values of a dataset vary slowly, e.g., in a spatial field of temperature over a country, there is autocorrelation and the field contains large scale structure. Datasets do not need a high resolution to describe such fields and their analysis can benefit from alternative SVD techniques based on rank deficiency, coarsening, or matrix factorization approaches. We use both simulated Gaussian Random Fields with various levels of autocorrelation and real-world geospatial datasets to illustrate our study while examining the accuracy of various SVD techniques. As the main result, this article provides researchers with a decision tree indicating which technique to use when and predicting the resulting level of accuracy based on the dataset’s structure scale.
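
One of the reduction strategies examined in the article is coarsening. As a toy illustration (not the article's code), the Python snippet below block-averages a smooth, autocorrelated field before taking the SVD and compares the leading singular values with those of the full matrix; the synthetic field, block size and rescaling are illustrative assumptions.

# Toy illustration of coarsening before an SVD: for a smooth field, the leading
# singular values of the block-averaged matrix closely track those of the full matrix.
import numpy as np

rng = np.random.default_rng(1)

# Smooth 'spatio-temporal' field: a few large-scale patterns plus weak noise.
t = np.linspace(0, 1, 400)[:, None]    # time axis
x = np.linspace(0, 1, 600)[None, :]    # space axis
field = (np.sin(2 * np.pi * t) @ np.cos(2 * np.pi * x)
         + 0.5 * (t ** 2) @ (1 - x)
         + 0.01 * rng.standard_normal((400, 600)))

def coarsen(a, f):
    """Average f-by-f blocks (assumes the shape is divisible by f)."""
    n, m = a.shape
    return a.reshape(n // f, f, m // f, f).mean(axis=(1, 3))

s_full = np.linalg.svd(field, compute_uv=False)[:3]
# Multiply by the block size so magnitudes are comparable (approximate for smooth fields).
s_coarse = np.linalg.svd(coarsen(field, 4), compute_uv=False)[:3] * 4
print(np.round(s_full, 2), np.round(s_coarse, 2))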

Link to document: PDF
Link to supplementary material: Directory
Estimating Subgraph Generation Models to Understand Large Network Formation
L. Bogaardt and F. W. Takes. "Estimating Subgraph Generation Models to Understand Large Network Formation". 2018 IEEE 14th International Conference on e-Science, 2018-10-29; 375-376. doi:10.1109/eScience.2018.00106.

Abstract: Recently, a new network formation model was proposed: the Subgraph Generation Model (SUGM). Our research looks into a method for estimating the parameters of this model from the subgraph census.
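
As a rough, hypothetical illustration of working from the subgraph census (not the estimator developed in the paper), the Python snippet below plants links and triangles independently in a toy graph and backs out naive moment-based rates from the observed counts; the parameter values are arbitrary and the naive estimates ignore incidentally formed subgraphs.

# Toy subgraph-census illustration (not the paper's estimator): generate a SUGM-style
# graph by planting links and triangles independently, then read off naive rates.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
n, p_link, p_tri = 200, 0.01, 0.0002   # toy parameters (assumptions)

G = nx.Graph()
G.add_nodes_from(range(n))
for i, j in itertools.combinations(range(n), 2):     # independently formed links
    if rng.random() < p_link:
        G.add_edge(i, j)
for i, j, k in itertools.combinations(range(n), 3):  # independently planted triangles
    if rng.random() < p_tri:
        G.add_edges_from([(i, j), (j, k), (i, k)])

edges = G.number_of_edges()
triangles = sum(nx.triangles(G).values()) // 3
print("edges:", edges, "triangles:", triangles)
# Naive census-based rates (biased: they ignore subgraphs generated incidentally).
print("p_link ~", edges / (n * (n - 1) / 2))
print("p_tri  ~", triangles / (n * (n - 1) * (n - 2) / 6))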

Link to document: PDF
Link to supplementary material: Directory
Amplifiers and the Origin of Animal Signals
L. Bogaardt and R.A. Johnstone. "Amplifiers and the Origin of Animal Signals". Proceedings of the Royal Society B, 2016-06-08; 283:1--6. doi:10.1098/rspb.2016.0324.

Abstract: In 1989, Hasson introduced the concept of an 'amplifier' within animal communication. This display reduces errors in the assessment of traits for which there is direct selection and renders differences in quality among animals more obvious. Amplifiers can evolve to fixation via the benefit they confer on high quality animals. However, they also impose a cost on low quality animals by revealing their lower quality, potentially leading them to refrain from amplifying. Hence, it was suggested that, if the level of amplification correlates with quality, direct choice for the amplifying display might emerge. Using the framework of signal detection theory, this article shows that, if the use of an amplifier is observable, direct choice for the amplifying display can indeed evolve. Consequently, low quality animals may choose to amplify to some extent as well, even though this reveals their lower quality. In effect, the amplifier evolves to become a signal in its own right. We show that, since amplifiers can evolve without direct female choice and are likely to become correlated with male quality, selection for quality-dependent amplification provides a simple explanation for the origin of reliable signals in the absence of pre-existing preferences.
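
To give a flavour of the signal detection framing (an illustrative toy, not the model analysed in the paper), the Python snippet below treats perceived quality as true quality plus Gaussian perceptual noise and models an amplifier as a reduction in the noise standard deviation; the threshold, qualities and noise levels are arbitrary assumptions.

# Toy signal-detection illustration: a receiver accepts a male if his perceived quality
# (true quality plus Gaussian noise) exceeds a threshold; the amplifier shrinks the noise.
from scipy.stats import norm

threshold = 0.0
sigma_plain, sigma_amplified = 1.0, 0.4   # noise without and with the amplifier (assumption)

def p_accept(quality, sigma):
    """Probability that quality + N(0, sigma^2) exceeds the acceptance threshold."""
    return 1 - norm.cdf(threshold, loc=quality, scale=sigma)

for quality in (+0.5, -0.5):              # a high-quality and a low-quality male
    print(quality,
          round(p_accept(quality, sigma_plain), 3),
          round(p_accept(quality, sigma_amplified), 3))
# The amplifier raises the acceptance probability of the high-quality male and lowers
# that of the low-quality male, which is the trade-off described in the abstract.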

Related work:
The Evolution of Signals and Amplifiers

Link to document: PDF
Link to supplementary material: Directory
Other Work
The following are unpublished research essays written during my studies.
The Evolution of Signals and Amplifiers
M.Phil. Thesis, 30-09-2014: "The Evolution of Signals and Amplifiers".

During my Master's programme at the University of Cambridge, in the UK, I wrote my Master's Thesis in Zoology on a type of animal communication. In particular, I modelled the evolution of 'amplifiers'. This research paper was supervised by Prof. Dr. Rufus Johnstone. In nature, information is often transferred between animals via displays. An amplifier is a display which reduces errors in perception of other characteristics for which there is direct selection. I found that, via its benefit to high quality animals, the amplifier can evolve to fixation and that direct choice for the amplifier can emerge if the amplifier is observable. Consequently, low quality animals may be seen to make use of the display as well, even though it amplifies their lower quality. I used both evolutionary game theory and signal detection theory to model this behaviour. I also extended signal detection theory to two dimensions and used this model to analyse handicap signalling in a novel manner. I found that the signalling equilibrium was more stable when animals relied on multiple displays.

Abstract: In 1989, Hasson published an article which introduced the concept of an amplifier within animal communication. This display would reduce errors in perception of other characteristics for which there was direct selection. Via its benefit to high quality animals, the amplifier can evolve to fixation and it was suggested that direct choice for the amplifier could emerge. This thesis models the evolution of amplifiers, showing that, if the use of an amplifying display is observable, direct choice will indeed evolve. Consequently, low quality animals may be seen to make use of the display as well, even though it amplifies their lower quality. The two modelling frameworks used in this thesis, evolutionary game theory and signal detection theory, also show how amplifiers can lead to preferences for other types of displays, such as handicap signals, and are used to model the dynamics of such displays. Finally, a simple experiment is conducted to test whether a specific pattern functions as an amplifier.

Related work:
Amplifiers and the Origin of Animal Signals

Link to document: PDF
Link to supplementary material: Directory
Kaluza-Klein Theory in Quantum Gravity
M.Sc. Thesis, 20-09-2013: "Kaluza-Klein Theory in Quantum Gravity".

During my Master's programme at Imperial College London, in the UK, I wrote my Master's Thesis in Theoretical Physics on a specific approach to Quantum Gravity, called causal-set theory. This approach assumes spacetime is made of discrete points with a particular causal structure. This research paper was supervised by Dr. Leron Borsten. In my paper, I discuss a method which aims to combine causal-set theory with Kaluza-Klein theory. Kaluza-Klein theory suggests a way of unifying electromagnetism and gravity by assuming a fifth dimension. I start by reviewing the relevant techniques within each of the theories. In particular, I look at the concept of coarse-graining and at two definitions of the Ricci scalar in a causal-set. Using this knowledge, I propose a method which defines the four-dimensional Ricci scalar, the action of a gauge field and that of a scalar field by starting with a five-dimensional causal-set. I conclude that the standard Kaluza-Klein process can also be applied to a causal-set, which allows one to unify electromagnetism with gravity within this approach to Quantum Gravity.

Abstract: In this paper, we discuss a method which aims to combine Kaluza-Klein theory with causal-set theory. We start by reviewing various technologies within each of the theories. In particular, we look at the concept of coarse-graining and at two definitions of the Ricci scalar in a causal-set. Using this knowledge, the proposed method manages to define the four-dimensional Ricci scalar, the action of a gauge field and a scalar field by starting with a five-dimensional causal-set. We conclude that the standard Kaluza-Klein process, which unifies electromagnetism with gravity, can also be applied to a causal-set.
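
For readers unfamiliar with the standard Kaluza-Klein decomposition, the textbook form below (one common convention, not quoted from the thesis) shows how a five-dimensional metric built from the four-dimensional metric g_{\mu\nu}, a gauge field A_\mu and a scalar \phi yields a five-dimensional Ricci scalar that splits into the Einstein, Maxwell and scalar pieces:

% Standard Kaluza-Klein ansatz in one common convention (illustrative, not from the thesis)
\hat{g}_{AB} =
\begin{pmatrix}
  g_{\mu\nu} + \phi^{2} A_{\mu} A_{\nu} & \phi^{2} A_{\mu} \\
  \phi^{2} A_{\nu}                      & \phi^{2}
\end{pmatrix},
\qquad
\hat{R} = R - \frac{\phi^{2}}{4} F_{\mu\nu} F^{\mu\nu} - \frac{2}{\phi}\,\Box\phi,
\qquad
F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}.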

Link to document: PDF
Link to supplementary material: Directory
The Evolutionary Foundation of Probability Weighting and Hyperbolic Discounting
M.Sc. Thesis, 29-05-2012: "The Evolutionary Foundation of Probability Weighting and Hyperbolic Discounting and Their Intimate Connection".

During my Master's programme at Lund University, in Sweden, I wrote my Master's Thesis in Economics on the evolutionary foundation of two aspects of behavioural economics, and of Prospect Theory in particular: probability weighting and hyperbolic discounting. This research paper was supervised by Prof. Dr. Jerker Holm. In particular, I examined a model which showed how probability weighting may be evolutionarily advantageous. I extended this model by allowing for a richer variety of weighting and by combining different sub-games within a larger model. I found that the only type of weighting which survived the evolutionary process was the one which agreed with empirical data. I also suggested that there may be an intimate connection between probability weighting and hyperbolic discounting, as the two behaviours share many similarities. I examined how the model for probability weighting could be reinterpreted to include hyperbolic discounting and found that certain properties, such as the credible threat, lent themselves to this reinterpretation, while others, such as the symmetry between the two strategic choices in the game, did not.

Abstract: In this paper, we delve into the evolutionary foundation of both probability weighting and hyperbolic discounting. We argue that it is evolution which selects, from among various kinds of people, those with the optimal preference profile. As such, utility functions and behavioural biases can be explained from an evolutionary perspective. We also argue that there may be an intimate connection between hyperbolic discounting and probability weighting. This follows from the similarities between these two deviations from what normative economic theory would consider 'rational'. Following the suggestion of this intimate connection, we analyse a model which is capable of explaining why probability weighting may have been advantageous for our ancestors and we examine whether this model can be reinterpreted to also include hyperbolic discounting. This turns out to be difficult, however, as an analysis of the properties of our model makes clear.
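
As a concrete reference point (standard textbook functional forms, not necessarily the exact specification used in the thesis), the Python snippet below evaluates the Tversky-Kahneman probability weighting function and a hyperbolic discount function, illustrating the overweighting of small probabilities and the steep near-term discounting discussed above; the parameter values are conventional estimates used only for illustration.

# Standard textbook forms: an inverse-S probability weighting function and a
# hyperbolic discount function (illustrative parameters).
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992): overweights small, underweights large probabilities."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def discount(t, k=1.0):
    """Hyperbolic discounting: sharp near-term impatience, shallow decline later on."""
    return 1.0 / (1.0 + k * t)

for p in (0.01, 0.5, 0.99):
    print(f"w({p}) = {weight(p):.3f}")
for t in (0, 1, 5, 20):
    print(f"D({t}) = {discount(t):.3f}")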

Link to document: PDF
Link to supplementary material: Directory
Link to document [external]: Lund University
The Dynamics of Revelation
B.A. Thesis, 01-07-2011: "The Dynamics of Revelation".

During my Bachelor's programme at University College Utrecht, in the Netherlands, I wrote my Bachelor's Thesis in Social Science on a variation of the game-theoretical Signalling Game. This research paper was supervised by Dr. Kris de Jaegher. In particular, I examined mixed equilibria of the Signalling Game under the replicator dynamics and constructed a variation which reinterprets the game as one of revelation, or full disclosure. I found that, in the real world, where receivers possess an infinite strategy space, mixed equilibria evolve naturally. One criticism of this approach is that it relies heavily on pruning; in the revelation variant of the model, pruning is easier to justify, and its dynamics turned out to be simpler than those of costly signalling. These dynamics showed that the (Separating, Split) strategy profile was the most likely equilibrium. The success of this project relied on my mathematical skills and my spatial visualization ability, and I obtained my results using Mathematica.

Abstract: In this paper, we will examine the dynamics of Spence's model of costly signalling and will determine under what conditions mixed equilibria emerge. One important finding is that the way in which the pooling strategy is defined, whether its revenue is equal to the high or low one or somewhere in between, will influence whether we see any mixed equilibria within phase space. Another key point is that there are no fundamental differences between dynamics with and without mixed equilibria. In the latter case, these mixed equilibria are simply positioned outside of phase space and we can only observe a small part of a larger dynamic. Lastly, it is suggested that these mixed equilibria may naturally evolve if receivers are allowed to choose the value of their averaged revenue themselves within an infinite strategy space and are maximizing their pay-off. Having established these ideas about costly signalling, this paper goes on to examine a variation on the model. Revelation will be included, by which it is meant that senders will not have the option to signal, but must choose whether to fully reveal their type to the receiver or not. It is found that the most likely equilibrium in this case is one in which low quality senders do not reveal when there is a cost associated with this action and high quality senders do reveal their type. Receivers will award their revenues accordingly.
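
Since the thesis studies the Signalling Game under the replicator dynamics, a minimal generic sketch may help: the Python snippet below iterates the replicator equation for a made-up two-strategy payoff matrix (not the thesis's game), so that a strategy's population share grows whenever its payoff exceeds the population average.

# Minimal replicator-dynamics sketch with an arbitrary two-strategy payoff matrix.
import numpy as np

A = np.array([[3.0, 0.0],        # toy payoff matrix (assumption)
              [5.0, 1.0]])

x = np.array([0.9, 0.1])          # initial population shares
dt = 0.01
for _ in range(2000):
    f = A @ x                     # expected payoff of each strategy
    x = x + dt * x * (f - x @ f)  # replicator equation: x_i' = x_i (f_i - average payoff)
    x = x / x.sum()               # guard against numerical drift
print(np.round(x, 3))             # converges towards the dominant strategy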

Link to document: PDF
Link to supplementary material: Directory
The Path Integral Method to the Aharonov-Bohm Effect
B.Sc. Thesis, 02-06-2010: "The Path Integral Method to the Aharonov-Bohm Effect".

During my Bachelor's programme at University College Utrecht, in the Netherlands, I wrote my Bachelor's Thesis in Science on the Aharonov-Bohm effect. This review paper was supervised by Dr. Anton van de Ven. In particular, I looked at the solution for the wavefunction, calculated by means of the path integral formulation, for which an understanding of Electrodynamics, Topology and Lagrangian Mechanics was needed. My approach was to fuse two successful methods into a single, more intelligible calculation, and I found that the resulting interference pattern matched the prediction obtained by solving the Schrödinger equation. The path integral method allowed for a more intuitive derivation of the Aharonov-Bohm effect, and the problem of how to define the Hamiltonian was solved in a very natural way by summing over all homotopy classes.

Abstract: In this review of the Aharonov-Bohm effect, we look at the path integral method for solving the dynamics of the system. In particular, we examine the general form of the propagator and its relation to the multiply connected nature of the space. We determine the full propagator by summing over the partial propagators in each homotopy class. This result is compared with the solution to the Schrödinger equation as presented by Aharonov and Bohm themselves. We also evaluate the wave function for both cases and find that the predictions for the interference pattern match. The importance of the non-self-adjointness of the Hamiltonian is also made clear and an interpretation of the magnetic flux as an extension to the Hamiltonian is presented.
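
The homotopy-class decomposition referred to above has the following schematic textbook form (up to conventions and an overall reference phase; not quoted from the thesis), where K_n is the free propagator restricted to paths with winding number n around the solenoid and \Phi is the enclosed magnetic flux:

% Schematic textbook form of the Aharonov-Bohm propagator and phase shift (illustrative)
K(b,a) = \sum_{n \in \mathbb{Z}} e^{\, i n q \Phi / \hbar}\, K_{n}(b,a),
\qquad
\Delta\varphi = \frac{q}{\hbar} \oint \mathbf{A} \cdot d\mathbf{l} = \frac{q \Phi}{\hbar},

so the interference pattern is shifted by the phase \Delta\varphi even though the electron never enters the region of non-zero magnetic field.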

Link to document: PDF
Link to supplementary material: Directory
Complex Network Dynamics in Economics
B.A. Honours essay, 31-05-2010: "On Complex Network Dynamics in Economics or how Network Structures evolve within the Meso-level".

During my Bachelor's programme at University College Utrecht, in the Netherlands, I was selected for an interdisciplinary honours programme called Sirius, funded by the Dutch government. This programme allowed three other students and me to set up our own course, under the guidance of three professors. The focus of this course was Complexity Theory and Network Theory, for which an understanding of non-linear dynamics, chaos theory and agent-based modelling was needed. Part of the course consisted of guest lectures which we had organised ourselves. The other part consisted of our own research and group discussions, which culminated in my final paper on the evolution of network structures in Economics. This research paper was supervised by Prof. Dr. Koen Frenken. My findings suggested that complex networks can grow within the meso-level of an economy due to microeconomic behaviour and that there are several mechanisms, such as growth under a geographical bias, which may lead the economy to exhibit the 'small-world' property.

Abstract: There is a recurring problem in economics and the models it uses: real-world data are rarely in agreement with the predictions of economic theory. There might be a need for a new look at the economy, one which removes some of the unrealistic assumptions of traditional economics. This research focuses on the evolution of networks in economics. The network approach may provide an answer to some fundamental questions in economic theory. In this paper, it is suggested that the appropriate level of network analysis is the meso-level. Furthermore, it is shown that there are several different methods to model the growth in nodes and edges, such as preferential attachment or neighbour attachment. A suggestion is also made to include edge removal in these systems. The use of networks is justified by simple arguments; at the same time, it is shown that the complex behaviour of the economy can be explained using such a simple starting point.
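
As a generic illustration of the growth mechanisms mentioned above (not the essay's own model), the Python snippet below grows a network by preferential attachment with networkx and reports common structural indicators such as clustering, average path length and the largest hub degree; the graph size and attachment parameter are arbitrary.

# Generic preferential-attachment growth and a few structural indicators.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)   # preferential-attachment growth
print("clustering:", round(nx.average_clustering(G), 3))
print("avg shortest path:", round(nx.average_shortest_path_length(G), 2))
print("max degree:", max(dict(G.degree()).values()))  # a few highly connected hubs emerge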

Link to document: PDF
Link to supplementary material: Directory
About Me
I am currently a statistician and modeller at the National Institute for Public Health and the Environment (RIVM). I collaborate on multiple projects across various departments within RIVM.

More information can be found on my LinkedIn profile or on my Curriculum Vitae. You may also contact me by emailing lbogaardt@gmail.com.