
Researcher

Bert Verbruggen

  • Research expertise:

    Objective

    Machine learning has grown tremendously over the past decades, both as a research field and in its numerous applications. In recent years, artificial neural networks (ANNs) have taken an increasingly prominent role in these advances. Common applications of neural networks include pattern recognition and classification problems as well as natural language processing. Although these applications share common factors, their designs and architectures are very different.

    A very successful application of ANNs has been image recognition and, more broadly, pattern recognition within visual data. The construction of convolutional neural networks (CNNs) has been a major contribution to this end. In such architectures, the neurons within the network are configured as small windows passing over the whole image input step by step. These windows read a fraction of the image at a time and check for specific structures. As the complexity of these problems increases, the networks consist of more and more layers, giving rise to deep neural networks.
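The sliding-window operation described above can be sketched with a minimal convolution in plain NumPy. This is an illustrative toy (the edge-detecting kernel and image are invented examples, not taken from the research itself):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small window (kernel) over the image and take a
    weighted sum at each position -- the core CNN operation."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # the window's current view
            out[i, j] = np.sum(patch * kernel)  # check for one structure
    return out

# A vertical-edge detector applied to a toy 5x5 image whose left
# half is dark (0) and right half is bright (1).
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
print(conv2d_valid(image, kernel))  # strong response where the edge sits
```

In a trained CNN the kernel values are not hand-chosen like this; they are the learned parameters.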

    When such a network is learning how to perform its task, it adjusts its internal parameters to best fit the problem. With more layers, the number of parameters grows very large and the model becomes complex. Because of this layered complexity, the training of such networks is called deep learning.
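How quickly the parameter count grows with depth can be made concrete by counting the weights and biases of fully connected layers (the layer sizes below are arbitrary illustrative choices, not the networks studied here):

```python
def count_params(layer_sizes):
    """Weights plus biases of a fully connected network
    with the given layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

shallow = [784, 32, 10]           # one hidden layer
deep = [784, 256, 256, 256, 10]   # three wider hidden layers

print(count_params(shallow))  # 25450
print(count_params(deep))     # 335114
```

Adding layers and width multiplies the parameter count; modern deep networks reach millions or billions of such adjustable parameters.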

    By design, such a model has a highly nonlinear internal structure. This naturally poses challenges to understanding how the model progresses through its training phase, giving rise to the notion of operating a black-box system.

    Methodology

    Although the internal structure of these models is highly nonlinear, there are established theories for working with nonlinear behaviour. In major fields of mathematics, physics and biology, and even in the modelling of epidemic outbreaks such as the COVID-19 pandemic, nonlinear dynamics are key to understanding and predicting these phenomena.
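A textbook instance of such nonlinear behaviour is the logistic map, which already exhibits chaos and sensitive dependence on initial conditions. The sketch below is purely illustrative background, not part of the profiled research:

```python
def logistic_trajectory(r, x0, n):
    """Iterate the logistic map x -> r*x*(1-x), a classic
    nonlinear dynamical system."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# In the chaotic regime (r = 4), two trajectories starting a mere
# 1e-9 apart diverge to order-one differences within ~50 steps.
a = logistic_trajectory(4.0, 0.2, 50)
b = logistic_trajectory(4.0, 0.2 + 1e-9, 50)
print(max(abs(ai - bi) for ai, bi in zip(a, b)))
```

This sensitivity is exactly what makes the long-term behaviour of nonlinear systems hard to predict, and it is the kind of phenomenon chaos theory quantifies.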

    These theories and their foundations can give more insight into the learning phase of artificial neural networks. By modelling real-life examples of nonlinear concepts in deep learning models, I look for similarities in how information is propagated and handled internally. The changing dynamics within the inner structure of these networks can be studied and compared to known concepts in the field of nonlinear dynamics and chaos theory.
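The view of training itself as a dynamical system can be made concrete: gradient descent is an iterated map w_{t+1} = w_t - eta * grad L(w_t), so the sequence of weights forms a trajectory that can be recorded and analysed like any other nonlinear system. The one-parameter toy model below is an invented illustration of that idea, not the method of the research described here:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=100)
y = np.tanh(1.5 * x)              # toy data generated with true weight 1.5

def grad(w):
    """Gradient of the mean squared error of the model tanh(w * x)."""
    pred = np.tanh(w * x)
    return np.mean(2.0 * (pred - y) * (1.0 - pred**2) * x)

w, eta = 0.1, 0.2
trajectory = [w]                  # the weight trajectory is the "dynamics"
for _ in range(500):
    w = w - eta * grad(w)         # one step of the iterated map
    trajectory.append(w)

print(trajectory[-1])             # settles near the true weight 1.5
```

For a real deep network the trajectory lives in a space with millions of dimensions, which is where tools from nonlinear dynamics become relevant.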

    Relevance

    By gaining a better understanding of the learning process of artificial neural networks, we can apply the technology to further improve our daily lives. In addition, we can deepen our knowledge of the leading theories of nonlinear processes by applying them in the virtual environment of computer models and machine learning tools.

  • Keywords: Economics and applied economic sciences
  • Disciplines: Medical and health sciences
  • Users of research expertise:
