Title Promoter Affiliations Abstract "Influence of distinct process-induced (micro)structures on the in vitro starch digestibility of common beans: A kinetic approach" "Tara Grauwet" "Centre for Food and Microbial Technology" "Transitioning towards food systems favourable for both human health and environmental sustainability has recently been identified as an immediate challenge. In this context, a significant increase in the consumption of plant-based foods (e.g. cereals, fruits, legumes, nuts, vegetables) is being strongly promoted for both its health and environmental benefits. Consequently, the development of safe and nutritious food products from sustainable sources is an important action path to which food scientists and technologists could and should contribute. Pulses, dry seeds of the legume family, constitute an interesting food group for tackling the diet-health-environment challenge. Different macro- and micro-nutrients are present in these seeds, forming part of an intrinsically complex structural network. Changes to this network are induced by processing, which is required to render the seeds palatable and to improve their digestive functionality. Process-induced structural changes are likely to have a determinant effect on the digestive response of pulses. Nevertheless, fundamental studies aiming to unravel their process-structure-digestive functionality relationships are scarce. In this doctoral research, it was hypothesised that processing could be used as an engineering strategy to generate common bean (micro)structures with tailored in vitro starch digestion kinetics. Common beans were chosen as a representative case study due to their importance in terms of production and human consumption. Starch was selected as the component of interest because it is the most bio-encapsulated nutrient in pulses.
Initially, process-structure relationships were established by applying different (combinations of) processing variables to common beans, followed by characterisation of structural properties at the macro and micro levels as a function of processing time. Next, (process-induced) structure-digestive function relationships were studied by in vitro simulated experiments in which the kinetics of starch digestion during the small intestinal phase were qualitatively and/or quantitatively evaluated in samples with specific (micro)structural properties. Application of conventional (95°C, 0.1 MPa, f(t)) and alternative (25°C, 600 MPa, f(t) or 95°C, 600 MPa, f(t)) treatments showed that palatable common beans were only achieved when high temperature was included as a process variable. In contrast, common beans subjected to 600 MPa at 25°C failed to soften regardless of processing time. Similar softening profiles were obtained upon application of the two treatments at high temperature, with or without high hydrostatic pressure. These two treatments, followed by standardised mechanical disintegration, also resulted in similar microstructural properties. Specifically, cell clusters and open cells were characteristic microstructures resulting from short processing times, probably because pectin in the middle lamella and primary cell wall was not yet (extensively) solubilised. An evolution towards well-separated, individual closed cells was observed as processing time increased, most likely driven by thermally-induced pectin solubilisation. Given that this microstructural characterisation was always performed after a standardised step of particle size reduction, an open question remained regarding the role of human mastication as a mechanical disintegration technique. Therefore, a mastication study including 20 participants was performed.
Samples subjected to the study were thermally processed common beans with different process-induced hardness levels (95°C, 0.1 MPa, f(t)). From this experiment, it was concluded that individual mastication behaviour did not significantly influence the particle size distribution of oral boluses, while thermal process intensity (i.e. residual hardness level) did. The most representative starch-rich fraction of oral boluses consisted of individual closed cells, whose relative contribution to the total bolus mass was higher at longer processing times. A simultaneous decrease in the mass percentage of larger starch-containing fractions (e.g. cell clusters) occurred with the increase in individual cells. Individual cotyledon closed cells were thus the most characteristic fraction of palatable common beans after both in vitro and in vivo mechanical disintegration. This suggested a key role for the remaining starch-surrounding barriers (cell wall and protein matrix) and their process-induced status during subsequent digestion. In vitro simulated digestion of both heterogeneous oral boluses and homogeneous isolated cotyledon cells from thermally processed common beans revealed significant differences in starch digestion kinetics. These differences were quantified by mathematical modelling of the kinetic data using a reparametrized logistic model, applied for the first time in the context of starch digestion in this work. The quantified differences were linked to process-induced modifications at the level of the starch-surrounding barriers. Specifically, longer lag phases in samples obtained after shorter processing times were related to cells with lower cell wall permeability and/or a more compact protein network.
Similarly, higher reaction rate constants in samples obtained after longer thermal treatments were linked to the amount of enzyme binding to and breaking down starch chains per minute, following its movement across and around the above-mentioned starch-surrounding barriers. In the last part of this doctoral research, mechanistic insight into the in vitro starch digestion process of cotyledon cells isolated after thermal treatment of common beans (95°C, 0.1 MPa, f(t)) was obtained. To this end, digests were separated into supernatant and pellet fractions. Subsequently, the isolated fractions were subjected to different qualitative and quantitative characterisations. It was demonstrated for the first time that digestion of cellular structures from common bean cotyledons occurs inside the cell, regardless of thermal processing time. Additionally, a three-step digestion mechanism was postulated, comprising (i) enzyme diffusion through the cell wall followed by overcoming of any hindrance exerted by the cytoplasmic contents (e.g. protein matrix), (ii) adsorption onto and hydrolysis of starch granules from the periphery towards the core of the cell, combined with (iii) diffusion of digestion products towards the outside of the cellular space. Parallel protein disappearance, possibly following the same digestion mechanism, was also demonstrated by microscopic evaluation. Finally, different rate-limiting steps for starch digestion in common bean cotyledon cells were postulated depending on the intensity of the thermal process applied. Overall, this PhD research evidences the potential of targeted processing as an engineering strategy to modulate in situ the in vitro starch digestion kinetics of common beans through modification of their structural properties. The processing conditions applied in this work can be easily reproduced at both household and industrial levels, demonstrating their promising application potential."
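The reparametrized logistic model mentioned above is not written out in this summary. As a minimal sketch, assuming (hypothetically) a Zwietering-type reparametrization in terms of a digestion plateau Cf, a maximum digestion rate rmax and a lag time lam, fitting such a model to a digestion curve could look like:

```python
# Sketch of fitting a reparametrized logistic model to starch digestion data.
# The exact model of the thesis is not specified in this summary; the
# parametrization below (Cf, rmax, lam) is illustrative, chosen so that
# each parameter has a direct kinetic meaning (plateau, max rate, lag).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, Cf, rmax, lam):
    """Digested starch (%) at time t (min), Zwietering-type reparametrization."""
    return Cf / (1.0 + np.exp(4.0 * rmax * (lam - t) / Cf + 2.0))

# Synthetic "digestion curve" generated from known parameters
t = np.linspace(0, 180, 19)
y = logistic(t, 85.0, 1.2, 15.0)   # Cf (%), rmax (%/min), lam (min)

popt, _ = curve_fit(logistic, t, y, p0=(80.0, 1.0, 10.0))
Cf, rmax, lam = popt
print(f"Cf = {Cf:.1f} %, rmax = {rmax:.2f} %/min, lag = {lam:.1f} min")
```

On real digestion data the fitted lag time and rate constant would play the roles of the "longer lag phases" and "higher reaction rate constants" the abstract links to processing intensity.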
"Colour appearance modelling for self-luminous colours." "Peter Hanselaer" "Electrical Engineering Technology (ESAT), Ghent and Aalst Technology Campuses" "The assessment of the quality and comfort of lighting requires a good understanding of the correlation between the optical properties of stimuli, such as luminance, and their corresponding perceptual attributes, such as brightness, hue, colourfulness, lightness, chroma and saturation. To facilitate this, colour appearance models have been developed that simulate many of the physiological processes that take place in the human visual system. The colour of objects originates from the interaction between a light source and the physical properties of the object. Object colours (or surface colours) belong to the category of so-called related colours because they are perceived within the context of the colour of the illuminant. For these stimuli, a colour appearance model was developed in 2002 which has been widely accepted by the international community. It has been applied in printing and display technology to reproduce the correct colour appearance of images under different viewing conditions. However, there is at present no agreed model for the evaluation of self-luminous stimuli, such as light sources, seen against an independent luminous background. This situation, however, frequently occurs in everyday life, for example in the perception of self-luminous panels and walls using LEDs and of traffic signals and advertisements viewed both by day and by night. The aim of this project is to gather physical and visual data through measurements and observer assessments respectively. From these data, an improved or new colour appearance model will be developed, which will increase our understanding of the mechanisms of perception of light sources seen against other light. In the long term, this model will provide an interesting tool to describe the visual experience of the total lit environment."
"Design and microfabrication of flexible neural probes for chronic applications" "Bob Puers" "ESAT - MICAS, Microelectronics and Sensors" "In recent decades, the clinical applications of neural devices have increased exponentially. Facilitated by the ever-increasing miniaturization and affordability of sensors and circuitry, more and more developments have transitioned from the field of experimental research towards clinical practice. Nowadays, we possess a wide variety of tools that can be used to assess different aspects of the central and peripheral nervous system. These include non-invasive tools to monitor brain activity, such as electroencephalography, and more invasive penetrating devices such as cochlear implants and brain stimulators. While adequate long-term performance has been demonstrated in the above-mentioned applications, more advanced neural interfacing devices still face significant challenges concerning chronic reliability. A review of several decades of research made it clear that the immune response evoked by the implantation of the device plays a crucial role in the performance of the neural electrode. In chapter 2 of this dissertation we take a closer look at the electrode-tissue interaction and identify the factors that are responsible for the premature failure of implanted neural devices. In an attempt to minimize the evoked effects and increase the electrode lifetime, there is a noticeable evolution towards the use of flexible materials. Using softer, more compliant materials with mechanical properties resembling those of brain tissue attenuates the so-called foreign body reaction and improves the chronic applicability of the implanted devices. Apart from their beneficial effect on electrode lifetime, flexible materials bring several other opportunities.
Chapter 3 of this dissertation describes the development of the ‘flower’ electrode, which was designed to drape the inside of an abnormal brain cavity and record neural activity. Due to the deep location of the target cavity, a bespoke implantation procedure was developed with the goal of minimizing damage to the tissue surrounding the implantation trajectory. During this procedure we take advantage of the electrode’s flexibility by temporarily folding it onto itself and packaging it in a cannula. After implantation the electrode is pushed through the cannula and into the cavity, where it unfolds and takes on a three-dimensional shape conforming to the cavity wall. Due to its design, the planar structure can take on a hemispherical shape without inducing any stress or wrinkling. An in vivo experiment was performed to validate the developed implantation procedure and the correct operation of the electrode. Analysis of the recorded data indicated that it is possible to record signals of biological origin. Chapter 4 of this dissertation focuses on the practical problems associated with the use of highly flexible neural probes. The thin polymer-based implants are often too flexible for implantation, and a mechanical reinforcement is needed to temporarily increase the probe stiffness. In the quest to find a suitable material, dextran came forward as an ideal candidate, as it is cheap and easily accessible, and its chemical composition (a polymeric carbohydrate) allows tuning of the coating's dissolution rate. Practically, this means that a fast dissolution time allows the electrode to regain its highly flexible state within minutes after implantation, and thus limits the evoked immune response. These hypotheses were analysed by conducting a long-term in vivo experiment. Not only did the results show that the amount of glial scarring was strongly reduced, but also that the area initially taken up by the dextran coating was re-occupied by active neurons.
Additionally, there was no noticeable decrease in neuronal density in the vicinity of the implant. In addition to using highly flexible materials, the foreign body response can be minimized by integrating bioactive components which play an active role in the suppression of the evoked immune response and the regeneration of the neurons that were damaged during implantation. In chapter 5 we take a closer look at how these bio-active components influence the electrode-tissue interaction and how we can use them to increase the lifetime of neural devices. The design and microfabrication process of a miniaturized neurotrophic electrode is presented. The electrode consists of a hollow tube filled with a growth-factor-loaded hydrogel. After implantation a foreign body response will be elicited. Apart from phagocytising the cellular debris resulting from the implantation, the activated microglia will also start to enzymatically degrade the hydrogel in the tube, thereby slowly releasing growth factors into the surrounding environment. The resulting chemical gradient will promote neuroregeneration and suppress further astrocyte recruitment, limiting glial scarring at the site of implantation. A secondary advantage is the directional regeneration of damaged neurons into the tube. As a result, the electrical activity of the ingrown neurons can be recorded while isolating the signal from any interference caused by surrounding active neurons, drastically improving the signal-to-noise ratio. The sixth and final chapter presents the main conclusions of this work." "The search for toxin-derived modulators of voltage-gated sodium channels and their in vivo effects in autoimmune encephalomyelitis" "Jan Tytgat" "Toxicology and Pharmacology" "Voltage-gated sodium channels (Navs) are key players in the generation and propagation of electrical signals in the human body. They are found in the membrane of electro-excitable cells and underlie action potentials.
Due to their crucial role in excitability, malfunctioning of these channels inevitably leads to severe physiological disorders. Defective Navs are therefore associated with diseases such as epilepsy and long QT syndrome. Recently, it has been established that Navs could also play a role in the pathogenesis of multiple sclerosis (MS). This disease is hallmarked by degeneration of the myelin sheath surrounding axons, a strong inflammatory reaction and axonal loss in the CNS. Several Nav isoforms have been demonstrated to be involved in the pathology of MS. For instance, elevated levels of Nav1.2 and Nav1.6 were observed in active MS lesions. It was suggested that the resulting increase in sodium current is responsible for the neuronal damage noticed in later stages of the disease. Block of these channels could possibly lead to neuroprotection. Owing to these results, clinical trials were set up to validate the effects of Nav blockers. However, the tested Nav blockers target a large range of Navs without any specificity. In this thesis, novel compounds were sought that specifically target the Nav isoforms mentioned to be important in MS. This was achieved by looking into the venoms of sea anemones, scorpions, solitary wasps and cone snails, as these are known to contain toxins that act on specific Nav isoforms. In the first chapter, two similar toxins isolated from the venom of two solitary wasps were investigated. β-pompilidotoxin (β-PMTX) showed a specific effect on Nav1.6 and was characterized further. The toxin causes a decrease in the fast activation of Nav1.6 in a dose-dependent manner. In this way, the influx of sodium ions is diminished. In addition, the toxin generates a steady-state current. This is a typical effect of toxins binding to a specific site on Navs, called site 3. However, this toxin has the unique property of combining its typical site 3 effect with the mentioned decrease in fast activation.
The venom of cone snails is also known to comprise toxins that block Navs. These toxins are called µ-conotoxins. In the second chapter, synthetic peptides were designed based on the structures of naturally occurring µ-conotoxins from the venoms of the cone snails Conus kinoshitai and Conus bullatus. The main goal was to develop a miniaturized peptide that maintained the typical features of the original peptides. These features include i) a strong block of Navs with activities in the nanomolar range, ii) an effect on specific Nav isoforms and iii) a short sequence. The initially designed peptide, called the Mini peptide, demonstrated a block of Nav1.2, Nav1.4 and Nav1.6 at micromolar concentrations. Based on this Mini peptide, a series of analogues was subsequently developed. Three peptides (Midi, R1-Midi and R2-Midi) were proven to be potent blockers of Nav1.2, Nav1.4 and Nav1.6, acting at nanomolar concentrations. In addition, an NMR structure was determined for the Midi peptide, which revealed no α-helix as a secondary structure. This was the first µ-conotoxin derivative for which this unique feature was observed. In the last chapter, the in vivo effects of β-PMTX and the Midi peptide were investigated in a commonly used mouse model of MS, called experimental autoimmune encephalomyelitis (EAE). Based on the available results for non-specific Nav blockers, an amelioration of the clinical scores was expected for mice treated with the Midi peptide. Unexpectedly, the Midi peptide caused a worsening of the disease pattern in these mice. Several techniques were used to find out which cellular components underlay the observed deterioration. It was demonstrated that the expression levels of Nav1.2 and Nav1.6 were elevated in brain tissues of mice treated with the Midi peptide. Although the precise origin of the deleterious effects seen in the mice receiving the Midi peptide remains unresolved, the results of this chapter confirm the complex roles of Navs in EAE and possibly MS.
Our study could not strengthen the rationale for the use of Nav blockers in the treatment of MS. Nonetheless, it underlines that there is an urgent need to further clarify the role of Navs in the pathology of MS." "Retinoic acid and palatogenesis" "Carine Carels" "Department of Human Genetics" "Orofacial clefts (OFC) are among the most common birth defects in humans. OFC can be syndromic or non-syndromic. One of the main syndromic forms occurs in the transcription factor TP63-related syndromes, in which OFC appears as the cardinal feature together with ectodermal dysplasia and limb malformations. Vitamin A is an important regulator of embryonic proliferation, differentiation and apoptosis. Its active metabolite retinoic acid (RA) regulates the specification of cell identity and gene expression through the activation of nuclear receptors, which bind to specific regulatory regions of target genes called retinoic acid response elements (RARE). The main goal of this project is to investigate the interaction between RA and TP63 in congenital ectodermal disorders. In this project, we further aim to (1) investigate the effects of RA on the functional properties of keratinocytes obtained from patients with non-syndromic and with syndromic (EEC) OFC and to (2) analyze the effects of RA on the in vitro fusion of isolated mouse palatal shelves. The results will contribute to understanding the etiology of OFC and provide clues for diagnosis, treatment and prevention." "Precision measurements of the electron energy distribution in nuclear beta decays" "Kazimierz Bodek, Nathal Severijns" "Nuclear and Radiation Physics" "Although nuclear β decays have been studied for almost a century, there are still questions that can be answered through precision measurements of the energy distribution of the emitted electrons. The shape of the β spectrum reflects not only the weak interaction responsible for the decay.
It is also sensitive to the strong interaction, which confines the decaying quark in the nucleon. The electromagnetic force also plays a role, since the ejected electron interacts with the charged nucleus. Therefore, measurements of the β spectrum shape are an important tool in understanding all the interactions combined in the Standard Model (SM), and high precision is required to disentangle the effects of the different interactions. Moreover, these measurements can even shed some light on New Physics (NP), if discrepancies between the shape of a measured spectrum and the one predicted by the SM are observed. One of the open questions is the validity of the V-A form of the weak interaction Lagrangian. Measurements of the β spectrum directly address this issue, since exotic currents (i.e. other than V and A) affect the spectrum shape through the so-called Fierz interference term b. An experiment sensitive to NP requires a precision at the level of 10−3 to be able to compete with energy-frontier measurements performed at the Large Hadron Collider. When precision is at stake, the success of an experiment is determined by a proper understanding of its systematic uncertainties. The main limitation on the precision of spectrum measurements comes from the properties of the energy detector used in the experiment, in particular the accuracy of the detector energy response and of the energy-deposition model in the detector active volume. The miniBETA spectrometer was designed to measure β spectra with a precision at the level of 10−3. It combines a plastic scintillator energy detector with a multi-wire drift chamber in which electrons emitted from a decaying source are tracked in three dimensions. Using a plastic scintillator minimizes systematic effects related to the model of the energy deposition in the detector active volume. Due to the low Z of the plastic and an appropriate choice of detector thickness, the probability that the energy is fully deposited is maximized.
Keeping the probability of the other processes (i.e. backscattering, bremsstrahlung, and transmission) low reduces the influence of their uncertainties on the predicted spectrum. Particle tracking filters out of the measured spectrum events not originating from electrons emitted from the source. It also allows recognition of backscattering events, further reducing their influence. The thesis describes the commissioning of the miniBETA spectrometer. Several gas mixtures of helium and isobutane were tested as media for tracking particles. Calibrations of the drift chamber response were based on measurements of cosmic muons. The mean efficiency of detecting an ionizing particle in a single drift cell reached 0.98 for mixtures at 600 mbar and 0.95 at 300 mbar. The spatial resolution of a single cell, determined from the drift time, reached 0.4 mm and 0.8 mm, respectively. The resolution determined from charge division was roughly an order of magnitude worse. Several corrections dealing with systematic effects were investigated, such as a correction for signal-wire misalignment, sensitive to shifts as small as 0.02 mm. It was shown that particle tracking improves the precision of spectrum measurements, as the response of the spectrometer depends on the position where the scintillator was hit. A preliminary calibration of the energy response of the spectrometer was performed with a 207Bi source. The calibration was based on fitting the measured spectrum with a convolution of the simulated energy deposited in the scintillator and the spectrometer response. The energy resolution of the prototype setup is ρ(1 MeV) = 7.6%. The uncertainty of the scale parameter, which hinders the extraction of energy-dependent terms (i.e. the Fierz term and weak magnetism) from the spectrum shape, is 5 × 10−3. These values will be significantly improved in a new setup, with an optimized scintillator geometry and better homogeneity of the energy response."
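Schematically, the sensitivity of the allowed β-spectrum shape to the Fierz interference term b described above takes the standard form (with p and E the electron momentum and total energy, E0 the endpoint energy and F the Fermi function accounting for the electromagnetic interaction with the nucleus):

```latex
N(E)\,\mathrm{d}E \;\propto\; p\,E\,(E_0 - E)^2\, F(Z,E)\left(1 + b\,\frac{m_e}{E}\right)\mathrm{d}E
```

The exotic-current contribution thus enters as a 1/E distortion of the shape, which is why a scale-parameter uncertainty at the 10−3 level directly limits the extraction of b.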
"Plast-i-Com: Efficient reuse of contaminated polymers and polymer blends through compatibilisation and stabilisation" "Isabel De Schrijver" "R&D Plastic Characterization, Processing & Recycling" "Project goals: One of the biggest challenges for industry is to produce in a sustainable way. As a consequence, the reduction, re-use, and upgrading of waste is becoming increasingly important. Moreover, the rising prices of raw materials are pushing industry to economize on raw materials and energy. Some companies already have experience in re-using internal polymer waste without significant downgrading of mechanical properties. However, recycling becomes more difficult when companies have to process waste that may be contaminated with other polymers. In these cases, the recyclates often demonstrate poor mechanical properties, since most polymers are not compatible and are therefore difficult to mix homogeneously, with a well-defined fine distribution, in one another. In other cases, the presence of another polymer can lead to severe degradation due to the instability of some materials at high temperatures or a catalytic effect of contaminants on further degradation. Detection of impurities, sorting and separation of different polymers make the recycling process more expensive and rarely offer a 100% guarantee of the removal of all impurities. Activities and results: Because of the unsatisfactory quality of mixed recyclates, they are recycled into less valuable products (downgrading) or not recycled at all. In the latter case, waste is, for example, landfilled or incinerated with or even without energy recovery. A route to improve processability and to optimize polymer quality is offered by so-called “compatibilization”. Polymer blends can be compatibilized by creating an interphase interacting with both polymer phases. Mostly copolymeric or even reactive compatibilizers are proposed.
Although these principles have been known for decades, compatibilisation is hardly applied on an industrial scale, and certainly not in the processing of recycled polymers. Looking at plastic waste, many streams consist of polymers which are immiscible, or so-called non-compatible. Within the project ‘plast-i-com’, it was evaluated whether specific additives, called compatibilizers, could be used to convert non-usable, ‘ready-to-incinerate’ streams into more valuable recyclate materials. This evaluation was approached from three different angles. First, a theoretical approach was applied, wherein chemical models were used to predict the compatibilizing efficiency of additives for the studied blends. To this end, solubility parameters were estimated for the polymers of interest, based on the Hoftyzer-Van Krevelen method. Secondly, experimental work was performed to evaluate the actual performance of the additives in different polymer blends. Processing techniques such as injection moulding and textile extrusion were used to produce the desired test samples, after which mechanical testing was performed. Finally, in a third step, numerical simulations of the polymer processing of (non-)compatibilized blends, either in injection moulding or extrusion, were performed using flow and thermal behaviour properties as input data. Based on the company’s interest, different market-relevant blends were selected for evaluation, including polyolefin (PO) blends, PET- and PA-contaminated PO blends, PMMA-based blends, as well as other more niche blends. For the polyolefin blends, ethylene copolymers appeared most effective at increasing the mechanical properties, whereas for the PET- and PA-based blends, reactive compatibilizers such as glycidyl methacrylate- (GMA) and maleic anhydride- (MAH) based compatibilizers proved to be very successful.
For PE/PA blends, for example, the PA contamination resulted in an impact strength four times lower than that of pure PE. On the other hand, as also predicted by the chemical models, MAH-based compatibilizers could increase the impact strength by 200% compared to the non-compatibilized blends. Chemical modelling also showed that for PP/PET blends the most suitable compatibilizer candidate is not yet available on the market. Nevertheless, existing grades are able to improve the mechanical properties, although to a lesser extent than for the PE/PA blend. In general, it could be concluded that compatibilizers positively affect the elongation and impact properties of the blends, as well as the morphological distribution of the contaminant phase inside the polymer matrix. The material stiffness, on the other hand, was in most cases negatively affected. In an attempt to restore the stiffness losses, nucleating agents were evaluated, but they only seemed effective in the case of virgin, non-compatibilized blends. It should also be emphasized that the material source (virgin, post-industrial vs. post-consumer grades) has a significant influence on the final compatibilization success. Next to the mechanical and morphological properties, compatibilizers could also contribute to the aesthetic performance of the polymer blend. Apart from evaluating the recycled blends at the end-product performance level, their stability at the processing level could also be considered. In the current project, a research strategy was developed for a company-specific case to understand the acceptable variations in viscosity that still yield the desired end-product quality after injection moulding."
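The Hoftyzer-Van Krevelen method mentioned above splits the solubility parameter into dispersive, polar and hydrogen-bonding contributions computed from group increments. A minimal sketch follows; the group-contribution numbers below are illustrative values of the kind tabulated by Van Krevelen and should not be taken as authoritative data:

```python
# Sketch of a Hoftyzer-Van Krevelen solubility-parameter estimate.
# Group increments (Fd, Fp in J^0.5*cm^1.5/mol; Eh in J/mol; molar
# volume V in cm^3/mol) are illustrative, not authoritative.
import math

GROUPS = {
    "CH2":  {"Fd": 270.0, "Fp": 0.0,   "Eh": 0.0,    "V": 16.1},
    "CONH": {"Fd": 270.0, "Fp": 770.0, "Eh": 3100.0, "V": 9.5},
}

def solubility_parameter(repeat_unit):
    """Total solubility parameter (MPa^0.5) of a repeat unit,
    given as a list of (group, count) tuples."""
    Fd  = sum(GROUPS[g]["Fd"] * n for g, n in repeat_unit)
    Fp2 = sum((GROUPS[g]["Fp"] ** 2) * n for g, n in repeat_unit)
    Eh  = sum(GROUPS[g]["Eh"] * n for g, n in repeat_unit)
    V   = sum(GROUPS[g]["V"] * n for g, n in repeat_unit)
    dd = Fd / V                # dispersive component
    dp = math.sqrt(Fp2) / V    # polar component
    dh = math.sqrt(Eh / V)     # hydrogen-bonding component
    return math.sqrt(dd**2 + dp**2 + dh**2)

pe = solubility_parameter([("CH2", 2)])               # polyethylene-like unit
pa = solubility_parameter([("CH2", 5), ("CONH", 1)])  # polyamide-like unit
print(f"delta(PE-like) = {pe:.1f}, delta(PA-like) = {pa:.1f} MPa^0.5")
```

The gap between the two estimates is the kind of screening signal used to predict that a PE/PA blend is incompatible without a (reactive) compatibilizer.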
"Special merit regarding social valorisation - call 2012" "Wim Distelmans" "Translational Radiation Oncology and Physics, Medical Imaging and Physical Sciences" "Technology transfer (sometimes called 'valorisation') involves the transfer of (intellectual) property rights on academic knowledge through sale, or permission for its development, to a recipient third party, e.g. government institutions and businesses. The recipient thus obtains the right to develop the acquired knowledge further into a product (in the broadest possible sense of the word), which may then be developed commercially, either by the recipient or a designated partner. The University, by decree, owns any research results developed within the institution. In principle this means that all (and not only technological) academic knowledge developed by a researcher, and for which the University has at least partial property rights or other development rights, can be the subject of technology transfer. In certain circumstances (e.g. research under contract), property rights on research results developed at the Vrije Universiteit Brussel may be transferred to a third party on an ad hoc basis, which is why it is important to examine the legal status of the knowledge to be transferred. Properly assessing the development potential that the finding holds is a crucial factor when opening a technology transfer dossier." "Disaster Governance: Analyzing inconvenient realities and chances for resilience and sustainability" "Philip McCann, Constanza Parra Novoa" "Division of Geography and Tourism" "A social-ecological systems approach to disaster governance. In many places in the world, people are increasingly exposed to disasters.
A few recent disasters illustrate the global magnitude of the problem: the oil spill in the Gulf of Mexico in 2010, the earthquake, tsunami and nuclear disaster in Fukushima, Japan in 2011, typhoon Haiyan in the Philippines in 2013, hurricane Irma in the Caribbean in 2017, the earthquakes in Nepal in 2015, the volcanic eruption in Guatemala in 2018 and the earthquake and tsunami in Indonesia in 2018. Disasters such as these disrupt societies, often cause much damage, lead to loss of lives and pose many challenges for recovery. To make matters worse, disasters are expected to increase in frequency and duration, and the causes of disasters are becoming increasingly diverse and complex. Mainly due to climate change, disasters triggered by extreme natural hazards, such as hurricanes and bushfires, will increasingly strike societies. Human-induced disasters, such as technological disasters and ecological harms produced by an unsustainable use and exploitation of natural resources, are also repeatedly threatening societies. Although these different kinds of disasters seem rather distinct, a shared characteristic of most of them is that they result from the interactions between people and their natural environment. To illustrate, natural disasters, such as hurricanes and earthquakes, are directly caused by the forces of nature. Yet, human or social factors, such as the socio-economic vulnerability of communities and the often disorganized governance of disasters, also influence and exacerbate the impact of disasters. In this regard, the understanding has been growing that disasters are created by humans – or ‘socially created’ – instead of being ‘acts of God or nature’. In parallel, a shift can be observed in disaster studies. Similar to the shift in governance debates from governing to governance, disaster studies show a development from disaster management to disaster governance. 
This implies a shift from top-down steering by central governments and a focus on short-term solutions and emergency management, towards the multi-actor sharing of governance roles and longer-term post-disaster transitions. This PhD research focuses on the governance of disasters, because governance can be part of both the cause and the solution of disasters. Moreover, it delves into those disasters that occur within, and manifest, the interface between human actions and natural processes. The research builds on a social-ecological systems perspective to grasp in an integral way the different processes, realities and relevant scales of interaction between natural and social processes shaping societies. Despite the destructive character of disasters, many post-disaster societies express the wish to ‘build-back-better’. From this perspective, post-disaster societies often show many bottom-up initiatives to capitalize on the momentum for recovering towards a better system. As such, the aftermath of a disaster provides an opportunity to develop towards more resilient and sustainable societies. Nevertheless, post-disaster learning processes rarely result in widespread improvements of governance systems, and the urge to go ‘back to normal’ is often privileged at the expense of improving governance systems to better deal with the dynamics and complexities posed by nature and humans. Why do societies hardly learn from disasters? What explains the societal frustration between different groups of actors in society, often leading to distrust between public, private and civil society institutions? And how can adequate governance systems be created for facilitating post-disaster recovery processes and transitions towards enhanced societal resilience and sustainability? This research contributes to the question of what role governance plays in steering transitions towards more resilient and sustainable social-ecological systems in the face of disasters. 
It aims to enrich the understanding of disasters and to provide insights into the role of governance and its interaction with natural and socio-institutional processes. A qualitative case-study research of three disasters Based on qualitative international case-study research of three places in the face of disasters, this research analyzes the ways in which governance can stimulate and enable post-disaster transitions. Based on 89 in-depth interviews, participant observation and document analysis in the three cases, insights are obtained that contribute to a better understanding of disaster governance. First, the case-study of Christchurch, New Zealand, after the earthquakes in 2011 and 2012 is presented in chapters 2 and 3. Chapter 2 highlights the value of using a social-ecological systems perspective to better understand disasters and their governance. Moreover, the case-study of Christchurch shows that disasters impact societies in a non-homogeneous way, although the governance response is often based on a homogeneous approach. As a consequence, mismatches can be observed between the needs and wishes of impacted people and the focus of the government. Chapter 3 analyzes an essential condition for resilience: learning. In the aftermath of the earthquakes in Christchurch, there were many bottom-up initiatives by civil society organizations to use the disaster as an opportunity to ‘build-back-better’, nurturing the ground for learning processes. Signs of post-disaster learning could also be observed amongst public and private institutions. However, these learning processes did not lead to widespread societal learning, adaptation and transformation, which hindered the resilience ambition of the city. In fact, learning remained rather isolated and was not bridged between the different levels of the multi-level governance system. Second, the case-study of Chiloé, Chile, in the face of the Infectious Salmon Anemia (ISA) disaster is studied in chapter 4. 
Chiloé, an island in southern Chile that hosts a large salmon-farming industry, was impacted by the Infectious Salmon Anemia (ISA) virus in 2007. The ISA disaster disrupted the local society and caused severe social, economic and environmental problems. The case-study of Chiloé analyzes how the resilience of some subsystems can be so rigid and inflexible that it hinders the resilience of other parts and the sustainability of wider systems. As such, this chapter explores the role of governance in interaction with socio-natural processes and finds that the strong biotechnological resilience of the salmon industry hinders changes aiming for the resilience of the wider system. Third, chapter 5 presents the case-study of the earthquakes caused by gas extraction in the province of Groningen, the Netherlands. This case-study embodies the social creation of disasters in a very direct way by exploring the social processes that lead to a widespread human-induced disaster. Like the first two case-studies, this case highlights the value of a social-ecological systems perspective for understanding the socio-economic vulnerabilities and political-institutional, technological and natural dimensions that in combination lead to the earthquakes and related problems. The case-study focuses on governance processes that aim to increase societal resilience and sustainability, but in reality seem to hinder these ambitions due to the overly rigid entanglement of public-private institutional structures, the nature of the disaster and societal distrust. Chapter 6 discusses the findings of the three cases in an integral way through the lenses of multi-level governance for encouraging post-disaster transitions. Despite the frustration and decreasing trust amongst many actors in society, various socially innovative governance practices and processes can be observed in all three cases. 
Yet, the lack of inclusive planning, risk awareness, risk acceptance and disaster politics seems to hinder the institutional embedding of learning processes that would allow wider societal transitions. This PhD research shows that disasters have the power to uncover inconvenient realities, realities which often contributed to the unfolding of the disaster in the first place. On the other hand, disasters also have the power to trigger chances for resilience and sustainability. However, post-disaster learning processes only rarely lead to broad societal transitions. Only when resilience refers to learning, adaptation and transformation, and encourages in an integral way the social, economic and natural pillars of sustainability, can post-disaster transitions towards enhanced resilience and sustainability be enabled. The heterogeneity of disasters Disasters have the potential to shake societies and their governance systems not only temporarily, but often for years afterwards as well. It is therefore highly important to create governance processes that are adequate to meet the needs of society in the first phases of emergency response and that also facilitate multi-actor decision-making processes about longer-term shared ambitions. Recovery processes after disasters, nevertheless, are often characterized by frustration and growing distrust amongst different actors in society. People often call for a more socially inclusive process, as they want to have their say about the future of their places, want recognition for the problems they face and tend to organize themselves in all sorts of initiatives. However, people often feel discouraged in their participatory wishes by the governance approaches pursued by governments and/or private institutions. These diverging views in post-disaster contexts can be explained by the heterogeneity of disasters, resulting in a variety of challenges that disasters pose to societies. 
The heterogeneity of disasters refers to both the causes and the consequences of disasters. From the perspective of the social creation of disasters, a natural hazard does not lead to a disaster per se; a disaster is born when the hazard intersects in a negative way with societal characteristics. As such, there are different causes, and in particular mixes of causes, that result in disasters. In the case of Groningen, the earthquakes are not caused by natural processes, but by gas extraction conducted by humans. Moreover, the governance response to deal with the consequences of the gas extraction exacerbates the problem. The ISA disaster in Chiloé is also a specific example of the combination of natural and social processes through which a disaster is caused. The ISA virus was able to spread very rapidly and across a large geographical area mainly due to an unsustainable exploitation of the ecosystem, lax regulation and low local governmental power and responsibility. Consequently, there were hardly any possibilities to control the industry or to collaboratively discuss sustainable solutions for the industry, local society and environment. As to the consequences of disasters, in the case of Christchurch there were different needs and wishes related to different temporal stages and geographical areas. To be concrete, some people in badly affected neighborhoods still lived in emergency situations, whereas others had already regained their normal life. A disaster response by the government focusing on the future of the city center was thus mismatched with the realities of people who still lived in disaster situations. In the case of Groningen, people are not only affected differently in a physical way by the earthquakes, but also perceive the impact of the problems caused by gas extraction in different ways. Consequently, homogeneous governance approaches to post-disaster recovery across all temporal stages and geographical areas are inadequate. 
Instead, a hybrid, multi-level and more flexible governance constellation would be better suited to capture the plurality of post-disaster needs, wishes and realities. Disaster governance: multi-level, hybrid and political  In this PhD research, governance was analyzed by zooming in on the level of institutions to capture the roles of different public, private and civil society actors and the mixes between them. All three case-studies emphasize the importance of including the local level and social engagement in disaster (recovery) processes. The case of Christchurch shows that people want to participate in the reconstruction of their city. In Chiloé, the local government, NGOs and local communities claimed enhanced spaces of participation during the post-ISA period, as they have knowledge about the salmon farming situation which they want to share to improve the governance systems alongside the biotechnological upgrades. In the case of Groningen, people wanted to be engaged in the governance processes in order to have a say in the damage assessment of properties, in policies for the future of the region and in decision-making about the gas extraction. Moreover, a growing sentiment of distrust in the government only strengthens the call for participation. However, the uneven impact of the disasters made homogeneous governance responses pursued by the governments inappropriate. This applies as well to ‘one-size-fits-all’ approaches to public participation in post-disaster governance. People in some situations might ask for a leading role of the government, whereas other situations might call for collaboration. Post-disaster governance should, therefore, be hybrid and able to take on flexible forms according to specific time and place needs. The maturation of hybrid governance can therefore help to design tailored, time- and place-specific governance systems aiming for enhanced resilience and sustainability. Another important dimension of disaster governance is politics. 
Disaster politics influence the framing of disasters, the recognition of the scope of the problems, and debates around who can be held responsible for a disaster. In addition, whether a situation is recognized and officially labelled as a disaster (or not) is a highly political decision. Certain actors might have an interest in not labelling a situation as a disaster. For instance, the earthquakes in Groningen are not generally classified as a disaster. This is mainly due to the intermingling of public and private interests regarding the gas extraction. The rigid endurance of the entangled institutional set-up seems to block a transition towards an improved governance system that can seriously deal with the problems. Moreover, when the power of governments is questioned and when they are blamed for intermingling economic and corporate interests with the safety interests of the population, situations of distrust are difficult to avoid. All three cases of Christchurch, Chiloé and Groningen manifest the relationship between trust, politics and public participation. In countries such as New Zealand and the Netherlands, where trust in public institutions is relatively high, losing that trust is a huge risk, and regaining trust in governance once it is lost is a hard endeavor. Governance can, therefore, be regarded as a double-edged sword: it can be a means to facilitate multi-level interactions and post-disaster transitions towards more resilient and sustainable societies. Yet, governance mismatches and mistakes in the institutional set-up can also be part of the cause of a disaster and/or factors exacerbating it. Post-disaster transitions towards resilience and sustainability Despite the destructive impact of disasters, places affected by a disaster are often expected to be rebuilt in a more resilient and sustainable way. One important aspect of resilience is learning, and hence post-disaster learning is crucial. Yet, many societies are repeatedly overwhelmed by disasters. 
The cases studied in this research show various reasons why post-disaster learning processes do not necessarily lead to societal and systemic learning, which is highly needed to facilitate adaptation and transformation towards enhanced resilience and sustainability. First, individual (groups of) actors can learn, but when these learning processes stay isolated and are not linked, systemic learning can be hindered. The case of Christchurch shows that public, private and civil society institutions did learn through all sorts of innovative post-disaster processes and activities. These initiatives range from special-purpose state institutions to civil society initiatives enabling public participation in recovery processes. For instance, there were many bottom-up initiatives to keep people attached to the city and to experiment with sustainable practices. The government also launched a large public participation project as part of the recovery process. As such, ‘learning by doing’ was occurring. However, the learning experiences were not bridged and scaled up towards wider governance improvements. Consequently, better linking and synergizing of learning processes amongst different levels is essential for enhancing resilience in post-disaster societies. Second, the resilience of some subsystems of society might hinder the resilience and sustainability of the wider societal system. In the case of Chiloé, the approach pursued by the government and salmon industry to solve the problems caused by the ISA disaster was dominated by a biotechnological discourse. The solutions to stop the spreading of the virus and other diseases were restricted to chemicals, antibiotics and regulations for salmon production, whereas the local government and population asked, for instance, for a devolution of government mandates to the lower levels. As such, biotechnological solutions were implemented to solve a much wider societal problem. 
Consequently, it can be argued that the strong biotechnological resilience of the industry hindered changes aiming for the resilience of the wider system. The contradicting interests of different actors limited the establishment of an institutional system to support wider societal transitions. Moreover, when the resilience of some subsystems stays limited to sectoral adaptation and becomes rigid and inflexible, it can hinder transformation and the sustainability of the wider system and its governance. Resilience, therefore, needs to embrace the three concepts of learning, adaptation and transformation in order to contribute to sustainability. Third, post-disaster learning and (socially) innovative governance practices might create a fertile ground from which transitions can grow, but it often proves difficult to capture and use the post-disaster momentum to embed the experiences in institutional structures. The realities that disasters can uncover might be inconvenient for actors with an interest in the status quo. Prevalent governance systems in society might be examples of these realities and can be part of the reason why a hazard grew into a disaster in the first place. Consequently, the institutional system needs to be capable of, and suited to, embedding the post-disaster learning processes. Chapters 1 and 6 present a classification of the elements in society that can lead to a disaster. This classification shows that having well-developed institutions is not necessarily enough to facilitate transitions if these institutions, technical expertise and preparedness are not optimal for a specific kind of disaster. In addition, there can be much technical expertise about a certain type of disaster in a particular area, but when a disaster hits another area, it is not self-evident that the expertise is also present there. 
These aspects mean that certain institutions can in fact contribute to the growth of a disaster, and thus need to transform to become adequate for an appropriate governance response. It is therefore highly important to create a governance system that has the ability to formally institutionalize (local) initiatives, governance processes and socially innovative practices. In sum, disasters can be a trigger for transitions towards enhanced resilience and sustainability, but the three processes above explain why post-disaster learning is often hindered instead of enabled. An integral understanding of disasters and governance can allow the multi-level linkages and bridges between actors in different areas and from different disciplines that are needed to enable societies to use disasters as a trigger to ‘build forward’ after a disaster. Policy recommendations  In the conclusions of this thesis, policy recommendations are presented for disaster governance. In sum, these are:
1. Acknowledge the differences in impact of a disaster on different places and groups, and tailor governance responses to their more specific needs and wishes. This can mean having a general view on disaster governance that applies to the overall disaster situation, which is then translated into more specific strategies tailored to particular places and/or people.
2. Designate a situation as a crisis or disaster in a combined bottom-up and top-down way. This more adequate designation process does more justice to the perceptions, realities and problems of people, and encourages the design of governance systems in which governance roles are shared between a plurality of actors.
3. Facilitate and enable the linking and bridging of learning experiences between different levels, through, for instance, informal and formal participatory meetings between (central) governments, businesses and local people. 
Also, encourage learning from other cases through international collaboration and policy-making.
4. Approach disasters in a multidisciplinary way to include all aspects of disasters in disaster governance. Furthermore, enable integral as well as domain-specific collaboration, and create a bridge between science and policy.
5. Foster and use the concepts of resilience and sustainability as guiding compasses in their entirety. When transitions are not fully resilient or sustainable, these ambitions might contradict rather than complement each other.
Ultimately, four forms of governance can be distinguished from the findings of this research: control, coordination, cooperation and collaboration. These forms relate both to the size of the role of different actors and to the kind of role. Control refers to low freedom for the private sector and civil society, with much power for the state to decide on governance processes and actions. Collaboration entails working together and sharing responsibilities between the state, the private sector and civil society. Cooperation means that the state has a leading role, but cooperates with the private sector and civil society. Finally, coordination refers to a style of governance in which the state coordinates between, and perhaps facilitates, the activities and roles of other state actors, the private sector or civil society. Different people, geographical areas and time phases call for different and hybrid forms of governance. Applying this categorization to the governance of disasters would aid in gaining a better understanding of disasters and in creating disaster governance systems that better deal with them. " "Rheological and Microstructural Investigation of Capillary Suspensions under Shear" "Erin Koos" "Soft Matter, Rheology and Technology Section" "The addition of small amounts of an immiscible secondary fluid to a suspension can lead to particle bridging and network formation. 
This is caused by the attractive capillary force arising from liquid bridges formed between the particles; such suspensions are therefore called ""capillary suspensions"". The capillary bridging phenomenon can be used to stabilize particle suspensions and to precisely tune their rheological properties. Capillary suspensions can be created whether or not the secondary fluid preferentially wets the particles: a pendular state is created when the secondary fluid preferentially wets the particles, and a capillary state is obtained when the bulk fluid is preferentially wetting. In recent years, knowledge about the behavior and microstructure of capillary suspensions at quiescence has been advancing. However, the dynamic behavior of capillary suspensions is still poorly understood, despite such understanding being crucial for processing. The objective of this thesis is, therefore, to investigate the rheological behavior and the correlated microstructure of capillary suspensions under shear. Capillary suspensions with a low particle concentration (φsolid = 0.25), two different wetting properties, and various amounts of added secondary fluid were studied. The investigated samples have two different wetting conditions, characterized by the ternary contact angle θ, representing capillary suspensions in the pendular and capillary states. This thesis addresses two central questions. First, how does the microstructure of capillary suspensions change under shear? Second, how do low-level particle-particle interactions influence the rheological response in the asymptotic nonlinear regime? To investigate the network build-up systematically, a strain rate step measurement with decreasing shear rates was performed, and the corresponding viscosity and normal stress differences were reported. In the pendular state, the system undergoes a transition from a positive normal stress difference at high shear rates to a negative normal stress difference at low shear rates. 
This phenomenon occurs due to the motion of the particle networks: at high shear rates, the hydrodynamic force dominates and the flocs break up, leading to shear thinning. The remaining dimers and trimers tumble in the flow-gradient plane (Jeffery orbits) and experience friction as they come into contact due to the capillary force. As the shear rate decreases, the network reforms and the size of the flocs grows. The long, asymmetric flocs rotate to reorient in the vorticity direction, leading to negative normal stress differences. Analogous experiments were also conducted for the capillary state. This system showed similar shear thinning, but only a negative normal stress difference. As confirmed with confocal microscopy, more networks are formed in the capillary state due to droplet breakup at high shear rates. Using oscillatory shear rheology, the asymptotic deviation from the linear viscoelastic behavior of capillary suspensions was studied. A non-integer power-law stress scaling was observed as a function of the imposed strain, in contrast to the cubic scaling reported in the literature for most materials. Preliminary experiments with a capillary state sample provide the first definitive proof that such non-cubic scalings are not an experimental artefact. By comparing the results from this system with other reports of atypical scaling, I formulated the hypothesis that particle collisions, particularly in non-colloidal kinetically trapped systems, are the cause of the non-integer scaling. This hypothesis was tested with other samples, where the non-integer scaling can be traced back to Hertzian-like particle-particle contacts in the capillary suspension networks. Furthermore, the contributions of concave (low contact angle) and convex (high contact angle) menisci, as well as of the secondary fluid concentration, are discussed. 
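The atypical nonlinear scaling discussed above can be written compactly in standard medium-amplitude oscillatory shear (MAOS) notation; the symbols below (strain amplitude γ₀, frequency ω, third-harmonic stress intensity I₃, exponent n) are generic conventions assumed here for illustration, not notation taken from the thesis itself:

```latex
% Imposed oscillatory strain with amplitude \gamma_0 and frequency \omega:
\gamma(t) = \gamma_0 \sin(\omega t)
% For most materials, the leading nonlinear (third-harmonic) stress
% contribution is expected to grow cubically with strain amplitude:
I_3 \propto \gamma_0^{3}
% whereas the capillary suspensions studied here follow a non-integer
% power law, attributed to Hertzian-like particle-particle contacts:
I_3 \propto \gamma_0^{\,n}, \qquad n \neq 3 .
```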
While the experiments in this thesis were conducted specifically with capillary suspensions, the obtained results may be applicable to other material systems that exhibit strong attractive interactions."