Title Promoter Affiliations Abstract "TRAINEE - Validating safety-critical railway control sofTwaRe updAtes IN thE fiEld" "Jeroen Boydens" "Distributed and Secure Software (DistriNet)" "The overall aim of this project is to enable software updates of the safety-critical part of the Televic Rail (TRA) mechatronic control system, by demonstrating on board the train that the software update is being performed correctly." "Forecasting and management of extreme rainfall induced risks in the urban environment." "Patrick Willems" "Hydraulics and Geotechnics, Materials and Constructions, Research Group Sustainable Development" "A. Context
Extreme local rain storms may induce severe floods and related socio-economic impacts on the urban environment (Belgian cities). While floods along rivers have already been studied extensively, the quantification, forecasting, control and management of inundations along sewer systems and urban rivers face particular difficulties. They need fine-scale (local, short-duration) rainfall estimation and nowcasting (= short-term forecasting in real time). They also require the involvement of local authorities, which typically have low capacity for setting up risk quantification, forecasting, control and management systems.
B. Objectives
The PLURISK project supported the local authorities in the quantification, forecasting, warning, control and management of pluvial floods. Methodologies and software tools were developed for:
(1) Nowcasting of fine-scale extreme rainfall, using advanced techniques for storm cell tracking and integrating national (C-band) and local (X-band) radar technology and numerical weather prediction (NWP); quantification of the uncertainty in this nowcasting.
(2) Two-dimensional fine-scale modelling, mapping and nowcasting of inundations in urban areas, and quantification of the uncertainties on these inundation quantifications.
(3) Socio-economic risk quantification, incl. material and immaterial (social, ecological) damage assessment, quantification of risk perception (awareness), coping capacity and recovery capacity, impacts on built heritage, and uncertainty estimation on these impacts.
(4) Risk communication and flood risk warning based on the nowcasting results; extreme rainfall and lightning warnings were also addressed.
(5) Risk reduction by both prevention and management, with particular focus on new management strategies, i.e. better interfacing between spatial planning, eco-management and urban water management (e.g. green-blue water; the role of landscape architecture; restoration of biodiversity in urban areas, incl. ecotechnologies on buildings and in very densely urbanized areas, and considering the services of biodiversity for the human population).
While the nowcasting system for extreme weather conditions developed in (1) is nation-wide (covering the whole Belgian territory), the methodologies and software tools developed in (2)-(5) have been tested and demonstrated for selected Belgian cities as case studies. More specifically, the cities of Leuven and Gent (area of Oostakker and Sint-Amandsberg) were considered as case studies. For the Leuven case, the PLURISK project was linked to the RainGain Interreg IVB NWE project, which was completed in 2015. The methodologies and tools developed within PLURISK are, however, applicable to any city or urban area in Belgium.
C. Conclusions
The main conclusions on the research activities and results are:
For WP1 - Nowcasting of fine-scale extreme rainfall:
A detailed international literature review was conducted on methods and experiences in radar-based fine-scale rainfall estimation. Methods were selected, advanced and applied to integrate the X-band with the C-band radar observations and with rain gauge data, in order to generate hybrid spatial rainfall composites with high resolution. The results show that, for both the X-band and C-band radar data, the radar estimates could be greatly improved by all the adjustment procedures. The gauge-radar residuals, however, remain quite large, even after the adjustments. We moreover concluded that the adjusted X-band radar measurements are not always better estimates than the corresponding C-band measurements. Further investigation showed that the rain gauges and radars could represent the spatially more uniform winter storms with almost the same accuracy. The results are different for summer events, where the added value of the radar data becomes more evident.
Moreover, important progress was achieved with the implementation of the Short-Term Ensemble Prediction System (STEPS). The STEPS framework has shown the flexibility to meet the scientific objectives of WP1, such as the non-conservative extrapolation of rain fields, uncertainty quantification of both radar observations and nowcasts, etc. STEPS-BE was set up for ensemble quantitative precipitation estimation and forecasting, as well as for the deterministic, probabilistic and ensemble verification of STEPS nowcasts. Additionally, the STEPS-BE software suite is able to run both in an operational environment on real-time data and on archived data for case studies. Nowcasts were produced for selected storms and applied for testing the integration with the urban flood modelling in WP2 in the PLURISK case studies.
The real-time mode of STEPS-BE is being used side-by-side with INCA-BE, the current operational deterministic nowcasting system at the RMI. A new lightning module was added to this deterministic system, and the performance of this module is continuously monitored in a real-time verification utility.
Weather radar data were also prepared and delivered to WP3 in order to extend the multivariate depth-damage models with rainfall-damage models. They consist of monthly maximum hourly rainfall accumulations derived from the quantitative precipitation estimation product of the Wideumont radar.
For WP2 - Two-dimensional urban flood modelling, mapping and nowcasting:
For the Leuven case, a detailed sewer system model, implemented in InfoWorks-CS, was calibrated and validated based on in-sewer measurements of water levels and flow velocities. After that, the model was applied to simulate more than 50 rainfall events, including 10 extreme rain storms. For the Gent case, the sewer model is up and running as well, and 10 extreme rain storms were simulated. The rainfall inputs were based on the different radar adjustment and fine-scale rainfall estimation methods developed in WP1, to study the effect on the sewer flow and water level results.
Based on these simulation results and applying the dual drainage approach, sewer surface inundations were modelled in two dimensions. Detailed and simplified full 2D versus 1D approaches were implemented and tested for that task. For the full 2D approach, different mesh resolutions were tested and the optimal resolution selected.
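As an aside on the WP1 gauge-radar adjustment mentioned above, the simplest member of that family of procedures, a mean-field bias correction, can be sketched as follows. This is an illustrative toy version with hypothetical function and variable names and synthetic numbers, not the actual PLURISK merging methods, which are more elaborate (e.g. spatially variable adjustments and full gauge-radar merging):

```python
# Toy mean-field bias adjustment of a radar rainfall field using rain gauges:
# rescale the whole radar field by the ratio of mean gauge to mean radar
# rainfall at the gauge locations. Illustrative sketch only.
import numpy as np

def mean_field_bias_adjust(radar_field: np.ndarray,
                           gauge_values: np.ndarray,
                           gauge_pixels: list[tuple[int, int]]) -> np.ndarray:
    """Return the radar field rescaled so that, on average, it matches the gauges."""
    radar_at_gauges = np.array([radar_field[r, c] for r, c in gauge_pixels])
    valid = (radar_at_gauges > 0) & (gauge_values > 0)
    if not valid.any():
        return radar_field                      # nothing to adjust against
    bias = gauge_values[valid].mean() / radar_at_gauges[valid].mean()
    return radar_field * bias

# Example with a synthetic 3x3 accumulation field (mm) and two gauges.
radar = np.array([[1.0, 2.0, 1.5],
                  [0.5, 3.0, 2.5],
                  [1.0, 1.0, 0.0]])
gauges = np.array([2.4, 3.6])                  # observed gauge accumulations (mm)
pixels = [(0, 1), (1, 1)]                      # radar pixels containing the gauges
adjusted = mean_field_bias_adjust(radar, gauges, pixels)
print(adjusted[0, 1], adjusted[1, 1])          # radar values pulled towards the gauges
```

Returning to the WP2 two-dimensional inundation modelling: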
To enhance the application of such 2D surface inundation modelling and mapping, simplified approaches, identified and calibrated against the detailed approaches, were tested and implemented. This sewer inundation model has also been applied for the simulation of the STEPS nowcasts provided by WP1 for selected storms, to obtain urban flood hazard maps that are combined with the urban damage functions obtained by WP3 into urban flood risk maps.
For WP3 - Socio-economic risk quantification:
Based on a theoretical and analytical literature review on flood damages and risks (functions), a specifically targeted questionnaire was distributed among flood victims. We obtained permission from the Belgian Privacy Commission to use the data from the Belgian Disaster Fund, so that we could execute a large-scale written questionnaire. This created possibilities for innovative statistical analyses, which were completed. Two distinct types of multivariate flood damage models, “depth-damage” models and “rainfall-damage” models, were developed. Moreover, an intangible damage analysis was conducted, with interesting findings. The potential damage to historic buildings and sites was also explored. The “depth-damage” model was implemented for integration with the 2D sewer inundation depth results of WP2. The same was done for the “rainfall-damage” model, in view of WP4.
For WP4 - Risk communication and flood risk warning:
All PLURISK components (rainfall nowcasts, merged radar and rain gauge products, urban inundation model, socio-economic impact assessment approach) were brought together for the Gent case study to come up with a prototype integrated and multi-disciplinary urban inundation risk nowcasting system. By running the different STEPS-BE ensemble members, the rainfall nowcast uncertainty is propagated to the inundation hazard and risk results. The other uncertainties in the inundation modelling and socio-economic impact assessment are being added and propagated in that modelling chain. Different visualization approaches for the uncertainties were tested, which formed the basis for the urban inundation risk warning.
For WP5 - Risk reduction by eco-solutions:
The state of the art of concepts for eco-solutions and the evaluation of such solutions was reviewed, which identified three main “gaps” in the literature. These include gaps regarding the definition of green spaces, the typology of green spaces, as well as the role of different types of green spaces in water regulation. The analysis of infiltration and runoff data collected from the literature highlighted a lack of consensus, showing the difficulty of evaluating the (function of) ecosystem services.
Assuming that ecological processes / ecosystem services depend on the spatial structure of green spaces, this structure was analysed by calculating different landscape indices. Five cities in the same region were studied to identify possible similarities between spatial patterns. The total amount of green spaces in the five cities was found to be comparable to that of other European cities. The size of green spaces was found to increase along an urban-rural gradient, while their density decreases. We also found that the connectivity of green spaces increases and that their shape becomes more complex along the urban-rural gradient.
Four categories of green infrastructures were identified as being useful in rainwater management, actively preventing flooding by reducing runoff. They are green roofs, bio-swales, infiltration trenches and rain gardens.
Each one was described in a specific technical leaflet. In parallel, case studies were analysed in Gent. The studied neighbourhood regularly floods under heavy rain. A holistic analysis was made from a spatial planning perspective to integrate the most appropriate green infrastructures in order to prevent flooding and to enhance urban biodiversity.
D. Contribution of the project in a context of scientific support to a sustainable development policy
The PLURISK project focused on:
“Natural risks” related to “extreme meteorological phenomena”: extreme rain storms, urban floods;
“Areas at risk”, being the Belgian society in the urban environment, and the “material cultural heritage”;
and considered the following chain to describe the risks:
Hazards: the probability that fine-scale extreme rainfall (that might induce urban flooding) will occur (variability) at a certain intensity, time and given place; and highlighting precursory signs (based on NWP), conditions that will induce urban floods (sewer hydraulics), as well as the potential aggravation of risks caused by a combination of other hazards and site effects (high downstream water levels). Uncertainties regarding the fine-scale extreme rainfall hazards were reduced through different types of rainfall-based data sources, and explicitly quantified.
Vulnerability: identification and evaluation of the impacts on and potential damage to, as well as the potential resilience of, at-risk areas. The research in particular took into account the multiple socio-economic and environmental factors which determine or influence such vulnerability.
Scientific support for managing risk: this involved i) evaluating risk based on the integration of scientific knowledge of the risk and the vulnerability in question; ii) analysing risk management measures while looking for a balance between measures to ensure early detection, prevention, impact limitation and restoration, to support reduction of or adaptation to the risk; and iii) analysing the perception of the risk, and the concerns and values of society, in order to suggest how to manage risk in a way that is acceptable to society and how to communicate this in a suitable manner.
The proposed research was thus multidisciplinary and systemic, and covered all three elements of the considered risk chain.
The PLURISK project more concretely aimed to support the local authorities in the quantification, forecasting, warning, control and management of pluvial floods. This aim was reached through the development and testing of methodologies and software tools for:
Nowcasting of fine-scale extreme rainfall, using advanced techniques for storm cell tracking and integrating national (C-band) and local (X-band) radar technology, and numerical weather prediction; quantification of the uncertainty in this nowcasting (WP1);
Two-dimensional fine-scale modelling, mapping and nowcasting of inundations in urban areas, and quantification of uncertainties on these inundation quantifications (WP2);
Socio-economic risk quantification, incl. material and immaterial (social, ecological) damage assessment, quantification of risk perception (awareness), coping capacity and recovery capacity, impacts on built heritage, and uncertainty estimation on these impacts (WP3);
Risk communication and flood risk warning based on the extreme rainfall and urban flood nowcasting results (WP4);
Risk reduction by both prevention/management and real-time control actions.
New management strategies were developed by better interfacing between spatial planning, eco-management and urban water management (e.g. green-blue water; the role of landscape architecture; restoration of biodiversity in urban areas, incl. ecotechnologies on buildings and in very densely urbanized areas, and considering the services of biodiversity for the human population) (WP5)." "Global identification methods for linear parameter varying (LPV) systems" "Jan Swevers" "Production Engineering, Machine Design and Automation (PMA) Section, Robotics, Automation and Mechatronics (RAM)" "Advancement in technology goes hand in hand with advancement in science and engineering. The science and engineering communities never stop revising and upgrading their methodologies so as to keep up with the global quest for continuous improvement in terms of the efficiency of industrial processes and quality of life in general.
The control engineering community, in particular, aims to ensure robust and high-performance control systems. The central, decision-making element of a control system is the controller: historically a device, nowadays software implementing the laws of control theory and the insights about the system being controlled, or in other words, “steered” towards a desired behavior. Many methodologies for controller design require a system model: a component that, using mathematical language, embodies the laws of nature governing the behavior of the system, and enables a prediction of its reaction to certain excitations. The process of deriving mathematical models from measured system data is, in control engineering, referred to as system identification. By virtue of its broad applicability and the diversity of the systems encountered in practice, system identification is a lively research field. Furthermore, the increasing requirements on the performance of control systems ask for more accurate models and hence more dedicated modeling strategies, which results in methods intended for specific classes of systems.
This thesis’ field of application is industrial mechatronic systems that perform fast and highly precise positioning and manipulation of a load or end effector, a task of great importance in industrial automation. They typically fall into the category of nonlinear systems, a class of systems that is often intricate to identify and to control. The research subject of this thesis is the identification of the aforementioned nonlinear systems within the linear parameter-varying (LPV) framework. The LPV framework enables an approximate description of certain nonlinear dynamics with a model that holds a linear relation between the system inputs and system outputs, by accommodating the nonlinearities in a suitable dependency on one or several distinctive signals called scheduling parameters. The goal is to provide a practical and automated modeling framework for the estimation of LPV models that are of low complexity, compliant with and favored by modern LPV control design methods.
While the literature on LPV system identification distinguishes between methods that use data originating from experiments with varying scheduling parameters and methods that use data obtained in experiments during which the scheduling parameters are fixed, this thesis proposes an approach that combines the two data types. Furthermore, the data can be given in the time domain, the frequency domain, or both. Numerical and experimental validation showed the versatility of the approach in its ability to capture the system dynamics.
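For reference, the LPV model class described above is commonly written in a state-space form with an explicit scheduling dependency. The sketch below is a generic textbook parametrization; the symbols A_i, f_i and n_p are illustrative and do not necessarily match the exact structure used in the thesis:

```latex
% Generic discrete-time LPV state-space model with basis-function dependence
% on a scheduling parameter p(k) (textbook form, for illustration only):
\begin{aligned}
x(k+1) &= A\bigl(p(k)\bigr)\,x(k) + B\bigl(p(k)\bigr)\,u(k),\\
y(k)   &= C\bigl(p(k)\bigr)\,x(k) + D\bigl(p(k)\bigr)\,u(k),\\
A\bigl(p(k)\bigr) &= A_0 + \sum_{i=1}^{n_p} f_i\bigl(p(k)\bigr)\,A_i,
\quad \text{and similarly for } B, C, D.
\end{aligned}
```

The identification problem is then to estimate both the matrices and a suitable set of basis functions f_i, which is the model-structure question addressed in the research described next.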
The considered experimental setup was a lab-scale overhead crane system. The aforementioned approach was taken as the foundation for the research that followed, which concerned the determination of an adequate dependency of the model on the scheduling parameters, a task in LPV system identification recognized as highly challenging. A too simple, or in other words incomplete, dependency of the model on the scheduling parameters leads to a structural bias, whereas a too complex dependency results in overfitting and complicates consequent LPV control synthesis and analysis. On the one hand, this dependency must always be predefined. On the other hand, there is usually very little to no insight into the internal functional dependencies of a potential LPV model on the scheduling parameters. This thesis therefore took up the challenge of searching for the most appropriate functional dependency yielding accurate yet simple LPV models, in a self-contained way. Two largely independent approaches that allow controlling the complexity of the model’s dependency on the scheduling parameter were explored. Both are based on the principle of applying L2,1-norm regularization to the estimation problem, but they differ in the way they assess the importance of the model parameters. One approach “judges” the parameters according to their contribution to a state-space matrix and, consequently, to the system’s response, whereas the other takes advantage of B-spline functions and translates the search for an optimal model dependency on the scheduling parameter into the search for the optimal number and locations of the knots. Numerous numerical and experimental validations show the potential of the developed methods in finding a trade-off between model accuracy and simplicity. The considered experimental cases were a mechatronic XY-motion system and an industrial ball screw setup, both challenging due to high levels of nonlinear distortions induced by friction and backlash.
Finally, the methods based on B-splines are implemented in the Linear Control Toolbox, a freely available MATLAB-based software package primarily focused on optimal feedback controller design for LTI and LPV systems." "Development of biomechanical algorithms in health assessment and of a business plan for future establishment of a service platform for CHaT (Center of Health and Technology)." "Steven Truijen" "Movement Antwerp (MOVANT)" "Society 5.0 is a vision in which emerging technologies are used as a tool in health. In Medicine 4.0, smart health is required for innovation. The multidisciplinary interfaculty institute CHaT (Center for Health and Technology) already has more than ten years of experience in methods and algorithms developed using optoelectronic marker-based measurement systems for biomechanical movement and gait analyses. The main drawback of this optical technology is the use of passive markers, because it is time-consuming to place the markers on the skeleton of the subject. In addition, occlusions are common, since each marker must be detectable by at least 2 cameras at any time in order to be correctly interpolated. To solve this, a new 4D4A scan lab (4D scanner for Accelerating Advanced Motion Analysis and Application), which is unique in the world, was recently installed in CHaT with the goal of opening new frontiers in Medicine 4.0, elevating the University of Antwerp to one of the top universities in the world for Human Modelling and Simulation in Medicine.
The 4D4A scan lab offers the possibility to capture the human body in 4D (3D + time) without the use of markers, so that close contact is no longer required. This offers opportunities such as measuring in COVID times and free, natural movement of the test person due to the absence of disturbing attached markers. With an accuracy of less than 1 mm, 178 3D images are recorded per second and processed automatically. In addition, the 4D4A scan lab can extend sparse skeletal movements and static 3D geometric information into accurate new dynamic 4D body measurements, such as an arm or abdominal circumference over time. This now opens new applications, not only in basic or clinical research, for example monitoring gait and balance in stroke patients, but also in many industrial sectors, such as biomedical engineering and industrial ergonomics. Before the 4D4A scan lab can be offered as a service to industry, hospitals and research institutions, a number of bottlenecks still need to be addressed: 1) The initialisation of the 4D4A scan lab: preparation, coordination and general management of the 4D4A Lab activities; 2) Simultaneous 4D motion tracking and reconstruction of 4D digital models of the human shape, complex geometry, and soft tissue, synchronizing different devices; 3) Biomechanical and ergonomic analysis of 4D dynamic body metrics by developing specific algorithms, processing scripts and formats to visualise and analyse the data with free software packages, e.g. Blender (blender.org) and OpenSim (simtk.org/projects/opensim); 4) Validation of the developed algorithms, protocols, products, and devices to meet the specific demand from industry; 5) Development of a business plan for the future creation of a service platform for CHaT (Center of Health and Technology), in order to transform 4D high-tech knowledge into a service platform, creating new collaborations resulting in services, patents and spin-offs." "Machine learning for failure identification and reliable mitigation of rotor eccentricity in electrical drives" "Davy Pissoort" "Mecha(tro)nic System Dynamics (LMSD), Distributed and Secure Software (DistriNet), Waves: Core Research and Engineering (WaveCore)" "Mechanical vibrations are the curse of electrically driven mechatronic systems, such as robots and autonomous vehicles. They cause those systems to degrade and result in lower-quality processes and products, higher costs, longer system downtimes and reduced reliability. Studies show that on average 5 % of production capacity is lost due to unplanned shutdowns, of which nearly half (46 %) are the result of mechanical failures (source: Marsh & McLennan). This all leads to an approximately 50 % higher cost to fix the problem after the event has occurred (source: U.S. National Response Center). Moreover, and what is worse, for electrically driven mechatronic systems, initial vibrations caused by an imperfection or fault at one location can easily lead to significant additional vibrations at many other locations. This deteriorates not just the already faulty component, but also the other components in the system, leading to many opportunities for premature failures. However, we firmly believe that these amplified vibrations and their adverse effects can be avoided by proper condition monitoring and control algorithms. Unfortunately, there are currently no solutions available to actively dampen the amplification of such initial vibrations in electrically driven mechatronic systems without additional hardware.
Existing commercial monitoring systems are only able to detect vibrations. Moreover, commercial monitoring systems are completely black-box, which hinders machine specialists when it comes to improving the vibration behaviour by tuning the machine controls. The costs of such commercially available monitoring systems are also prohibitive for permanent installation at large scale. On the other hand, motor drives in mechatronic systems already comprise current sensors, which could allow detecting mechanical vibrations when using an appropriate algorithm. This brings up the fundamental question whether this information can also be exploited to adapt the control strategy of the mechatronic system to actively dampen vibrations without adding any extra hardware. However, the challenges that need to be overcome to achieve this should not be underestimated and are summarized below. So far, control strategies used in motor drives have focused mainly on torque creation, control of the angular speed/angle of the rotor, and brake energy regeneration. While some control strategies try to minimize the unwanted induced tangential vibration on the rotor (caused by torque variations), none of them considers controlling the amplification of radial vibrations (initiated by mechanical faults). At the basis of that is the fact that control strategies generally assume that the rotor is always perfectly centered with respect to the stator. Under these ideal conditions, the radial forces generated on the rotor and stator cancel each other, so that they do not have to be taken into account in the control strategies. Unfortunately, radial vibrations due to e.g. bearing-related faults or unbalance lead to non-concentric rotation of the rotor. This results in (i) unbalance that has to be counteracted by the bearings in the motor and the stator, but also in (ii) an unequal internal radial force distribution, which can amplify the vibrations. When this combination becomes unstable, additional vibrations and failure rates increase very rapidly. The motor current should contain additional harmonics due to the time-varying position of the rotor centre, which can be monitored and used as an indicator for unwanted vibrations. The 'Motor Current Signature Analysis' (MCSA) technique makes use of this principle. However, the current state of the art in this field only allows such unwanted vibrations to be detected and related to possible causes, but not to be dampened or eliminated. In this PhD proposal, we want to take this a significant step further by developing an active-damping strategy that hinges on the hypothesis that motor control can be effectively applied to reduce radial vibrations inside an electrically driven mechatronic system." "Exploiting metadata, ontologies and semantics to design/enhance new end-user experiences for adaptive pervasive computing environments." "Patrick De Causmaecker" "Computer Science, Kulak Kortrijk Campus, Comparative, Historical and Applied Linguistics, Kulak Kortrijk Campus, Informatics Section" "Adaptive Systems and Pervasive Computing change the face of computing and redefine the way people interact with technology. Pioneers pursue a vision in which technology is seamlessly situated in people's lives and adapts itself to the characteristics, requirements, and needs of the users and the environment without any distraction at the user side. Adaptive Systems research mostly focuses on individual applications that can alter their interface, behavior, presentation, etc., mainly with respect to the user characteristics and needs.
The pervasive computing vision considers adaptivity as a relation between the computing setting and the context, rather than a one-to-one relation between user and application. Therefore, it enlarges the source of adaptation from user characteristics to a broader notion, which is context. In this respect, emerging context-aware systems try to utilize the characteristics, requirements etc. of the entities relevant to the computing setting, while including the user as a core entity. Context is an open concept and collected contextual information is often imperfect. Therefore, the development and management of large-scale pervasive and adaptive systems and adaptation logic are complex and challenging. Several researchers have tried to remedy this situation by providing middleware support, approaches, and methods for dealing with imperfectness, context modeling, management, reasoning etc. However, due to the openness of contextual information, on the one hand, it is almost impossible to enumerate every possible scenario and to define or mine adaptation rules. On the other hand, absolute machine control is not always desirable considering the intellectual characteristics of the end-users. The aforementioned criticism suggests that efficient software development approaches and means to enable end-users to reflect on their own state of affairs are required. In this thesis, we address these issues at an individual application level and at a collective level. The former deals with development and end-user interaction issues on the basis of individual applications providing adaptive experiences, while the concern of the latter is on the basis of distributed applications serving end-users in concert. For each level, we provide a number of conceptual and practical contributions, mainly applied to the e-learning domain. Regarding the individual application level, we utilize an approach based on high-level abstractions, particularly ontologies, aiming at facilitating the development and management of adaptive and pervasive systems. Ontologies are used for the acquisition of domain knowledge and semantics at the first stage. Afterwards, the goal is to use the resulting ontology for dynamic adaptations, end-user awareness, intelligibility, software self-expressiveness, user control, and automated software development with a Model Driven Development perspective. Our contribution is mainly conceptual at this level. We first review the main notions and characteristics of Pervasive Computing and Adaptive Systems along with end-user considerations. We criticize the pervasive computing vision from a user-centric perspective and elaborate on the practical body of existing literature, intersecting Knowledge Representation, Logic, and the Semantic Web, to arrive at a uniform development approach meeting the aforementioned concerns. Regarding the collective level, we focus on personal and pervasive environments and investigate how high-level abstractions and semantics, varying from generic vocabularies and metadata approaches to ontologies, can be exploited for the creation of such environments and to enrich and augment the end-user experience. Our contribution is built on the conceptual and practical approach that we derived earlier. We envision encapsulating digital and physical entities having a digital presence in the form of widgets, and enabling the creation of web-based personal environments through widget-based user interface mashups.
For this purpose, we first address the widgetization of existing applications, in a broader sense in terms of ubiquitous web navigation, through harvesting semantic in-content annotations from application interfaces. An ontology-driven development approach allows automated annotation and generation of user interfaces. We introduce specifications and mechanisms for the annotation, extraction, and presentation of embedded data. We introduce a set of heuristics to exploit domain knowledge and ontology metadata, with ontological reasoning support, for generating user-friendly navigation experiences. Thereafter, we introduce an open and standard widget platform, an interoperability framework, and methods for manual and automated widget orchestration. We introduce public widget interfaces and employ semantic web technologies, particularly embedded semantics and ontologies, to address data and application interoperability challenges. We build an end-user data mobility facility on top of the proposed interoperability framework for user-driven manual orchestration. We propose a method for mining user behavioral patterns from the user logs. The method is based on the adoption of workflow mining techniques, for extracting topology, and multi-label classification techniques based on a label combination approach, for learning the routing criteria. We exploit the harvested patterns for demand-driven automated widget orchestration. We compare our approaches and methods with a broad interdisciplinary literature. We provide prototypes for each practical contribution and conduct end-user experiments and usability assessments to prove the computational feasibility and usability of the proposed approaches and methods." "European Integrated Research Training Network on Advanced Cryptographic Technologies for the Internet of Things and the Cloud." "Bart Preneel" "Computer Security and Industrial Cryptography (COSIC)" "Summary
The goal of this ETN is to develop advanced cryptographic techniques for the Internet of Things and the Cloud and to create implementations that offer a high level of security and increased usability, for a wide range of physical computation platforms. The ITN will equip a group of 15 early-stage researchers with a set of interdisciplinary skills combining mathematics, computer science and electrical engineering that will allow them to create advanced cryptographic solutions that will be available for commercial applications. The 8 beneficiaries (including 2 companies) are leading research teams in the area of applied cryptology with a strong track record of collaboration; they are complemented by 7 partner organisations from industry (including 2 SMEs). The training of the fellows will be guided by a personal development plan. A central component is training by research, supported by an intensive program of workshops, summer schools, seminars, research visits, and secondments. The training will be complemented with transferable skills that also support the transfer of research to an industrial context. The management structure of the project is built on a pro-active approach with responsibilization of the fellows. The dissemination and outreach of the project activities target a broad range of stakeholders. The ITN contributes to the ERA by helping to overcome the fragmentation in the area of applied cryptology.
The research supports the trust and security component of the Digital Agenda for Europe and responds to the growing attention of EU policy makers for societal needs related to privacy and cybersecurity. The societal relevance and timeliness of this research have been emphasized by the revelations made by Snowden, which provide clear evidence of mass surveillance by nation states and of serious weaknesses in our current infrastructure. An essential component of a response to these revelations consists of a broad deployment of advanced and innovative cryptographic techniques.
Research Area
We are in the midst of an evolutionary change with respect to computing and information technologies. Whereas in the first decades of the IT revolution data was mostly kept in private storage or at most company-level networks, in recent years there has been an increasing trend towards outsourcing the storage of data to large corporate servers, the cloud. That is, instead of purchasing actual physical devices (servers, storage, network), clients rent these resources from a service provider. Storing data in the cloud has a number of advantages. The data can be accessed anytime, from everywhere, and from different devices. These devices can be a home PC, a modern smart phone, an external service provider performing computations on the data such as an e-commerce application running in the cloud, or even sensors continuously collecting new data. This new computation paradigm leads to a network of servers in the “back-end” which is invisible to the user. By sharing the resources among their clients, the cloud provider can offer a lower per-unit price and hence offer economic benefits compared to single-client solutions, as well as increased elasticity. These new applications bring important security and privacy challenges. While experts have been warning about these risks from the start, the revelations of Snowden starting in mid-2013 have brought these issues into the spotlight: cloud environments and the networks interconnecting clouds form an attractive target for mass surveillance. In addition, data stored in the cloud has been abused, either by cloud service providers or by hackers. Users are at the mercy of their storage providers with respect to the continued availability of their data or with respect to where data is processed. For secure outsourcing of computation, users may not have any guarantees that the computation has been performed correctly or that it has not leaked important private information about the data. Hence, a secure cloud environment should provide strong guarantees of the users’ privacy and the integrity of their data and computation. Despite the far-reaching implications that follow from this new IT landscape, cloud security has been studied mostly from a risk management, system and network security perspective; the deployment of strong cryptographic protection in the cloud has been very limited because of its overhead; moreover, in spite of recent progress, there is still no practical solution for performing efficient and verifiable computations on encrypted data. Only very recently has the potential for the use of cryptographic techniques in the cloud started to crystallize, and a number of exciting research challenges have emerged. While cloud computing increases the scale of our ICT infrastructure, we are in the midst of the development of the Internet of Things (IoT) (which is related to pervasive computing, ubiquitous computing, and disappearing computing).
It is expected that by 2020, 50 billion devices will be connected to the Internet. We are currently experiencing the mobile revolution, powered by sophisticated smart phones that have an increasing number of sensors and communication modes. In the next 5 to 10 years, there will be many more (exciting) pervasive applications with a strong need for security that are based on a growing number of small processors or nodes (e.g. RFID tags, sensor nodes), leading to cyberphysical systems. Examples include medical implants that communicate on a permanent basis and are upgradeable, car-to-car communication, smart grids, smart factories and smart buildings. Many of these applications will need innovative security solutions. For example, medical implants will typically need bidirectional communication for the relaying of sensor data and for receiving software updates. Obviously, strong access control and privacy mechanisms are required. Environmental restrictions such as low power, low energy and upgradability also have to be taken into account. It is also obvious that the Internet of Things will interact with the clouds, which will create new security and privacy challenges. While there has been extensive work on security protocols and solutions for RFID tags and IoT, there are still major research challenges: there is a need for cryptographic algorithms that improve the energy or power consumption by an order of magnitude compared to current solutions. Moreover, sound and efficient protocols need to be developed that are both privacy-friendly and able to corroborate the location of the device using distance bounding techniques.
Expertise
In order to develop cryptology for the cloud and IoT, we also need to develop secure hardware and software implementations of cryptographic components. Many cryptographic primitives, when actually implemented and deployed, fail to achieve satisfactory levels of usability; for many applications, the overhead induced by cryptography is still too large. In addition, scores of vulnerabilities have been identified in implementations of cryptographic algorithms. Finally, over the last decade the insight has grown that there is a big gap between the mathematical model of a cryptographic algorithm and the physical reality: many physical implementations can be attacked by exploiting physical phenomena such as execution time, power consumption, electromagnetic radiation or the response to the injection of faults. By now it is clear that the impact of physical attacks on implementations is much larger than anticipated. There is a strong need for novel algorithms that are easier to protect against such attacks and for implementations that offer a high level of security and increased usability, and that address deployment issues for a wide range of physical computation platforms. A crosscutting concern for all these areas is the development of quantum computers, which promise to offer an exponential growth in computational power with the increase in the number of parallel processors. Quantum computers are most efficient when dealing with highly structured algebraic problems such as those used in public-key cryptology. All widely used public-key cryptosystems are based on a small set of problems from algebraic number theory, namely factoring and discrete logarithms. Public-key cryptography as implemented today will become completely insecure in the event that large quantum computers become a reality.
While experts remain divided on the time scale, the probability that large quantum computers are available in the next 10 to 20 years is certainly non-negligible. As an example, documents leaked by Snowden have shown that the NSA has a program with a budget of 80 million US$ to build a quantum computer. As most of our cyber infrastructure relies on public-key cryptography, this could lead to a devastating scenario. In order to avert such a future catastrophe, it is extremely important to design cryptosystems based on new paradigms that can resist quantum-computer attacks, such as lattice-based algorithms. This requires a long-term research effort that takes into account industrial requirements. Currently this area is under-researched because it falls beyond the 3-5 year window of industrial research, while the academic research community has been aware of the problem for 20 years; while some ideas have been developed, at this stage no mature solution is known that would be ready for deployment in the next 5-10 years.
Approach
The research work is structured into three WPs, which deal with cryptography for IoT, cryptography for the Cloud, and Physical Security, Usability, and Deployment. WP1 focuses on cryptography for IoT. Many of the cryptographic mechanisms proposed in the 1990s were designed for PCs and server platforms. Moreover, many symmetric-key algorithms were designed with large security margins in the hope that this would strengthen the algorithms against attacks that were not known at design time. Furthermore, asymmetric-key algorithms of the time were complicated to describe mathematically and are expensive in terms of code size, execution time, etc. In the mid-2000s, the emphasis switched to lightweight designs. Here, the main design criterion is to minimize the consumption of power or energy per encrypted bit. Consequently, new designs come with a lower security margin. At this moment there is a rather scattered design space, with a lack of a systematic design approach that can offer deeper insights into the trade-offs between area, power, energy and security. This would be essential if one wants to be confident in a design that improves the existing security/performance trade-offs by one order of magnitude. The objectives of this WP are: firstly, to study the security of existing lightweight algorithms, and secondly, to design new lightweight algorithms and protocols. Since implementation cost is an essential concern for lightweight designs, there will be a close interaction with WP3. In addition, we will study how new algorithms can be designed that afford simple and efficient protection methods against side-channel attacks, rather than first designing the algorithms and then the countermeasures. The usage of cloud storage leads to new challenges in IT security and privacy that have to be solved before the new technology can be widely adopted, e.g. for sensitive personal data. WP2 (New Challenges in Cloud Computing) has as its objectives the study of the security of algorithms that protect the users' data (confidentiality) in the cloud as well as their identities (unlinkability), while still allowing them to perform useful operations on the data (remote computation). There exist partial and theoretical solutions to these challenges, but these are impractical in the sense that they are inefficient or provide only limited security. Thus, the research objective is to design more efficient solutions that are directly applicable in real-world cloud settings.
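To make the WP2 notion of computing on protected data concrete, the following is a textbook-style toy sketch of an additively homomorphic scheme (a Paillier-style cipher with deliberately tiny, insecure parameters). It illustrates the general idea only and is not one of the algorithms developed in this network:

```python
# Toy Paillier-style additively homomorphic encryption (textbook scheme,
# insecure demo parameters): a server can add ciphertexts without ever
# seeing the plaintexts.
import math
import random

p, q = 1789, 2003                    # demo primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)         # lambda = lcm(p-1, q-1)
g = n + 1                            # standard simple choice of generator
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = L(g^lambda mod n^2)^-1 mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)       # fresh randomness makes encryption probabilistic
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 123, 456
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n2               # homomorphic addition on ciphertexts
assert decrypt(c_sum) == a + b
print(decrypt(c_sum))                # -> 579, computed without decrypting a or b
```

Real deployments need far larger keys and, for the WP2 goals, far richer (and more efficient) operations than this addition-only toy provides.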
Cryptographic primitives that are implemented and deployed and actually used almost always turn out to be breakable by attacks that target the implementations rather than the primitives per se. For example, tens of thousands of Internet public keys that had been generated from bad randomness are broken. AES-CBC in HTTPS is broken using side-channel information leaked by the timing of decryption operations, even when AES is implemented in hardware; an almost correct, but not always correct, implementation of ECDH in OpenSSL is broken. Paper after paper at the CHES conference series, now the largest yearly cryptography conference, has demonstrated ways to break cryptographic systems through side-channel analysis. These side-channel attacks are particularly troublesome for a diverse range of security tokens such as the TFL Oyster card, Barclays PINSentry, and RSA SecurID; security tokens usually carry payloads related to identity or significant monetary value, and are used within uncontrolled physical environments that are easily accessible to attackers. A common theme of the research in WP3 (Physical Security, Usability, and Deployment) is the large, often devastating, gap that separates physical reality from these mathematical models. WP3 will explore cryptographic security, usability, and deployment issues for a wide range of physical computation platforms. These platforms range from busy, centralized, high-end server clusters through much smaller mobile and embedded devices down to the tiniest sensor nodes. WP3 covers not only today's most popular cryptographic primitives and protocols but also post-quantum cryptosystems to protect users in the future. Interactions and collaborations between the research work packages: WP1 and WP2 design new cryptographic primitives for IoT and for the Cloud, respectively. WP3 evaluates and selects cryptographic primitives from a real-world perspective, and provides feedback to WP1 and WP2 accordingly. WP3 collaborates with WP1 and WP2 on interactions between design issues and usability issues, especially efficiency. WP1 and WP2 are responsible for building confidence that the mathematical outputs of their cryptographic primitives are incomprehensible to an attacker based on traditional, non-physical, cryptanalysis. WP3 analyzes and defends against the gaps between mathematical outputs and physical outputs. As above, WP3 provides feedback to WP1 and WP2, and collaborates with WP1 and WP2 on interactions between usability and design." "Towards a knowledge-based task specification method for autonomous robot situation awareness" "Nikolaos Tsiogkas" "Robotics, Automation and Mechatronics (RAM)" "Situation awareness is the general state of knowledge an agent has about its environment that is relevant to the currently executed task. It is achieved by actively sensing relevant information in a process called situation assessment. Both situation awareness and assessment have been studied for humans executing tasks. When it comes to autonomous robots, situation awareness and assessment are usually implicitly encoded in the control software of the robot, and are created manually by the human who engineered that software. This approach does not allow robots to react to situations that were not explicitly coded in their software, nor does it allow them to execute new tasks that the user of the system may want to execute. To overcome this problem, a generic method for understanding the current situation and gathering information about it would be required.
Given that situation awareness and assessment are always relevant to the currently executed task, the first step towards understanding the situation is having a task specification that would encompass information about the situation awareness. This work aims at providing such a task specification, along with software to reason about the situation awareness requirements of the given task." "Towards Deeply Reconfigurable, Long-life Internet of Things Platforms" "Danny Hughes" "Informatics Section" "Our world is racing towards a new era of smart applications, examples of which include Smart Agriculture, Smart City, Smart Manufacturing, and Smart Traffic. These new applications have great potential to contribute to society, industry, and our daily lives. The Internet of Things (IoT) provides the necessary technologies to support such smart applications by connecting the digital and physical worlds. According to market predictions, there will be more than 20 billion IoT devices connected by the year 2020. While the potential is clear, developing and deploying an IoT network for an application remains onerous: 1) The development cost is high, and development is mostly ad hoc. It is hard to reuse existing systems outside of specific applications. 2) IoT platforms are not flexible. It is costly and time-consuming to customise or reconfigure them. 3) IoT end devices suffer from limited battery lifetime. This increases the cost of maintenance and limits the use cases when an application requires long-term deployment. This thesis tackles these problems through targeted contributions in this research field at the system software level. The first contribution is μPnP, which tackles the flexible device reconfiguration problem by introducing Plug-and-Play peripheral integration into IETF class-1 devices. Combined with low-power identification hardware, a platform-independent software driver, and a standards-based peripheral data interaction model, the μPnP system significantly reduced IoT development effort and achieved a truly Plug-and-Play reconfigurable platform. The second contribution extends μPnP with flexible adoption of a range of low-power networks. We selected two representative networks, SmartMesh IP and LoRa, for our implementation. SmartMesh IP represents a range of mesh networks based on the Time Synchronized Channel Hopping (TSCH) technique; it forms a meshed network which offers high reliability. LoRa is one of the Low Power Wide Area Network (LPWAN) technologies; it uses a star topology, but achieves multi-kilometre coverage in urban areas. Neither of these networks fits every IoT scenario, and thanks to our contribution, application developers can flexibly choose a network to match their use case. The third contribution looks at the power problem of IoT devices. Energy harvesting is considered the most promising technique for extending IoT device lifetime. However, unpredictable environmental energy budgets, dynamic runtime power consumption, and complicated energy-aware software management together form an unavoidable barrier. This contribution presents AsTAR, a hardware/software system that supports developing applications on energy-harvesting platforms. AsTAR uses an adaptive scheduler to dynamically control the execution of tasks, in order to always maintain an optimum charge. The technologies developed in this thesis are now being further evaluated in a wide and growing range of industrial applications. Some of the contributions introduced in this dissertation have been commercialised by VersaSense, a KU Leuven spin-off company."
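The charge-driven scheduling idea behind AsTAR, as described above, can be illustrated with a minimal sketch; the parameter names and the simple proportional rule below are hypothetical and do not reproduce the actual AsTAR implementation:

```python
# Minimal sketch of charge-driven adaptive task scheduling, in the spirit of
# AsTAR as described above (hypothetical parameters, not the real system):
# the task period is lengthened when the battery drifts below a target state
# of charge and shortened again when surplus harvested energy is available.

TARGET_SOC = 0.80          # desired state of charge (fraction of full)
MIN_PERIOD_S = 1.0         # fastest allowed task period
MAX_PERIOD_S = 600.0       # slowest allowed task period
GAIN = 120.0               # seconds of period change per unit of SoC error

def next_period(current_period_s: float, measured_soc: float) -> float:
    """Return the next task period based on the state-of-charge error."""
    error = TARGET_SOC - measured_soc          # > 0: undercharged, so slow down
    period = current_period_s + GAIN * error   # proportional adjustment
    return max(MIN_PERIOD_S, min(MAX_PERIOD_S, period))

# Example: a device sitting below target gradually backs off its duty cycle.
period = 10.0
for soc in (0.75, 0.70, 0.72, 0.80, 0.90):
    period = next_period(period, soc)
    print(f"SoC={soc:.2f} -> run task every {period:.1f} s")
```

A real system additionally has to cope with the unpredictable harvesting budget and the runtime power profile of each task, which is what makes the problem hard in practice.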
"Tiny Thin-film Filters from a Different Angle: Correcting Angular Dependency for Spectral Imaging" "Chris Van Hoof" "Electronic Circuits and Systems (ECS)" "Spectral imaging is a technology that combines photography andspectroscopy by sampling the electromagnetic spectrum for each point in the scene. This spectrum can be used as a ``fingerprint'' to identify different materials and analyze their properties.Recent developments have enabled the integration of thin-film filters onto pixels of an image sensor, making it possible for pixels to sample specific wavelengths.Furthermore, the filter-on-chip integration makes spectral cameras robust, compact, and lightweight enough to be used in various applications including remote sensing, industrial quality control, healthcare and precision agriculture.However, to enable mass production and widespread adoption across industries, it will be essential to minimize costly and time-consuming expert support for camera calibration.One of the key concerns in calibration is the angle-dependent transmittance of thin-film filters.When illuminated by focused light, the filters might select other wavelengths than originally intended, causing the measured spectra to be smoothed and spectrally shifted. This can strongly impact application performance and hence also limits the selection of compatible lenses.Hardware solutions include the use of filter materials that are less angle sensitive, expensive telecentric lenses, or the development of lens-specific filter designs.These solutions cannot always be implemented due to a lack of compatible materials or budgetary constraints.Instead, we propose a software correction approach that enables cost-efficient use of the same sensor with most commercially available non-telecentric lenses.In this regard, this work has two major contributions.The first contribution consists of new and insightful analytical models that can be used to predict the transmittance of thin-film filters illuminated by camera lenses. The models cover arbitrarily tilted circular apertures and lenses with vignetting.Using analytical approximations predicated on classical thin-film theory, spectrally shifted measurements can be corrected in a practical way. As a result, it does not matter anymore which lens is used as long as the shift can be predicted and other distortions are negligible.The second contribution is the derivation of a novel thin-film filter model which takes into account the finite width of integrated thin-film filters. This ``tiny filter'' model predicts significant deviations from classical thin-film theory which assumes an infinite filter width.The predicted deviations are observed in multiple spectral cameras and result in the formulation of new limitations for filter miniaturization. Furthermore, the model might be used to build software tools to efficiently design tiny filters or perhaps avoid unnecessary investments.In conclusion, the developed software corrections expand the selection of compatible lenses and simplify the calibration process as long as classical thin-film theory applies.When this is no longer the case, no correction methods currently exist. The development of such methods is therefore suggested as a topic for further investigations. "