Open Theses

The following list is not exhaustive. There is always student work available in our various research projects; feel free to send an email asking about currently available topics.

Abbreviations:

  • BA = Bachelorarbeit, Bachelor's Thesis
  • MA = Masterarbeit, Master's Thesis
  • IDP = Interdisciplinary Project
  • GR = Guided Research

MA: Power-Performance Optimization for HPC Systems with Emerging Memory Technologies

 

Background:

High Performance Computing (HPC) systems face severe limitations in both power and memory bandwidth/capacity. So far, these limitations have been addressed individually: to optimize performance under a strict power constraint, Power Shifting, which allocates more of the power budget to the bottleneck component, and Co-Scheduling, which launches multiple jobs on a single node, are promising approaches; to increase memory bandwidth/capacity, the industry has begun to support hybrid memory architectures that combine multiple technologies (e.g., 3D-stacked DRAM, non-volatile RAM) in one main memory.

Approach:

In this thesis, you will look at the combination of both technology trends and develop one (or both) of the following techniques: (1) Footprint-Aware Power Shifting and/or (2) Footprint-Aware Co-Scheduling. Both ideas are based on the same observation: despite the system software's efforts to optimize data allocation on a hybrid-memory-based system, the effective memory bandwidth decreases considerably when we scale up the problem size of applications (e.g., using finer-grained or larger-scale mesh models). As a result, the performance bottleneck shifts between components depending on the footprint size, i.e., the memory consumption of the executed application, which in turn significantly affects power-performance optimization strategies such as Power Shifting and Co-Scheduling. You will provide a basic solution for this emerging problem and develop a framework to realize it.

 

More Information: Download

Contact: Eishi Arima, Carsten Trinitis, Dai Yang

BA/MA/GR: Modeling and Characterizing HPC Cluster Availability

In a current project at our chair, we are analyzing modern High Performance Computing (HPC) systems with heterogeneous architectures on the path towards exascale computing. Major challenges in exascale computing include an increasing number of nodes, dynamic resource allocation and organization, and fault resilience. In this thesis/project, a failure model for HPC systems is to be developed and the availability of HPC systems quantified.

Work Packages

• Literature Research on Availability Modelling in Cluster/Grid/Cloud/HPC Systems
• Modelling of Availability for large-scale LRZ systems, such as Linux Cluster and SuperMUC
• Model verification 
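As a starting point for the modelling work package, steady-state availability can be sketched directly from failure statistics. The sketch below models a cluster as independent, identical nodes; all MTBF/MTTR numbers and node counts are illustrative assumptions, not LRZ measurements:

```python
from math import comb

def node_availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def k_of_n_availability(a, n, k):
    """Probability that at least k of n independent, identical nodes are up."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

# Illustrative numbers only (not LRZ measurements):
a = node_availability(mtbf_hours=5000.0, mttr_hours=8.0)
print(f"single-node availability: {a:.6f}")
# a job needing 512 healthy nodes out of a 560-node partition:
print(f"512-of-560 availability:  {k_of_n_availability(a, 560, 512):.6f}")
```

A real model would go beyond this (correlated failures, repair queues, Weibull instead of exponential lifetimes), which is exactly what the literature research should uncover.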

Full-Description

Contact: Dai Yang

MA: Scalable Clustering of Large Scale Sensor Data

 

Description:

Data mining is an important tool for extracting useful information from the huge data sets produced by complex systems; industrial systems in particular are equipped with many sensors that record the conditions and states of machines as time series. This data can be used to protect assets from failure and also helps in finding better operational points. In this project, we will look into the Matrix Profile (MP), which according to its authors "has the potential to revolutionize time series data mining", study the algorithms to compute it, and implement them on an HPC cluster to analyse time series data from gas turbines.
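To make the idea concrete, here is a minimal quadratic-time matrix profile sketch: for each subsequence, the distance to its nearest non-trivial neighbour. Production algorithms such as STOMP or SCRIMP++ compute the same quantity far faster, which is exactly where the HPC implementation of this project comes in; the signal below is synthetic.

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive O(n^2 m) matrix profile: for each length-m subsequence, the
    z-normalized Euclidean distance to its nearest non-trivial neighbour."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    # z-normalize each subsequence
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    mp = np.full(n, np.inf)
    excl = m // 2  # exclusion zone: ignore trivial self-matches
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf
        mp[i] = d.min()
    return mp

# A repeated motif yields near-zero matrix-profile values at its occurrences.
t = np.linspace(0, 1, 32)
motif = np.sin(2 * np.pi * 4 * t)
rng = np.random.default_rng(0)
ts = np.concatenate([rng.normal(size=64), motif, rng.normal(size=64), motif])
mp = matrix_profile(ts, m=32)
print("minimum matrix-profile value:", mp.min())
```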

Recommended knowledge:

  1. Experience in parallel programming and High Performance Computing, e.g., MPI
  2. Familiarity with the Hadoop ecosystem and Apache Spark
  3. Knowledge in Machine Learning, data mining and time series analysis

Work Packages:

  1. Study Matrix Profile and published algorithms to compute it
  2. Implementation of the algorithms and deployment on SuperMUC
  3. Analysis of gas turbine sensor data using the developed tool
  4. Performance analysis and optimization

What you will gain:

  1. Experience in working with large-scale systems at one of the world's top supercomputing centers, LRZ (e.g., SuperMUC)
  2. Experience with one of the most common large-scale data analytics scenarios
  3. Collaboration with our industry partner IfTA GmbH, which works at the frontier of analysis and monitoring technology for industrial systems
  4. Insights into a real-world problem of supporting the energy grid, in collaboration with a gas turbine plant operator

 

 

BA/MA: Design and Implementation of a Benchmark for Predicting System Health in High Performance Computing

In a current project at our chair, we analyse modern High Performance Computing (HPC) systems with heterogeneous architectures on the path towards exascale computing. A central element of this project is to find a reliable prediction method that can determine the current health state of a given HPC environment. Our research group is currently evaluating different methods. One of them is to run an efficient and fast benchmark in order to detect abnormalities in system performance. Using such a benchmark and the corresponding historical data, one can predict upcoming system faults that may lead to a failure.
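The core detection idea can be sketched in a few lines: compare the latest benchmark runtime against its historical distribution and flag outliers. The z-score test and all numbers below are illustrative assumptions; the thesis would design the benchmark itself and a more robust statistical treatment.

```python
# Sketch: flag abnormal system performance by comparing a benchmark's latest
# runtime against its historical distribution (simple z-score test).
import statistics

def is_abnormal(history, latest, z_threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z = (latest - mean) / stdev
    return z > z_threshold  # only slowdowns indicate degrading health

history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1]  # seconds, healthy runs
print(is_abnormal(history, 10.3))  # within normal variation
print(is_abnormal(history, 14.0))  # likely degraded node
```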

Contact: Dai Yang

Description: here

Thesis Topics in Cooperation with the Karlsruhe Institute of Technology

Please Note:

The following topics are in cooperation with Karlsruhe Institute of Technology, Chair of Computer Architecture and Parallel Processing, Institute for Technical Informatics (Prof. Dr. Wolfgang Karl).

The work on these topics is part of the ENVELOPE project, which is funded by the German Federal Ministry of Education and Research.

The collaboration requires close work with and advising from both the TUM and the KIT side; hence, travel to Karlsruhe and frequent telepresence meetings are required (expenses will be covered). If you apply for one of these topics, please keep in mind that these requirements are mandatory for completing the thesis.

As part of a research project, we also encourage all students working on these collaboration topics to publish their results at scientific workshops or conferences.

For further details please contact Dai Yang.

MA: Runtime Prediction for OpenCL Kernels in Heterogeneous Systems Using Machine Learning

Motivation: Modern computer systems consist of a large number of heterogeneous processing units (PUs). To use such systems efficiently, the programmer must know different programming models for different hardware architectures. To relieve users of this complexity, many research institutions are developing library-based runtime systems. Compute kernels, such as those in BLAS, cuBLAS, and Intel MKL, are developed by architecture experts to achieve maximum performance. In addition, a runtime scheduling system enables dynamic kernel selection. To achieve the best result, information about each implementation variant, such as execution time and data transfer cost, must be collected and known beforehand. Typically, this data is collected by a library, which introduces extra overhead at runtime. In this thesis, we target this overhead and try to predict runtimes using machine learning techniques. This project is a cooperation between TUM and KIT in Karlsruhe, Germany.

For the runtime system HALadapt, a machine-learning-based runtime prediction was already developed in earlier work. First, static code analysis is performed and metrics such as the number of operations, the number of memory accesses, etc. are collected. This information, combined with knowledge of the application's runtime, is used as training data for the machine learning model. In the previous work, we found that the variance of the prediction is relatively high, making it less useful for standard CPU applications. In this thesis, similar methods and measurements are to be collected and analysed for OpenCL-based applications.
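The prediction pipeline described above can be sketched with plain least squares: static metrics in, runtime out. Everything below (feature choice, cost coefficients, data) is synthetic and illustrative; the thesis would use real OpenCL measurements and richer ML models.

```python
# Sketch: predict kernel runtime from static code metrics with ordinary
# least squares (numpy only; data is synthetic, not OpenCL measurements).
import numpy as np

rng = np.random.default_rng(1)
# features per kernel: [#operations, #memory accesses] (synthetic)
X = rng.uniform(1e6, 1e8, size=(50, 2))
true_cost = np.array([0.5e-9, 2.0e-9])            # sec per op / per access
y = X @ true_cost + rng.normal(0, 1e-4, size=50)  # "measured" runtimes

# fit a linear model with intercept
A = np.hstack([X, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_kernel = np.array([5e7, 1e7, 1.0])  # ops, accesses, intercept term
print("predicted runtime [s]:", new_kernel @ coef)
```

The high prediction variance mentioned above shows up here as soon as the features stop explaining the runtime well, which motivates the OpenCL-specific metrics to be collected in this thesis.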

Full Description

Contact: Dai Yang

BA/MA: Porting the MLEM Algorithm to Heterogeneous HPC Systems

Background

HALadapt is a hardware abstraction layer implemented as a runtime system. It is based on a library approach that operates independently of the operating system and the underlying hardware. HALadapt offers the user the possibility to define a kernel, which can be a complete algorithm or just a simple matrix multiplication. For such a kernel, the user can provide several implementation variants for different processing units and system states. When previously defined kernels are selected for execution, the runtime system dynamically selects the implementation that best fits an eligible optimization objective and the current system state. The data necessary for this selection decision is collected empirically by monitoring execution at runtime: HALadapt maintains a database that stores past execution times of an implementation, with the problem size acting as the key. If enough data is available, the database also allows predicting execution times for unknown problem sizes. Besides dynamically selecting the best-fitting implementation, the runtime system also makes sure that the input data necessary for execution is available on the corresponding processing unit at the time of execution. Hence, the user does not have to care about data transfers.
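The selection mechanism described above (record execution times keyed by problem size, predict, pick the fastest variant) can be sketched in a few lines. All names and cost numbers here are hypothetical; this is not the HALadapt API.

```python
# Sketch of timing-database-driven variant selection (hypothetical names).
import numpy as np

class VariantDB:
    def __init__(self):
        self.samples = {}  # variant -> list of (problem_size, seconds)

    def record(self, variant, size, seconds):
        self.samples.setdefault(variant, []).append((size, seconds))

    def predict(self, variant, size):
        """Interpolate runtime for an unknown size from recorded samples."""
        sizes, times = zip(*sorted(self.samples[variant]))
        return float(np.interp(size, sizes, times))

    def best_variant(self, size):
        return min(self.samples, key=lambda v: self.predict(v, size))

db = VariantDB()
for n in (256, 1024, 4096):
    db.record("cpu", n, 1e-8 * n**2)          # CPU: cheap for small sizes
    db.record("gpu", n, 5e-3 + 1e-9 * n**2)   # GPU: fast, but transfer overhead
print(db.best_variant(256))   # small problem
print(db.best_variant(4096))  # large problem
```

The crossover between the two variants is precisely what the runtime system exploits when it dispatches a kernel.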

Work Packages

• Understand GPU-MLEM, Hybrid MLEM and HALadapt

• Provide HALadapt Implementation for MLEM

• Performance Evaluation and Optimization

 

More Info

BA/MA: Porting Machine Learning Frameworks to Heterogeneous HPC Systems

Background

HALadapt is a hardware abstraction layer implemented as a runtime system. It is based on a library approach that operates independently of the operating system and the underlying hardware. HALadapt offers the user the possibility to define a kernel, which can be a complete algorithm or just a simple matrix multiplication. For such a kernel, the user can provide several implementation variants for different processing units and system states. When previously defined kernels are selected for execution, the runtime system dynamically selects the implementation that best fits an eligible optimization objective and the current system state. The data necessary for this selection decision is collected empirically by monitoring execution at runtime: HALadapt maintains a database that stores past execution times of an implementation, with the problem size acting as the key. If enough data is available, the database also allows predicting execution times for unknown problem sizes. Besides dynamically selecting the best-fitting implementation, the runtime system also makes sure that the input data necessary for execution is available on the corresponding processing unit at the time of execution. Hence, the user does not have to care about data transfers.

Work Packages

• Understand HALadapt

• Selection of one Machine Learning Framework for analysis. (e.g. MXnet, Caffe, TensorFlow)

• Provide a HALadapt Implementation for the selected Machine Learning Framework

• Performance Evaluation and Optimization

 

More Info

BA/MA: Porting the Rodinia Benchmarks to Heterogeneous HPC Systems

Background

HALadapt is a hardware abstraction layer implemented as a runtime system. It is based on a library approach that operates independently of the operating system and the underlying hardware. HALadapt offers the user the possibility to define a kernel, which can be a complete algorithm or just a simple matrix multiplication. For such a kernel, the user can provide several implementation variants for different processing units and system states. When previously defined kernels are selected for execution, the runtime system dynamically selects the implementation that best fits an eligible optimization objective and the current system state. The data necessary for this selection decision is collected empirically by monitoring execution at runtime: HALadapt maintains a database that stores past execution times of an implementation, with the problem size acting as the key. If enough data is available, the database also allows predicting execution times for unknown problem sizes. Besides dynamically selecting the best-fitting implementation, the runtime system also makes sure that the input data necessary for execution is available on the corresponding processing unit at the time of execution. Hence, the user does not have to care about data transfers.

Work Packages

• Understand Rodinia Benchmarks and HALadapt

• Provide HALadapt Implementation for Rodinia Benchmarks

• Performance Evaluation and Optimization

 

More Information

Various MPI-Related Topics

Please Note:

MPI is a high-performance programming model and communication library designed for HPC applications. It is designed and standardised by the members of the MPI Forum, which includes various research, academic, and industrial institutions. The current chair of the MPI Forum is Prof. Dr. Martin Schulz.

The following topics are all available as Master's Thesis or Guided Research. They will be advised and supervised by Prof. Dr. Martin Schulz himself, with the help of researchers from the chair. If you are very familiar with MPI and parallel programming, please don't hesitate to send an email to either Dai Yang or Prof. Dr. Martin Schulz.

These topics are mostly related to current research and active discussions in the MPI Forum, which are subject to standardisation in the coming years. Your contributions to these topics may make you a contributor to the MPI standard, and your implementation may become part of the Open MPI code base.

Many of these topics involve collaboration with other MPI research bodies, such as Lawrence Livermore National Laboratory and the Innovative Computing Laboratory. Some of these topics may require you to attend MPI Forum meetings, which take place in the late afternoon (to accommodate time zones worldwide). Generally, these advanced topics may require more effort to understand and may be more time-consuming, but they are also more prestigious.

MA/GR: Porting LAIK to Elastic MPI & ULFM

LAIK is a new programming abstraction developed at LRR-TUM

  • Decouple data decomposition and computation, while hiding communication
  • Applications work on index spaces
  • Mapping of index spaces to nodes can be adaptive at runtime
  • Goal: dynamic process management and fault tolerance
  • Current status: works on standard MPI, but no dynamic support

Task 1: Port LAIK to Elastic MPI

  • New model developed locally that allows process addition and removal
  • Should be very straightforward

Task 2: Port LAIK to ULFM

  • Proposed MPI FT Standard for “shrinking” recovery, prototype available
  • Requires refactoring of code and evaluation of ULFM

Task 3: Compare performance with direct implementations of same models on MLEM

  • Medical image reconstruction code
  • Requires porting MLEM to both Elastic MPI and ULFM

Task 4: Comprehensive Evaluation

MA/GR: Lazy Non-Collective Shrinking in ULFM

ULFM (User-Level Fault Mitigation) is the current proposal for MPI Fault Tolerance

  • Failures make communicators unusable
  • Once detected, communicators can be “shrunk”
  • Detection is active and synchronous by capturing error codes
  • Shrinking is collective, typically after a global agreement
  • Problem: can lead to deadlocks

Alternative idea

  • Make shrinking lazy and with that non-collective
  • New, smaller communicators are created on the fly

Tasks:

  • Formalize non-collective shrinking idea
  • Propose API modifications to ULFM
  • Implement prototype in Open MPI
  • Evaluate performance
  • Create proposal that can be discussed in the MPI forum

MA/GR: A New FT Model with “Hole-Y” Shrinking

ULFM works on the classic MPI assumptions

  • Complete communicator must be working
  • No holes in the rank space are allowed
  • Collectives always work on all processes

Alternative: break these assumptions

  • A failure creates communicator with a hole
  • Point to point operations work as usual
  • Collectives work (after acknowledgement) on reduced process set

Tasks:

  • Formalize “hole-y” shrinking
  • Propose new API
  • Implement prototype in Open MPI
  • Evaluate performance
  • Create proposal that can be discussed in the MPI Forum

MA/GR: Prototype for MPI_T_Events

With MPI 3.1, MPI added a second tools interface: MPI_T

  • Access to internal variables 
  • Query, read, write
  • Performance and configuration information
  • Missing: event information using callbacks
  • New proposal in the MPI Forum (driven by RWTH Aachen)
  • Add event support to MPI_T
  • Proposal is rather complete

Tasks:

  • Implement prototype in either Open MPI or MVAPICH
  • Identify a series of events that are of interest
  • Message queuing, memory allocation, transient faults, …
  • Implement events for these through MPI_T
  • Develop tool using MPI_T to write events into a common trace format
  • Performance evaluation

Possible collaboration with RWTH Aachen

 

MA/GR: Prototype Local MPI Sessions

New concept discussed in the MPI forum: MPI Sessions

  • Avoid global initialization if not necessary
  • Enable runtime system to manage smaller groups of processes
  • Provide groups for containment and resource isolation

Currently two modes of thinking

  • The main proposal builds on local operations and only at the end switches to global
  • Alternative: treat sessions as global objects

Tasks:

  • Formalize MPI Sessions using local operations
  • Complete API proposal
  • Implement prototype in Open MPI or MVAPICH
  • Evaluate performance
  • Create proposal that can be discussed in the MPI Forum

Possible collaboration with EPCC (Edinburgh)

MA/GR: Prototype Global MPI Sessions

New concept discussed in the MPI forum: MPI Sessions

  • Avoid global initialization if not necessary
  • Enable runtime system to manage smaller groups of processes
  • Provide groups for containment and resource isolation

Currently two modes of thinking

  • The main proposal builds on local operations and only at the end switches to global
  • Alternative: treat sessions as global objects

Tasks:

  • Formalize MPI Sessions using global operations
  • Complete API proposal
  • Implement prototype in Open MPI or MPICH
  • Evaluate performance
  • Create proposal that can be discussed in the MPI Forum

Bonus:

  • Work with “local sessions” topic on a clean comparison

MA/GR: Evaluation of PMIx on MPICH and SLURM

PMIx is a proposed resource management layer for runtimes (for Exascale)

  • Enables MPI runtime to communicate with resource managers
  • Came out of previous PMI efforts as well as the Open MPI community
  • Under active development / prototype available on Open MPI

Tasks: 

  • Implement PMIx on top of MPICH or MVAPICH
  • Integrate PMIx into SLURM
  • Evaluate implementation and compare to Open MPI implementation
  • Assess and possibly extend interfaces for tools
  • Query process sets

MA/GR: Active Messaging for Charm++ or Legion

MPI was originally intended as runtime support not as end user API

  • Several other programming models use it that way
  • However, often not first choice due to performance reasons
  • Especially task/actor based models require more asynchrony

Question: can more asynchronous models be added to MPI?

  • Example: active messages

Tasks:

  • Understand communication modes in an asynchronous model
  • Charm++: actor-based (UIUC); Legion: task-based (Stanford, LANL)
  • Propose extensions to MPI that capture this model better
  • Implement prototype in Open MPI or MVAPICH
  • Evaluation and Documentation

Possible collaboration with LLNL and/or BSC

MA/GR: Crazy Idea: Multi-MPI Support

MPI can and should be used for more than Compute

  • Could be runtime system for any communication
  • Example: traffic to visualization / desktops

Problem:

  • Different network requirements and layers
  • May require different MPI implementations
  • Common protocol is unlikely to be accepted

Idea: can we use a bridge node with two MPIs linked to it

  • User should see only two communicators, but same API

Tasks:

  • Implement this concept coupling two MPIs
  • Open MPI on compute cluster and TCP MPICH to desktop
  • Demonstrate using on-line visualization streaming to front-end
  • Document and provide evaluation
  • Warning: likely requires good understanding of linkers and loaders

Quantum Computing

The current generation of silicon-based computer hardware leads to new challenges in computational efficiency. With transistor sizes approaching the molecular level, performance increases can only be gained through additional parallelism. This creates problems in particular for simulations (e.g., weather/climate simulations) that strongly rely on increasing computational performance to finish within a given time frame. Here, quantum computing goes beyond these physical limitations and provides new ways to run algorithms.

BA/MA: Numerical algorithms on Quantum computers

You will become familiar with the methods of quantum computing and the existing literature. Based on this, simple algorithms will be developed and executed on quantum computers or simulators, enabling you to assess how algorithms can benefit from quantum computing.

Prerequisites: discrete structures, algorithms, numerical mathematics, basic understanding of quantum mechanics
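To give a flavour of such "simple algorithms", a state-vector simulation of a tiny circuit needs only basic linear algebra. The sketch below prepares a Bell state; real work in this thesis would target actual quantum hardware or established simulators.

```python
# Minimal state-vector simulator sketch: one-qubit Hadamard plus CNOT.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# |00> -> (H on first qubit) -> CNOT gives the Bell state (|00> + |11>)/sqrt(2)
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I) @ state
state = CNOT @ state
probs = np.abs(state)**2
print("measurement probabilities:", probs)  # mass on |00> and |11> only
```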

Contact: Martin Schreiber (martin.schreiber@tum.de)

Software tools for High-performance Computing

BA/MA: Portable performance assessment for programs with flat performance profile

The Nucleus for European Modelling of the Ocean (NEMO) is commonly used in weather and climate simulations to account for ocean-atmosphere interactions. Here, high-performance computing optimizations are mandatory to keep up with the steadily increasing demand for higher accuracy, which is mainly driven by higher resolutions. NEMO itself consists of a large set of kernel functions, each with its own optimization challenges. As a first step, performance characteristics should be extracted from these kernels using performance counters. Based on this information, potential optimizations should be pointed out.

NEMO: https://www.nemo-ocean.eu/

Prerequisites: high-performance computing, computer architectures

Contact: Martin Schreiber (martin.schreiber@tum.de)

BA/MA: Performance portable job submission

Supercomputers differ not only in their computing architectures, but also in their software stacks. Altogether, executing simulations on different supercomputers leads to additional challenges in ensuring optimal resource utilization, such as pinning, allocation of computing nodes, etc. This project is about developing tools that provide a portable, cross-supercomputer way to generate job scripts, submit jobs, and accumulate the output data of jobs.
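The core of such a tool is a mapping from one machine-independent job description to per-site batch scripts. The sketch below illustrates the idea; the machine profiles, scheduler templates, and names are invented for illustration and do not describe actual LRZ (or other) site configurations.

```python
# Sketch: generate a batch script from one machine-independent job
# description (hypothetical site profiles, illustrative templates).

TEMPLATES = {
    "slurm": ("#!/bin/bash\n"
              "#SBATCH --nodes={nodes}\n"
              "#SBATCH --ntasks-per-node={tasks_per_node}\n"
              "#SBATCH --time={walltime}\n"
              "{launcher} {executable}\n"),
    "loadleveler": ("#!/bin/bash\n"
                    "# @ node = {nodes}\n"
                    "# @ tasks_per_node = {tasks_per_node}\n"
                    "# @ wall_clock_limit = {walltime}\n"
                    "{launcher} {executable}\n"),
}

MACHINES = {  # hypothetical site profiles
    "cluster_a": {"scheduler": "slurm", "launcher": "srun", "tasks_per_node": 48},
    "cluster_b": {"scheduler": "loadleveler", "launcher": "poe", "tasks_per_node": 28},
}

def job_script(machine, nodes, walltime, executable):
    m = MACHINES[machine]
    return TEMPLATES[m["scheduler"]].format(
        nodes=nodes, tasks_per_node=m["tasks_per_node"],
        walltime=walltime, launcher=m["launcher"], executable=executable)

print(job_script("cluster_a", nodes=4, walltime="01:00:00", executable="./sim"))
```

A full tool would add submission, job monitoring, and output collection on top of this script generation step.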

Contact: Martin Schreiber (martin.schreiber@tum.de)

Linear Algebra

BA/MA: Fast implicit time integration

Solving systems of equations is a common task in scientific computing. However, this often doesn't take into account the physical meaning of the underlying system of equations to be solved. Instead of solving a global system of equations, this project will exploit the physical meaning of hyperbolic PDEs and investigate solvers that exploit the locality features of wave propagation. Success here would lead to a new way of doing efficient time integration in climate and weather simulations.

Expected outcome: Highly efficient solver for implicit time integration

Contact: Martin Schreiber (martin.schreiber@tum.de)

Weather (and climate) simulations

Weather forecasts contribute to our daily life: putting on proper clothing for the rain in two hours, planning hiking trips, and improving harvesting times for farmers, to name just a few examples. As weather forecasting is a well-established research area, many advances have led to the current state-of-the-art dynamical cores (the computing parts that simulate the fluid dynamics of our atmosphere). Several projects are available which investigate new algorithms and numerical methods to improve weather and climate forecasting models.

General prerequisites: interest in numerical algorithms, time integration of ODEs, high-performance computing

The following terminology is used in the descriptions of the potential projects:

SWE: Using a full three-dimensional atmospheric core would lead to significant computational requirements, even if only horizontal effects are to be studied. Therefore, the Shallow-Water Equations (SWE) are used as a proxy to assess the properties of numerical methods for horizontal discretization, using coefficients to represent properties of the full atmospheric equations.

REXI: Rational approximation of exponential integrators. Typically, time integration methods suffer from the so-called CFL condition. For weather and climate simulations, this leads to severe limitations on the usable time step sizes. In contrast, rational approximations of exponential integrators allow computing arbitrarily long time steps for linear operators and, being a sum of independent terms, spread the computational workload across additional computing resources.
Linear SWE on plane: https://journals.sagepub.com/doi/10.1177/1094342016687625
Linear SWE on sphere: https://onlinelibrary.wiley.com/doi/abs/10.1002/nla.2220
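The structural idea behind REXI (a sum of independent, hence embarrassingly parallel, shifted linear solves approximating the matrix exponential) can be sketched via a Cauchy contour integral with trapezoidal quadrature. The published REXI coefficients differ from this construction, but the shape of the computation is the same; contour radius and quadrature count below are illustrative.

```python
import numpy as np

def rexi_like_expm(L, t, u0, n_quad=32, radius=2.0):
    """exp(tL) u0 via (1/2 pi i) * contour integral of e^z (zI - tL)^{-1} u0
    over a circle enclosing the eigenvalues of tL (trapezoidal quadrature)."""
    n = len(u0)
    theta = 2 * np.pi * (np.arange(n_quad) + 0.5) / n_quad
    z = radius * np.exp(1j * theta)   # contour must enclose the spectrum of tL
    u = np.zeros(n, dtype=complex)
    for zk in z:
        # each term is an independent linear solve -> parallel over resources
        u += np.exp(zk) * zk * np.linalg.solve(zk * np.eye(n) - t * L, u0)
    return (u / n_quad).real

# linear oscillator u' = L u with eigenvalues +-i, well inside the contour
L = np.array([[0.0, 1.0], [-1.0, 0.0]])
u0 = np.array([1.0, 0.0])
u = rexi_like_expm(L, t=1.0, u0=u0)
print(u)  # should approximate [cos(1), -sin(1)]
```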

OpenIFS: This software is the open-source version of the dynamical core which is used in the current operational forecasting by the European Centre for Medium-Range Weather Forecasts (ECMWF). OpenIFS website: https://www.ecmwf.int/en/research/projects/openifs

SWEET: This software development is a testing platform realizing e.g. the SWE on the sphere to quickly assess the quality of new numerical methods for time integration.
Repository: github.com/schreiberx/sweet
SWEET website: https://schreiberx.github.io/sweetsite/

ML-SDC / PFASST: Spectral deferred correction (SDC) methods allow constructing higher-order time integrators from a combination of low-order (e.g., forward/backward Euler) time steppers. This can be combined with a multi-level approach (similar to multi-grid), yielding ML-SDC. Additionally, we can execute speculative solutions in time, leading to PFASST.
ML-SDC: https://linkinghub.elsevier.com/retrieve/pii/S0021999118306442
PFASST website: https://pfasst.lbl.gov/projects
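The core of SDC, iterated sweeps that approach a high-order collocation solution, can be sketched with a plain Picard sweep (full SDC additionally uses a low-order propagator correction to accelerate convergence for stiff problems). Node placement and sweep count below are illustrative.

```python
# Picard/collocation sweep sketch: each sweep applies the collocation
# integral once, converging to the high-order collocation solution.
import numpy as np

def collocation_solve(f, u0, dt, nodes, n_sweeps):
    t = dt * np.asarray(nodes)
    M = len(t)
    S = np.zeros((M, M))           # S[m, j] = integral_0^{t_m} l_j(s) ds
    for j in range(M):
        others = np.delete(t, j)   # Lagrange basis polynomial through nodes
        lj = np.poly1d(np.poly(others)) / np.prod(t[j] - others)
        Lj = lj.integ()
        S[:, j] = Lj(t) - Lj(0)
    u = np.full(M, u0, dtype=float)
    for _ in range(n_sweeps):      # sweeps toward the collocation solution
        u = u0 + S @ f(u)
    return u[-1]                   # solution at the right endpoint t = dt

f = lambda u: -u                   # Dahlquist test problem u' = -u
approx = collocation_solve(f, u0=1.0, dt=0.1, nodes=[0.0, 0.5, 1.0], n_sweeps=10)
print(approx, np.exp(-0.1))
```

With the three Lobatto-type nodes used here, the converged sweep reproduces a fourth-order collocation solution, which illustrates how low-order building blocks yield high-order integrators.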

Contact for all projects mentioned below: Martin Schreiber (martin.schreiber@tum.de)

BA/MA: Assessing REXI in OpenIFS

Extend REXI to the model used by the European Centre for Medium-Range Weather Forecasts (ECMWF) and assess its performance.

Expected outcome: Performance assessment on using REXI in combination with OpenIFS

Contact: Martin Schreiber (martin.schreiber@tum.de)

BA/MA: Implementation and performance assessment of ML-SDC/PFASST in OpenIFS

Extend ML-SDC and PFASST time integration methods into OpenIFS. This can be done step-by-step, depending on the encountered complexity. First implementing the SDC time integration method, next exploiting the multi-level representation and finally, extending it with the parallel-in-time PFASST.

Expected outcome: Studying accuracy, wallclock time and extreme-scalability of a weather simulation with OpenIFS

Potential collaborators: Michael Minion (Lawrence Berkeley National Laboratory, US), Francois Hamon (Total E&P, US)

BA/MA: Vertical time integration

In dynamical cores for weather and climate simulations, the PDE is separated into horizontal and vertical parts. This project would investigate new (exponential) time integration methods in the vertical and study errors in these methods.

Expected outcome: Study the utilization of exponential time integration methods for the vertical time integration for weather simulations.

BA/MA: Semi-Lagrangian methods with Parareal

Semi-Lagrangian methods allow significantly larger stable time steps for climate and weather simulations compared to purely Eulerian methods. However, they also show increased errors for very large time step sizes. Preliminary studies with Burgers' equation showed that these errors can be fixed using parallelization-in-time with the Parareal method. This project investigates this method and assesses its feasibility for climate and weather simulations.

Potential collaborators: Pedro S. Peixoto (University of Sao Paulo)
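For readers unfamiliar with Parareal, the method combines a cheap serial coarse propagator with accurate fine propagators that can run in parallel across time slices. The ODE sketch below uses Euler as the coarse and RK4 as the fine propagator; all step counts are illustrative, and the project itself would apply this to semi-Lagrangian PDE solvers.

```python
# Minimal Parareal sketch for u' = f(u).
import numpy as np

def euler(f, u, t0, t1, n):
    dt = (t1 - t0) / n
    for _ in range(n):
        u = u + dt * f(u)
    return u

def rk4(f, u, t0, t1, n):
    dt = (t1 - t0) / n
    for _ in range(n):
        k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
        u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return u

def parareal(f, u0, T, n_slices, n_iter):
    ts = np.linspace(0, T, n_slices + 1)
    G = lambda u, a, b: euler(f, u, a, b, 1)   # coarse: one Euler step
    F = lambda u, a, b: rk4(f, u, a, b, 20)    # fine: 20 RK4 steps
    U = [u0]                                   # initial coarse-only guess
    for i in range(n_slices):
        U.append(G(U[i], ts[i], ts[i+1]))
    for _ in range(n_iter):
        Fu = [F(U[i], ts[i], ts[i+1]) for i in range(n_slices)]  # parallel part
        Unew = [u0]
        for i in range(n_slices):
            Unew.append(G(Unew[i], ts[i], ts[i+1]) + Fu[i] - G(U[i], ts[i], ts[i+1]))
        U = Unew
    return U[-1]

f = lambda u: -u                               # simple decay problem
approx = parareal(f, 1.0, T=2.0, n_slices=8, n_iter=4)
print(approx, np.exp(-2.0))
```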

BA/MA: Non-interpolating Semi-Lagrangian Schemes

Semi-Lagrangian schemes typically require spatial interpolation. However, this is challenging for higher orders, in mathematical as well as computational respects. This project would investigate a non-interpolating semi-Lagrangian scheme suggested decades ago (link to paper) and assess its relevance on current computer architectures and for higher-order time integration methods.

Potential collaborators: Pedro S. Peixoto (University of Sao Paulo)
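For context, here is a sketch of the standard interpolating semi-Lagrangian step for 1D advection on a periodic domain: trace each grid point back along the characteristic and interpolate there. Note the CFL number well above 1; a non-interpolating variant would instead shift by whole grid cells and treat only the residual displacement. Grid sizes and the initial profile are illustrative.

```python
# Interpolating semi-Lagrangian step for u_t + a u_x = 0, periodic domain.
import numpy as np

def sl_step(u, a, dt, dx):
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - a * dt) % (n * dx)        # departure points, periodic wrap
    j = np.floor(x_dep / dx).astype(int)   # cell containing departure point
    w = x_dep / dx - j                     # linear interpolation weight
    return (1 - w) * u[j] + w * u[(j + 1) % n]

n = 128
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5)**2)            # Gaussian bump
# CFL number a*dt/dx = 4.25: far beyond what explicit Eulerian schemes allow
for _ in range(32):
    u = sl_step(u, a=1.0, dt=4.25 * dx, dx=dx)
print("max after 32 large steps:", u.max())
```

The scheme stays stable at this large time step; the slight peak loss visible in the output is exactly the interpolation damping that higher-order (and non-interpolating) variants try to reduce.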

Contact: Martin Schreiber (martin.schreiber@tum.de)

Time integration (generic)

Contact: Martin Schreiber (martin.schreiber@tum.de)

BA/MA: (Exponential) implicit solvers for non-linear equations

This project will study implicit time integration solvers for selected non-linear PDEs (e.g., Burgers' and the shallow-water equations). A first test case will be to implement a fully implicit time integration method for the non-linear shallow-water equations. Once this test case is working, extensions with exponential integrators (see REXI) and preconditioners should be studied.

Expected outcome: Assessment of the feasibility to use implicit time integration for the non-linear SWE depending on different initial conditions.
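As a scalar warm-up for the fully implicit solver described above, one backward-Euler step for a non-linear ODE already requires a Newton iteration per step. The logistic test problem below is illustrative; the project would face the same structure with large Jacobian systems instead of scalars.

```python
# One backward-Euler step for u' = f(u), solved with Newton's method.
import numpy as np

def backward_euler_step(f, dfdu, u_old, dt, tol=1e-12, max_iter=50):
    """Solve g(u) = u - u_old - dt*f(u) = 0 via Newton iteration."""
    u = u_old                       # initial guess: previous value
    for _ in range(max_iter):
        g = u - u_old - dt * f(u)
        dg = 1.0 - dt * dfdu(u)
        step = g / dg
        u -= step
        if abs(step) < tol:
            break
    return u

# logistic growth u' = u(1-u), stable equilibrium at u = 1
f = lambda u: u * (1.0 - u)
dfdu = lambda u: 1.0 - 2.0 * u
u, dt = 0.1, 0.5
for _ in range(60):
    u = backward_euler_step(f, dfdu, u, dt)
print("u after t=30:", u)  # approaches the equilibrium u = 1
```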

BA/MA: Unstable and inconsistent time integration for higher accuracy

A time integration method is commonly expected to be stable and consistent. In particular, consistency applies only in the limit where the time step size tends towards zero. However, such small time step sizes are never used operationally, e.g., for weather forecasting. This project will study possibilities to design inconsistent time integration methods that target reduced errors compared to consistent time integration methods.

Expected outcome: New time integration method for larger time step sizes.

BA/MA: Time-splitting methods for exponential integrators for the non-linear SWE

A common way to integrate PDEs in time is to use a splitting approach, separating the stiff from the non-stiff parts. However, this leads to missing interactions between the linear and non-linear equations. This project would investigate such splitting errors numerically.

Expected outcome: Gain understanding in time-splitting errors for linear and non-linear equations.
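The kind of numerical splitting-error study meant here can be sketched on non-commuting linear operators, where the orders are known: Lie splitting e^{hA}e^{hB} is first order, Strang splitting e^{hA/2}e^{hB}e^{hA/2} second order, while the unsplit e^{h(A+B)} is exact. The matrices A and B below are arbitrary illustrative stand-ins for stiff/non-stiff parts.

```python
# Measure global splitting error over a fixed interval for two step sizes.
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (ok for small, diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # e.g. oscillatory (wave-like) part
B = np.array([[-0.5, 0.2], [0.0, -0.1]])  # e.g. damping part; AB != BA

def global_error(split, T, n_steps):
    h = T / n_steps
    approx = np.linalg.matrix_power(split(h), n_steps)
    return np.linalg.norm(approx - expm(T * (A + B)))

lie = lambda h: expm(h * A) @ expm(h * B)
strang = lambda h: expm(h/2 * A) @ expm(h * B) @ expm(h/2 * A)

for n in (10, 20):
    print(f"n={n}: Lie {global_error(lie, 1.0, n):.2e}, "
          f"Strang {global_error(strang, 1.0, n):.2e}")
```

Halving the step size roughly halves the Lie error and quarters the Strang error, confirming the expected orders; the project would carry out the analogous study for the linear/non-linear SWE splitting.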

BA/MA: Machine learning for non-linear time integration

Machine learning technologies are becoming increasingly common in scientific computing as well. Whereas the linear parts can be solved efficiently with particular numerical methods, the non-linear parts still pose grand challenges. One way to treat them might be a machine learning approach.

Possible outcome: Feasibility study to represent properties of ordinary and partial differential equations with machine learning.

BA/MA: Exponential integrators with forcing

Exponential integrators are excellent for linear operators. However, additional challenges arise when a forcing term is included (e.g., for tidal waves). This project will investigate different methods of including forcing terms in a way that still allows computing arbitrarily long time steps.

Possible outcome: New method to include forcing into exponential integrators and to compute arbitrarily long time step sizes.
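For the special case of a constant forcing, the variation-of-constants formula already gives an exact one-step answer via the phi_1 function, u(dt) = e^{dtL} u0 + L^{-1}(e^{dtL} - I) f, for any step size. The sketch below illustrates this on a forced oscillator (the eigendecomposition-based exponential is an illustrative shortcut valid for small, diagonalizable, invertible L); the project's challenge is time-dependent forcing.

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (small, diagonalizable A)."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def exp_step_with_forcing(L, f, u0, dt):
    """Exact step of u' = L u + f for constant f and invertible L:
       u(dt) = e^{dtL} u0 + L^{-1}(e^{dtL} - I) f   (the phi_1 term)."""
    E = expm(dt * L)
    return E @ u0 + np.linalg.solve(L, (E - np.eye(len(u0))) @ f)

# forced oscillator: exact solution u(t) = [1 - cos t, sin t]
L = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = np.array([0.0, 1.0])
u = exp_step_with_forcing(L, f, u0=np.zeros(2), dt=10.0)  # one huge step
print(u, [1 - np.cos(10.0), np.sin(10.0)])
```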

BA/MA: Exponential integrators and higher-order Semi-Lagrangian methods

Semi-Lagrangian methods allow overcoming the time step restrictions imposed by the non-linear advection parts. However, such methods are typically limited to order 2. This project would investigate the development of higher-order methods and assess their applicability to weather and climate simulations.

Potential collaborator: Pedro S. Peixoto (University of Sao Paulo)

BA/MA: Time integration with Cauchy Contour integral methods

Various time integration methods can be formulated in terms of Cauchy contour integral methods. Directly formulating such methods as a discrete contour integral can lead to new insights and advantageous properties.

Potential outcome: Improve the way how time integration is done nowadays.

Potential collaborator: Pedro S. Peixoto (University of Sao Paulo)