5 years' experience in commercial software development (IT) and 10 years in HPC research and development
ARCHER is the UK's national supercomputing facility. As part of the Centre of Excellence, the focus is on supporting scientists across the Climate & Weather, Materials Chemistry, Computational Fluid Dynamics (CFD), and Molecular Dynamics simulation domains to better utilise the machine. This involves not only optimisation, benchmarking, and porting, but also studying how applications use HPC resources and identifying risks and bottlenecks for both the HPC service and the applications.
Led research projects in I/O optimisation, I/O middleware, and workflow optimisation, and the design and development of Lustre Analytics for System Software (LASSi). LASSi models I/O resource utilisation in an HPC system to analyse reported incidents of application slowdown. It is currently being extended to model CPU and memory utilisation, network, and energy, and to enable refactoring. LASSi is also being implemented for the UK Met Office.
Led HPE's efforts in the SODALITE project. Designed and developed the Application Optimiser component, MODAK, which automates the optimisation of application deployments on heterogeneous targets such as HPC and Cloud. Based on a performance model and user-selected application optimisations, MODAK maps optimal parameters to the application deployment target and builds an optimised container. For AI training, MODAK explores the use of graph compilers, target-specific libraries, and optimisation of the ETL pipeline.
The project also explores the deployment of traditional HPC applications on Cloud-like targets.
Research and development of a scalable, distributed data-analytics engine/framework (in development).
As part of the LFRic (replacement for the Unified Model) team, involved in the development of PSyclone, a code generation and optimisation system for finite-element codes: studying and profiling the application kernels and optimising for different hardware such as x86, KNL, and GPUs. This project was funded by Intel as part of an IPCC.
Developing and optimising lattice simulations used to determine the Standard Model parameter Vus.
Porting a lattice QCD application to modern HPC machines, including BlueGene/Q, BlueGene/P, and GPUs.
Developed a machine learning (statistical data analysis) application to model simulation data and extract the Standard Model parameter Vus.
Developed and studied different iterative solvers and preconditioners for lattice simulations.
Ported the "Automating Lattice Perturbation Theory" application to GPGPUs, achieving significant speedup compared to Intel Xeon CPUs. Studied new programming interfaces such as PGI alongside traditional CUDA C on both Tesla and Fermi (NVIDIA) architectures.
For the EBiz project, main responsibilities included requirements analysis, architectural design, and the design and development of applications supporting the Government of India initiative to improve eBusiness. Managed an integration team of five.
Working for the client JP Morgan Chase, supporting document management and test automation.
Successfully managed a team of 25 members.
Led the porting and optimisation of the Met Office Unified Model (MetUM) to different supercomputing platforms, facilitating effective collaboration with scientists around the world. Benchmarking, profiling, analysis, and optimisation using Cray tools such as CrayPat and Reveal; achieved improved scaling and performance improvements of up to 25%.
Led the technical support of the FEBBRAIO project, which targets weather and climate modelling at petascale. The project was delivered in half the planned time; as part of it, more than a petabyte of data was analysed and managed.