Riya Bisht

01-Google Summer Of Code @ CERN

Hey, this summer I am working with CERN to enable CUDA support in the Numba-generated IR for Cppyy. I will be sharing my daily/weekly devlogs here. I am working on the Cppyy project, the successor to PyROOT at CERN. Cppyy is an automatic Python-C++ binding generator, used to call C++ code from the Python ecosystem and vice versa.

Why do we need support for both languages? Why is interoperability between them important?

To answer this, suppose we want to write a scientific computing application that relies on Python libraries. Python is a dynamic, interpreted language, so it is slow compared to compiled languages. To run optimized, performant code on GPUs, we need C++. But that is not the whole story: whether a language is compiled or interpreted is not the only factor in a heterogeneous computing environment. What also matters is the performance lost to cross-language overhead. Interoperability between Python and C++ is complex because of their differing data types and language constructs. In short, Cppyy is a front end that provides bindings through which we can easily write performant C++ inside a Python codebase using wrappers. It uses Cling, a Clang-based C++ interpreter, and clingwrapper as the cppyy-backend. In upcoming blogs, I'll talk about the design choices and architecture of the project, including its components.

Currently, I am working on adding CUDA support through Numba, a JIT compiler for Python. Numba JIT-compiles Python code to improve performance and apply optimizations.