Gensyn
Compiler Engineer - Distributed ML Training
Responsibilities
Lowering deep learning graphs - from common frameworks (PyTorch, TensorFlow, Keras, etc.) down to an IR for training and inference - with a particular focus on ensuring reproducibility
Writing novel algorithms - for transforming intermediate representations of compute graphs between different operator representations
Owning two of the following compiler areas:
Front-end - handle the handshaking between common deep learning frameworks and Gensyn's internal IR, and write transformation passes in ONNX to prepare the IR for middle-end consumption (see the sketch after this list)
Middle-end - write compiler passes for training-based compute graphs, integrate reproducible Deep Learning kernels into the code generation stage, and debug compilation passes and transformations as you go
Back-end - lower the IR from the middle-end to GPU target machine code
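For illustration only, here is a minimal sketch (not Gensyn's actual pipeline) of the kind of ONNX-level transformation pass described in the front-end item above: it removes Identity nodes and rewires their consumers, a typical canonicalisation step before the graph is handed to the middle-end. The function name and the choice of pass are assumptions made for this example.

```python
# Hedged sketch: a simple ONNX graph transformation pass.
# Assumes the standard `onnx` Python package; the pass itself
# (Identity elimination) is chosen purely for illustration.
import onnx

def eliminate_identity(model: onnx.ModelProto) -> onnx.ModelProto:
    """Remove Identity nodes and rewire their consumers in place."""
    graph = model.graph
    rewrites = {}      # Identity output name -> its input name
    kept_nodes = []
    for node in graph.node:
        if node.op_type == "Identity":
            rewrites[node.output[0]] = node.input[0]
        else:
            kept_nodes.append(node)

    def resolve(name):
        # Follow chains of removed Identity nodes back to the source tensor.
        while name in rewrites:
            name = rewrites[name]
        return name

    # Rewire remaining nodes and graph outputs to bypass the removed nodes.
    # Note: this renames any graph output fed directly by an Identity node.
    for node in kept_nodes:
        for i, name in enumerate(node.input):
            node.input[i] = resolve(name)
    for graph_output in graph.output:
        graph_output.name = resolve(graph_output.name)

    del graph.node[:]
    graph.node.extend(kept_nodes)
    onnx.checker.check_model(model)
    return model
```

Usage would be along the lines of onnx.save(eliminate_identity(onnx.load("model.onnx")), "canonical.onnx").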
Qualifications
Required
Compiler knowledge - base-level understanding of a traditional compiler (LLVM, GCC) and of the graph traversals required to write code for such a compiler
Solid software engineering skills - a practicing software engineer who has significantly contributed to and/or shipped production code
Understanding of parallel programming - specifically as it pertains to GPUs
Ability to operate on high-level IR (Clang/LLVM) up to middle-end optimisation, and/or low-level IR (LLVM targets) with target-specific optimisations - particularly GPU-specific optimisations
Highly self-motivated with excellent verbal and written communication skills
Comfortable working in an applied research environment with extremely high autonomy
Preferred
Architecture understanding - full understanding of computer architectures specialised for training NN graphs (Intel Xeon CPUs, GPUs, TPUs, custom accelerators)
Compilation understanding - strong understanding of compilation with regard to one or more high-performance computer architectures (CPU, GPU, custom accelerator, or a heterogeneous system of all such components)
Proven technical foundation - in CPU and GPU architectures, numeric libraries, and modular software design
Deep Learning understanding - both recent architecture trends and the fundamentals of how training works, plus experience with machine learning frameworks and their internals (e.g. PyTorch, TensorFlow, scikit-learn)
Exposure to Deep Learning compiler frameworks - e.g. TVM, MLIR, TensorComprehensions, Triton, JAX
Kernel experience - writing and optimizing highly performant GPU kernels (a brief illustrative sketch follows this list)
Rust experience - systems level programming experience in Rust
Open-source contributions to existing compilers/frameworks with a strong preference for ML compilers/frameworks.
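As a hedged illustration of the kernel and compiler-framework points above, the sketch below uses Triton (one of the frameworks listed) to fuse an elementwise add with a ReLU. The kernel name, block size, and the fusion chosen are assumptions made for the example, not anything specified by Gensyn.

```python
# Hedged sketch: a fused add + ReLU GPU kernel written in Triton.
import torch
import triton
import triton.language as tl

@triton.jit
def add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)                 # one program per block of elements
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements                 # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    out = tl.maximum(x + y, 0.0)                # fused add + ReLU
    tl.store(out_ptr + offsets, out, mask=mask)

def add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x and y are expected to be contiguous CUDA tensors of the same shape.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_relu_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```

Fusing the two elementwise ops into one kernel avoids a round trip to global memory for the intermediate sum, which is the kind of optimisation the kernel work above refers to.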
Benefits
Competitive salary + share of equity and token pool
Fully remote work - we hire between the West Coast (PT) and Central Europe (CET) time zones
Relocation Assistance - available for those who would like to relocate after being hired (anywhere from PST through CET time zones)
4 all-expenses-paid company retreats around the world per year
Whatever equipment you need
Paid sick leave
Private health, vision, and dental insurance - including spouse/dependents [🇺🇸 only]
Company
Gensyn
Gensyn is a machine learning compute protocol for the world's deep learning models.