Job Title: AI Compiler and Performance Engineer
Company: ElastixAI, Inc.
Location: Seattle, WA (Hybrid, 3 days/week in office)
Employment Type: Full time
About ElastixAI
ElastixAI is an early-stage startup on a mission to reinvent AI inference infrastructure from the ground up. We’re building a next-generation inference platform that delivers unprecedented efficiency by tightly integrating machine learning, the software stack, and custom hardware. Our philosophy is simple: the best performance comes from holistic co-design, where every layer, from model architecture to kernels to silicon, works in harmony. If you’re excited about pushing AI performance to its physical limits and shaping the future of large-scale inference, we’d love to meet you.
Role Summary
We are looking for a deeply technical AI Compiler & Performance Engineer who thrives at the intersection of ML, compilers, and hardware. In this role, you will design how LLM operations decompose into highly efficient proprietary kernel primitives, optimize execution pipelines, and co-develop abstractions with our hardware and ML teams. This is not a “typical” compiler role; it’s a chance to rethink the entire AI compute stack. At ElastixAI, if improving inference efficiency requires inventing new quantization schemes, rethinking graph-level optimizations, or modifying the hardware ISA, we do it. You’ll have end-to-end ownership to explore radically new ideas and make them real.
Key Responsibilities
- Break down LLM and transformer workloads into fine-grained primitives tailored to our proprietary compute hardware.
- Design and implement IR transformations, graph optimizations, kernel lowering, and code generation for novel hardware architectures.
- Collaborate with ML researchers to co-design algorithmic optimizations that yield real end-to-end performance gains.
- Work closely with hardware architects to refine microarchitectural features, instruction sets, memory hierarchies, and execution models.
- Build performance models, profiling tools, and benchmarking frameworks to identify bottlenecks and guide design decisions.
- Prototype and validate improvements across the entire stack — from PyTorch/XLA-level passes to custom kernel implementations.
- Contribute to shaping the overall system architecture of a first-of-its-kind inference engine.
Required Qualifications
- BS/MS/PhD in Computer Science, Software Engineering, or a related field.
- Deep experience building compilers, optimizing kernels, or working with ML frameworks at a systems level.
- Strong proficiency in one or more programming languages such as Python and C++.
- Strong understanding of one or more of the following:
  - LLM architectures and transformer internals
  - MLIR, LLVM, XLA, TVM, Triton, or similar compiler infrastructures
  - GPU/TPU/FPGA/ASIC compute models, memory hierarchies, and parallel execution
  - Quantization, sparsity, or algorithmic optimization for deep learning
- Deep expertise in ML frameworks (e.g., PyTorch, TensorFlow, JAX) and an understanding of ML model deployment challenges.
- Solid understanding of software engineering best practices, including data structures, algorithms, and testing.
- A habit of thinking in terms of latency, cycles, memory bandwidth, and arithmetic intensity, not just algorithms.
- Excellent problem-solving abilities and a knack for tackling complex technical challenges.
- Enthusiasm for collaborating across ML, hardware, and software boundaries to invent something fundamentally new.
- Strong communication skills and a proven ability to collaborate effectively in a cross-functional team environment.
- Ability to thrive in a fast-paced, dynamic startup environment.
Preferred / Bonus
- PhD in Computer Science, Software Engineering, or a related field.
- Experience with custom hardware accelerators for ML inference.
- Contributions to open-source compiler or ML systems projects.
- Prior startup experience or background building first-generation systems.
What We Offer
- A chance to be a foundational engineer in an innovative AI startup
- A dynamic and collaborative work environment and the chance to have a significant impact on new technology
- The opportunity to work on challenging problems at the intersection of ML, software, and systems
- Competitive compensation and startup equity package
- Comprehensive medical, dental, and vision coverage (100% paid by employer)
- Life insurance and AD&D
- Flexible Time Off (FTO)
- 12 paid holidays
- Paid parental leave
- Gym or fitness benefit
- Commuter benefit
- Weekly catered lunches in the office
- Investment in employee learning & development
Salary
$130,000 - $250,000 per year