
Hardware Design Engineer, AI Inference Engine

ElastixAI

Seattle, WA, USA
USD 120k-200k / year + Equity
Posted on Dec 11, 2025

Description

Location: Seattle, WA (Hybrid - 3 days/week in office)

About ElastixAI:

ElastixAI is an early-stage startup poised to revolutionize AI inference infrastructure. We are developing a cutting-edge AI inference solution that dramatically improves efficiency through a holistic co-design approach, spanning from machine learning optimizations and a highly specialized software stack to the inference engine and underlying cloud hardware. We believe in providing a customizable and optimal inference experience, much like tailoring a high-performance computing system to specific needs.

Role Summary:

We are seeking a visionary and hands-on Hardware Design Engineer to contribute to the design, definition, and implementation of our core AI inference engine. This is a deeply technical role in which you will be instrumental in translating AI workloads into a highly efficient hardware design. You will be at the center of our co-design philosophy, working to ensure our inference engine is harmonized with our ML strategies, software stack, and cloud hardware targets to deliver unparalleled performance and efficiency for next-generation AI models.

Key Responsibilities:

  • Contribute to the architectural definition, design, and implementation of a novel AI inference engine optimized for our specific ML workloads.
  • Collaborate closely with ML engineers to understand and influence ML directions.
  • Work hand-in-hand with software engineers to define a seamless hardware-software interface, ensuring the inference engine is highly programmable, efficient, and easy to integrate into our broader software stack and compiler.
  • Partner with cloud engineers to ensure the inference engine architecture aligns with target cloud hardware capabilities, deployment strategies, and performance/cost objectives.
  • Model and analyze the performance, power, and area (PPA) trade-offs of different architectural choices.
  • Stay at the forefront of AI accelerator research, identifying emerging techniques and technologies relevant to our co-design approach.
  • Contribute to the RTL design, simulation, and verification efforts for the inference engine components.
  • Drive the hardware roadmap for the inference engine, anticipating future AI model trends and optimization opportunities.
  • Foster a culture of innovation and technical excellence within a highly interdisciplinary engineering team.

Required Qualifications:

  • BS, MS, or PhD in Computer Engineering, Electrical Engineering, or a related field.
  • Proven experience (5+ years) in hardware design, with a strong focus on designing/implementing hardware for AI/ML acceleration.
  • Deep understanding of modern AI/ML models, particularly LLMs, and their computational characteristics.
  • Experience with hardware implementation of ML optimization techniques (e.g., sparsity, quantization, pruning).
  • Proficiency in Verilog or SystemVerilog for RTL design and simulation.
  • Strong understanding of memory system architecture, on-chip interconnects, parallel processing, and distributed computing.
  • Excellent problem-solving skills and the ability to analyze complex systems.
  • Exceptional communication and interpersonal skills, with a demonstrated ability to work effectively in a highly interdisciplinary environment, collaborating with ML, software, and cloud/systems engineers.
  • Ability to thrive in a fast-paced, dynamic startup environment with a strong bias for action and execution.

Preferred/Bonus Qualifications:

  • Knowledge of compiler technologies for AI models (e.g., MLIR, TVM).
  • Familiarity with performance modeling and analysis tools.
  • Experience with system-level integration and debugging.
  • Contributions to relevant research publications or open-source projects.
  • Understanding of cloud computing environments and deploying hardware accelerators in the cloud.
  • Experience with high-speed inter-chip networking.

What We Offer:

  • A chance to be a foundational engineer in an innovative AI startup.
  • A dynamic and collaborative work environment, and the chance to have a significant impact on new technology.
  • The opportunity to work on challenging problems at the intersection of ML, software, and systems.
  • Competitive compensation and startup equity package.
  • Comprehensive medical, dental, and vision coverage (100% employer-paid).
  • Flexible Time Off (FTO).
  • Paid parental leave.
  • Company-sponsored 401(k) plan.
  • Gym or fitness benefit.
  • Commuter benefit.
  • Investment in employee learning and development.

Salary

$120,000 - $200,000 per year