
Job Details

Member of Technical Staff - Distributed Training Engineer

2026-01-26 | Liquid AI | All cities, AK
Description:

About Liquid AI

Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity

Our Training Infrastructure team is building the distributed systems that power our next-generation Liquid Foundation Models. As we scale, we need to design, implement, and optimize the infrastructure that enables large-scale training.

This is a high-ownership training systems role focused on runtime/performance/reliability (not a general platform/SRE role). You'll work on a small team with fast feedback loops, building critical systems from the ground up rather than inheriting mature infrastructure.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For

We need someone who:

  • Loves distributed systems complexity: Our team builds systems that keep long training runs stable, debugs training failures across GPU clusters, and improves performance.
  • Wants to build: We need builders who find satisfaction in building robust, fast, reliable infrastructure.
  • Thrives in ambiguity: Our systems support model architectures that are still evolving. We make decisions with incomplete information and iterate quickly.
  • Aligns and delivers: Our best engineers commit to team priorities while pushing back with data when they see problems.
The Work
  • Design and build core systems that make large training runs fast and reliable
  • Build scalable distributed training infrastructure for GPU clusters
  • Implement and tune parallelism/sharding strategies for evolving architectures
  • Optimize distributed efficiency (topology-aware collectives, comm/compute overlap, straggler mitigation)
  • Build data loading systems that eliminate I/O bottlenecks for multimodal datasets
  • Develop checkpointing mechanisms balancing memory constraints with recovery needs
  • Create monitoring, profiling, and debugging tools for training stability and performance
Desired Experience

Must-have:
  • Hands-on experience building distributed training infrastructure (PyTorch Distributed DDP/FSDP, DeepSpeed ZeRO, Megatron-LM TP/PP)
  • Experience diagnosing performance bottlenecks and failure modes (profiling, NCCL/collectives issues, hangs, OOMs, stragglers)
  • Understanding of hardware accelerators and networking topologies
  • Experience optimizing data pipelines for ML workloads
Nice-to-have:
  • MoE (Mixture of Experts) training experience
  • Large-scale distributed training (100+ GPUs)
  • Open-source contributions to training infrastructure projects
What Success Looks Like (Year One)
  • Training throughput has increased
  • Overall training efficiency/cost has improved
  • Training stability has improved (fewer failures, faster recovery)
  • Data loading bottlenecks are eliminated for multimodal workloads
What We Offer
  • Greenfield challenges: Build systems from scratch for novel architectures. High ownership from day one.
  • Compensation: Competitive base salary with equity in a unicorn-stage company
  • Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
  • Financial: 401(k) matching up to 4% of base pay
  • Time Off: Unlimited PTO plus company-wide Refill Days throughout the year


Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
