We are seeking passionate Machine Learning Engineers to join our Inference team, focusing on the creative applications of generative AI models.
Requirements
- 7+ years of experience productionizing machine learning systems, including building inference pipelines
- Expert-level knowledge of writing and running Python services at scale
- 5+ years of experience with the Python scientific stack, PyTorch, and at least one high-performance inference framework (e.g., Triton or TensorRT)
- Deep understanding of diffusion model architectures
- Experience profiling and optimizing deep neural networks on NVIDIA GPUs using tools such as NVIDIA Nsight
- Experience with Python-based image manipulation/encoding/decoding frameworks such as OpenCV
- Experience deploying to container orchestration systems such as Kubernetes and to cloud providers such as AWS, GCP, or Azure
- Experience with Docker
- Ability to rapidly prototype solutions and iterate on them under tight product deadlines
- Strong communication, collaboration, and documentation skills
- Experience with the open-source ML ecosystem (HuggingFace, W&B, etc.)