Location: San Francisco, CA, USA
About the Company
This company develops generative video models that let you create animated images with ease, incorporating your own audio or using text-to-speech models. Having raised over $10M and captured attention with its first two foundation model releases, the company is expanding its team in San Francisco.
About the Role
Core Responsibilities:
Generative AI Model Research and Development: Lead cutting-edge research on generative AI models, particularly diffusion models, for image, video, text, and audio synthesis. The role emphasizes building models that deliver high fidelity, realism, and controllability in multimodal outputs such as dynamic avatars and visual effects.
Advanced Model Conditioning and Optimization: Develop and refine advanced conditioning techniques, leveraging methods such as prompt-tuning, latent-space manipulation, and attention-based control mechanisms to improve model precision, controllability, and adaptability for user-defined outputs.
High-Performance Distributed Training: Architect and manage large-scale training pipelines across distributed systems, using frameworks such as Ray, PyTorch Distributed, and Horovod. Handle multi-node, multi-GPU environments and optimize training for scalability on petabyte-scale datasets, ensuring efficient use of cloud and on-premises compute resources.
If you are working in generative AI and want to join an early-stage company with significant equity upside, send your resume today.
This is an on-site position in San Francisco.