Location: Mountain View, CA, USA
Microsoft Corporation
Senior Software Engineer
Mountain View, California

We own the inference performance of OpenAI and other state-of-the-art Large Language Models (LLMs) and work directly with OpenAI on the models hosted on the Azure OpenAI service, which serves some of the largest workloads on the planet, with trillions of inferences per day across major Microsoft products, including Office, Windows, Bing, SQL Server, and Dynamics.

As a Senior Software Engineer on the team, you will have the opportunity to work on multiple levels of the AI software stack, including the fundamental abstractions, programming models, compilers, runtimes, libraries, and APIs that enable large-scale training and inference of models. You will benchmark OpenAI and other LLMs on GPUs and Microsoft hardware, debug and optimize their performance, monitor performance, and enable these models to be deployed in the shortest amount of time and on the least amount of hardware possible, helping achieve Microsoft Azure's capex goals.

This is a hands-on technical role requiring software design and development skills. We're looking for someone with a demonstrated history of solving technical problems who is motivated to tackle the hardest problems in building a full end-to-end AI stack. An entrepreneurial approach and the ability to take initiative and move fast are essential.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

For this position, location is flexible and remote work is possible. Relocation assistance is unavailable for this role.

Responsibilities

As a Senior Software Engineer on the team, common tasks of the job would include, but not be limited to:

- Identify and drive improvements to end-to-end inference performance of OpenAI and other state-of-the-art LLMs.
- Measure and benchmark performance on Nvidia/AMD GPUs and first-party Microsoft silicon (a small illustrative benchmarking sketch follows this list).
- Optimize and monitor the performance of LLMs and build software tooling that surfaces performance opportunities from the model level down to the systems and silicon level, helping reduce the footprint of the computing fleet and achieve Azure AI capex goals.
- Enable fast time to market for LLMs/models and their deployments at scale by building software tools that afford velocity in porting models onto new Nvidia and AMD GPUs and Maia silicon.
- Design, implement, and test functions or components for our AI/DNN/LLM frameworks and tools.
- Speed up and reduce the complexity of key components/pipelines to improve the performance and/or efficiency of our systems.
- Communicate and collaborate with our partners, both internal and external.
- Embody our culture and values.
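For a concrete flavor of the benchmarking work above, here is a minimal sketch of measuring LLM inference latency and throughput on a GPU with PyTorch. It is illustrative only and assumes a Hugging Face-style causal LM; the model name, prompt, token counts, and run counts are placeholders, not anything specified by the role.

```python
# Minimal, illustrative latency/throughput harness for LLM inference on a GPU.
# All specifics (model, prompt, token counts) are placeholder assumptions.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"      # placeholder; any causal LM checkpoint works
PROMPT = "Benchmarking large language model inference"
NEW_TOKENS = 64
WARMUP_RUNS, TIMED_RUNS = 3, 10

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = (AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=dtype)
         .to(device)
         .eval())
inputs = tokenizer(PROMPT, return_tensors="pt").to(device)

def generate_once():
    # Greedy decoding keeps runs comparable across repetitions.
    with torch.no_grad():
        return model.generate(**inputs, max_new_tokens=NEW_TOKENS, do_sample=False)

# Warm up so one-time CUDA kernel and cache setup does not skew the timings.
for _ in range(WARMUP_RUNS):
    generate_once()

if device == "cuda":
    torch.cuda.synchronize()   # make sure prior GPU work has finished
start = time.perf_counter()
for _ in range(TIMED_RUNS):
    generate_once()
if device == "cuda":
    torch.cuda.synchronize()   # wait for all GPU work before stopping the clock
elapsed = time.perf_counter() - start

per_request = elapsed / TIMED_RUNS
print(f"mean latency: {per_request * 1000:.1f} ms, "
      f"decode throughput: {NEW_TOKENS / per_request:.1f} tokens/s")
```

A real harness would typically go further, sweeping batch sizes and sequence lengths and capturing profiler traces on the target hardware, but the warm-up, synchronize, and repeated-timed-runs pattern above is the common starting point.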
Qualifications

- Bachelor's Degree in Computer Science or a related technical field AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, or Python.
- 2+ years of practical experience working on high-performance applications, including performance debugging and optimization on CPUs/GPUs.

Other Requirements

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

- Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications

- Technical background and a solid foundation in software engineering principles, computer architecture, GPU architecture, and hardware neural-network acceleration.
- Experience in end-to-end performance analysis and optimization of state-of-the-art LLMs and HPC applications, including proficiency with GPU profiling tools.
- Experience in DNN/LLM inference with one or more DL frameworks such as PyTorch, TensorFlow, or ONNX Runtime, and familiarity with CUDA, ROCm, and Triton.
- Cross-team collaboration skills and the desire to work in a team of researchers and developers.

Software Engineering IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $153,600 - $250,200 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

Microsoft will accept applications and process offers for these roles on an ongoing basis.

#aiplatform #opensource #SWE24 #SHPE24MSFT

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.