WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance.
THE ROLE:
AMD is looking for an AI/ML software architect who is passionate about improving the performance of key machine learning applications and benchmarks on NPUs. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.
THE PERSON:
We are looking for a dynamic, energetic software architect to join our growing team in the AI group. As an ML software stack architect, you will be responsible for architecting the runtime stack and defining operator mapping, dataflow, and scheduling on AMD’s XDNA Neural Processing Units that power cutting-edge generative models such as Stable Diffusion, SDXL-Turbo, and Llama 2. Your work will directly impact the efficiency, scalability, and reliability of our ML applications. If you thrive in a fast-paced environment and love working on cutting-edge machine learning inference, this role is for you.
You will communicate effectively and work closely with teams across AMD.
KEY RESPONSIBILITIES:
- Define the software stack that interfaces with open-source runtime environments such as ONNX Runtime and PyTorch, as well as the NPU compiler.
- Define runtime operator scheduling, memory management, and operator dataflow based on tensor residency.
- Propose algorithmic optimizations for operators that are mapped to the CPU using AVX-512 (a sketch follows this list).
- Interface with ONNX Runtime / PyTorch runtime engines to deploy models on CPUs (a second sketch follows this list).
- Develop efficient model loading mechanisms to minimize startup latency.
- Collaborate with kernel developers to integrate ML operators seamlessly into high-level ML frameworks.
- Design and implement C++ runtime wrappers, APIs, and frameworks for ML model execution.
- Architect optimized alternative CPU implementations for ML operators that are not supported on NPUs.
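To give a flavor of the operator-optimization work above, here is a minimal sketch of an AVX-512 vectorized ReLU over a float buffer, the kind of CPU implementation an NPU-unsupported operator might fall back to. The function name and loop structure are illustrative assumptions, not part of the role definition; build with -mavx512f on a supporting CPU.

```cpp
#include <immintrin.h>
#include <cstddef>

// Illustrative AVX-512 ReLU: clamps negative values to zero in place,
// processing 16 floats per iteration with a scalar tail loop.
void relu_avx512(float* data, std::size_t n) {
    const __m512 zero = _mm512_setzero_ps();
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 v = _mm512_loadu_ps(data + i);            // unaligned 16-float load
        _mm512_storeu_ps(data + i, _mm512_max_ps(v, zero));
    }
    for (; i < n; ++i) {                                  // remaining elements
        if (data[i] < 0.0f) data[i] = 0.0f;
    }
}
```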
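And for the CPU deployment bullet, the following is a minimal sketch of running an ONNX model on CPU through ONNX Runtime's C++ API (version 1.13 or later). The model path "model.onnx", the thread count, and the 1x3x224x224 input shape are hypothetical placeholders chosen for illustration.

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

int main() {
    // Set up the environment and a CPU-only inference session.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cpu-deploy-sketch");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(4);                   // assumption: 4 intra-op threads
    Ort::Session session(env, "model.onnx", opts);  // assumption: hypothetical model file

    // Query the first input/output names from the model itself.
    Ort::AllocatorWithDefaultOptions alloc;
    Ort::AllocatedStringPtr in_name  = session.GetInputNameAllocated(0, alloc);
    Ort::AllocatedStringPtr out_name = session.GetOutputNameAllocated(0, alloc);
    const char* in_names[]  = {in_name.get()};
    const char* out_names[] = {out_name.get()};

    // Build a zero-filled float input tensor; the shape is an illustrative assumption.
    std::vector<int64_t> shape{1, 3, 224, 224};
    std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, data.data(), data.size(), shape.data(), shape.size());

    // Run inference and read back the first output element.
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);
    std::cout << "first output value: "
              << outputs[0].GetTensorMutableData<float>()[0] << "\n";
    return 0;
}
```

In practice, keeping session creation separate from the per-request Run call, as shown, is what enables the low startup latency and efficient model loading this role targets.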
PREFERRED EXPERIENCE:
- Detailed understanding of the ONNX and PyTorch runtime stacks and related open-source frameworks.
- Strong experience scheduling operators across NPU, GPU, and CPU.
- Experience with graph parsing and operator fusion.
- Strong experience with the AVX and AVX-512 instruction sets and CPU cache behavior.
- Strong experience managing system memory.
- Detailed understanding of how the compiler interfaces with the runtime stack, including the JIT compilation flow.
- Strong programming skills in C++ and Python.
- Experience with ML frameworks (e.g., TensorFlow, PyTorch) is required.
- Experience with ML models such as CNNs, LSTMs, LLMs, and diffusion models is a must.
- Experience with the ONNX and PyTorch runtime stacks is a must.
- Knowledge of parallel computing is a bonus.
- Familiarity with containerization and environment management tools (e.g., Docker, Anaconda) is good to have.
- Motivating leader with good interpersonal skills.
ACADEMIC CREDENTIALS:
- PhD in Computer Science, Computer Engineering, or Electrical Engineering.
LOCATION:
San Jose, CA