Join FlexAI:
FlexAI is revolutionizing AI computing by reengineering infrastructure at the system level. We are seeking an experienced and visionary AI IP Microarchitect to lead the microarchitecture and design of our AI accelerator IP.
Our innovative compute architecture, coupled with sophisticated software intelligence and orchestration, allows developers to leverage a diverse array of compute, resulting in more efficient, more reliable computing at a fraction of the cost. This architecture ensures a well-balanced distribution of memory bandwidth, capacity, and compute density, forming the backbone of our datacenter-in-a-box concept. Enabled by our universal AI compute cloud service, our hardware solutions set new benchmarks in performance and efficiency, seamlessly integrating with our AI cloud offerings and other cloud service providers worldwide.
Position Overview:
As the AI IP Microarchitect, you will design and develop cutting-edge IP blocks tailored for AI workloads. You will not only be responsible for defining the microarchitecture for AI accelerator-specific processing units, but you will also collaborate with AI algorithm teams, RTL designers, and verification engineers to ensure the microarchitecture meets performance, power, and area goals for various AI applications.
What you’ll do:
Microarchitecture Design:
Define the microarchitecture of IP blocks, including the specialized AI accelerator.
Optimize the microarchitecture for deep learning, machine learning, and AI workloads, focusing on performance, power efficiency, and area (PPA) trade-offs.
Pipeline Design and Optimization:
Design efficient pipelines to maximize throughput for AI computations, while minimizing latency and power consumption.
Implement techniques such as parallelism, pipelining, and data reuse to optimize performance for AI operations like matrix multiplications, convolutions, and activation functions.
Memory and Dataflow Optimization:
Architect efficient memory hierarchies within the IP block, including caches, SRAM, and register files, to support AI data movement.
Design dataflow mechanisms that maximize data locality and minimize bandwidth requirements between processing elements and memory.
Performance Modeling and Validation:
Develop performance models to analyze the behavior of the microarchitecture under different AI workloads.
Collaborate with RTL designers and verification engineers to validate the microarchitecture against functional and performance specifications.
Hardware-Software Co-Design:
Work closely with AI software teams to co-optimize the hardware and software interface, ensuring efficient execution of AI algorithms on the designed IP.
Support the integration of the IP into larger SoC designs, focusing on seamless hardware-software interaction.
Innovation and Roadmap Development:
Drive innovation in AI IP microarchitecture, staying updated with the latest developments in AI algorithms and hardware design methodologies.
Contribute to the development of the company’s AI IP roadmap, including the evaluation of emerging technologies and design strategies.
What you’ll need to be successful:
Bachelor’s, Master’s, or Ph.D. in Electrical Engineering, Computer Engineering, or a related field.
5+ years of experience in microarchitecture design, with a focus on AI accelerators, DSPs, or other specialized processing units.
Proven track record in designing and delivering IP cores for high-performance computing or AI applications.
Experience with RTL design, synthesis, and timing closure for custom IP blocks.
Strong knowledge of AI hardware architectures, including neural processing units (NPUs), tensor cores, and other AI-specific accelerators.
Proficiency in microarchitecture design principles, pipeline optimization, and memory hierarchy design.
Experience with performance modeling tools and hardware description languages (e.g., Verilog, VHDL).
Familiarity with high-level synthesis (HLS) tools and techniques.
Strong analytical and problem-solving skills with a focus on system-level optimization.
Ability to work effectively in cross-functional teams, including hardware, software, and verification groups.
Excellent communication skills for both technical and non-technical audiences.
Preferred Skills:
Experience with AI/ML software frameworks (e.g., TensorFlow, PyTorch) and their mapping to hardware.
Familiarity with AI workloads such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or transformers.
Knowledge of advanced packaging techniques, such as chiplet designs or 3D stacking.
What we offer:
- A competitive salary and benefits package, tailored to recognize your dedication and contributions.
- The opportunity to collaborate with leading experts in AI and cloud computing, learning from the best and the brightest, fostering continuous growth.
- An environment that values innovation, collaboration, and mutual respect.
- Support for personal and professional development, empowering you with the tools and resources to elevate your skills and leave a lasting impact.
- A pivotal role in the AI revolution, shaping the technologies that power the innovations of tomorrow.
About FlexAI:
Founded by Brijesh Tripathi and Dali Kilani, who bring experience from Nvidia, Apple, Tesla, Intel, Lifen, and Zoox, FlexAI is not just building a product – we’re shaping the future of AI.
Apply NOW!
You’ve seen what this role entails. Now we want to hear from you! Does this opportunity align with your aspirations? If you’re even slightly curious, we encourage you to apply – it could be the start of something extraordinary!
At FlexAI, we believe diverse teams are the most innovative teams. We’re committed to creating an inclusive environment where everyone feels valued, and we proudly offer equal opportunities regardless of gender, sexual orientation, origin, disabilities, veteran status, or any other facets of your identity that make you uniquely you.