About us:
At Eon, we are at the forefront of large-scale neuroscientific connectomic data collection. Our mission is to enable the safe and scalable development of brain emulation technology to empower humanity over the next decade, beginning with the creation of a fully emulated digital twin of a mouse within the next two years.
Role:
We are seeking an experienced Data Infrastructure Engineer to spearhead the design and implementation of our data processing pipeline. This is a hands-on role that involves both hardware procurement and deployment. The candidate will play a pivotal role in setting up and optimizing our lab’s computational hardware to ensure seamless data flow for large-scale image processing.
Responsibilities:
Pipeline Design and Implementation: Architect and implement a high-throughput data pipeline capable of processing and transmitting multi-gigabit-per-second image data streams.
Hardware Procurement and Setup: Source, acquire, and deploy the necessary hardware, including frame grabbers, switches, servers, CPUs, GPUs, and network infrastructure.
Performance Testing and Optimization: Run performance tests on the pipeline and optimize the hardware setup for minimal latency and maximum data throughput.
Collaborate Across Teams: Work closely with data acquisition and image processing teams to ensure seamless integration of data from experimental setups into machine learning models.
Troubleshooting and Maintenance: Monitor system performance, troubleshoot issues, and ensure that the hardware and software work optimally for data acquisition and processing.
Future-Proofing: Design a flexible system that will scale with the increased data loads of our future projects (toward Tbps and even Pbps data acquisition within the next 5 years).
Skills and Qualifications:
5+ years of experience in data pipeline engineering and hardware setup, with a strong emphasis on high-performance computing environments.
Expertise in high-throughput data systems capable of handling multi-Gbps data streams.
Demonstrated experience procuring and configuring servers, ideally with GPUs.
Network Design: Strong understanding of network architecture, ideally including experience with PCIe, CoaXPress, Ethernet, and fiber optic connectivity.
Programming: Proficiency in Python, Bash, and/or other scripting languages for automation and data handling.
Startup mindset: A proactive, solution-oriented individual with the ability to work independently on ambitious projects in a dynamic, fast-paced environment.
Nice-to-haves:
Experience with Nvidia Jetson Orin or similar devices for embedded GPU computing.
Familiarity with AWS Direct Connect or similar high-bandwidth cloud transfer technologies.
Expertise in working with multi-channel data aggregation using switches, network bonding, and storage solutions for large data centers.
Knowledge of GPU programming.
Knowledge of image processing.
Key Projects:
These are examples of projects you would work on after joining us:
Build a data processing pipeline that can handle 400 Gbps data from hundreds of scientific cameras, process the data in real-time using Nvidia GPUs, and transfer the results to the cloud.
Support the GPU implementation of deconvolution and compression algorithms.
Test and optimize a GPU-heavy server infrastructure, integrating PCIe, Ethernet, and fiber optic networking for ultra-fast data transfer.
Salary and Benefits:
Competitive salary and equity.
Opportunity to work in a cutting-edge field that impacts neuroscience and AI safety.
A chance to take ownership of a critical, high-impact infrastructure project.