Hadrian — Manufacturing the Future
Hadrian is building autonomous factories that help aerospace and defense companies make rockets, jets, and satellites 10x faster and 2x cheaper.
We are a lean but mighty (and growing!) team of people who are passionate about building critical infrastructure to support the nation and the advancement of humanity.
To support our ambitious vision, we have raised more than $200M from Lux Capital, A16Z, Founders Fund, Construct Capital, Caffeinated Capital, and more.
About This Role
As a foundational software engineer on our data platform engineering team, you will lead the charge on a variety of projects architecting our cloud and on-premises infrastructure to aggregate, store, and make sense of manufacturing process data. You will play a crucial role in shaping our infrastructure strategy, including planning for the system scalability needed to support expansion to hundreds of factories.
Examples of possible work include building Kubernetes clusters for ERP and machine data processing, building software agents that monitor system availability and push logs to Datadog, spinning up Postgres databases that store terabytes of data, and standing up a robust data warehousing solution that handles all of our data visibility needs.
You will be challenged to think creatively and solve complex integration problems. You will work cross-functionally with production experts, software engineers, and machining specialists to develop novel solutions working toward fully automated factories.
In this role you will
- Scope, architect, implement, and deploy critical applications that will drive revenue and make a positive impact in the world.
- Build and manage a robust cluster of databases and write software to coordinate and deploy hundreds of data services.
- Conceptualize and own the architecture for multiple large-scale infrastructure projects.
- Create and contribute to frameworks that span on-premises and cloud infrastructure and improve the efficacy, reliability, and traceability of our data platform, while working with data engineers to triage and resolve production issues.
- Solve our most challenging deployment and orchestration problems, utilizing optimal build tooling, frameworks, and architectural patterns.
- Collaborate with software and data engineers, product managers, and data scientists to understand platform needs, manage infrastructure as code using tools like Terraform and Packer, and monitor system performance using Datadog.
- Work with data engineers and software engineers to manage and scale data streaming platforms like Kafka and Redpanda to support high-throughput data processing.
- Get to build alongside an incredible team of software engineers, mechanical engineers, operators, and the best machinists/CAM programmers in the world.
This might be a good fit if you
- Have extensive experience shipping modern, data-centric applications (our data systems use Argo Workflows, Dagster, Superset, Aurora, RDS, and S3; our back ends are written in Go and Python, with gRPC/Avro and Kafka as our messaging platform).
- Have experience with IaC and GitOps tooling (we use Terraform extensively and have standardized on Kubernetes/Argo/Helm).
- Are well versed in data querying and optimization techniques across NoSQL and SQL platforms.
- Are proficient with build tools and infrastructure such as Buildkite/GHA/CircleCI, ArgoCD/Argo Workflows, and Env0/TFCloud/Spacelift.
- Have experience with AWS services and tools, including EC2, S3, Aurora, IAM, and others.
- Work with a platform mentality -- driven to find the right architecture and plan up front, and to solve problems with the long term in mind.
- Are excited to work in a fast-paced environment with high stakes and quick iteration cycles.
Nice to have (or excited to learn! You don’t have to possess these to be a great fit.)
- Familiarity with distributed data processing and message queuing systems such as Kafka/Redpanda and Differential Dataflow.
- PostgreSQL optimization expertise.
- Understanding of network technologies such as VLANs and Network-Attached Storage (NAS) systems.
- Experience implementing business continuity plans for infrastructure.
- Expertise in managing data subject to regulatory compliance requirements.
$205,000 - $235,000 a year
ITAR Requirements
To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.