The Sponsored Search Data team is responsible for designing, implementing, and maintaining data pipelines, databases, and ETL processes for Walmart's Sponsored Search Advertising platform. We also collaborate with other teams within Walmart to ensure data quality, availability, and reliability for the analytics and reporting that advertisers use on the self-serve platform.
What you'll bring:
- Minimum of 6 years of experience as a Data Engineer or in a similar role.
- Proven expertise in data engineering concepts, database design, ETL processes, and data mining.
- Proficiency with data technologies such as SQL, Python, Spark, Scala, and Hadoop, along with related tools.
- Experience with ETL tools such as Apache Airflow.
- Strong skills working with relational databases (e.g., Azure SQL) and NoSQL databases (e.g., Cassandra).
- Experience with real-time message processing using Apache Kafka.
- Experience in designing and implementing data models for efficient storage and retrieval.
- A growth-oriented mindset and a willingness to raise the technical bar by identifying opportunities to improve existing processes, tools, and systems for greater scale and productivity.
- Experience working with Docker and Kubernetes is a plus.
- Familiarity with cloud computing services such as Google Cloud Platform (GCP) and Microsoft Azure, as well as distributed storage and search systems like Hive and Elasticsearch, is a plus.
- Excellent oral and written communication skills, with the ability to present to both technical and non-technical audiences.
- A strong sense of accountability, ownership, and self-discipline; a focus on high-quality deliverables; and a team-oriented approach that values design thinking, efficiency, and innovation.
What you'll do:
- Build and maintain robust, scalable data pipelines for ingesting, transforming, and storing large volumes of data to support advertising on Walmart.com and its subsidiaries.
- Work on cloud platforms like Azure and Google Cloud for data storage, processing, and analytics.
- Create and deploy large-scale, containerized applications using Docker and Kubernetes in public clouds such as Google Cloud Platform (GCP) and Microsoft Azure.
- Collaborate with other scrum teams, QA, Product, Program Management, and Partner-Ops, and partner with cross-functional project development teams.
- Participate in 24/7 on-call rotations to troubleshoot production issues across cross-functional teams.
- Manage data engineering projects from design to deployment, ensuring timely delivery and meeting project goals.
- Mentor/manage software engineers and lead engineering projects.
- Coordinate and lead technical design discussions through to completion to drive the technical architecture.