AVP Data Engineering - GE05AE
We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.
We are seeking an Assistant Vice President (AVP) of Data Engineering, a highly skilled and experienced data and technology leader, to join our dynamic team. The ideal candidate will have a strong background in real-time data streaming, expertise in graph knowledge bases, familiarity with AI Retrieval-Augmented Generation (RAG) architectures, and a proven track record of enabling self-serve analytics and AI use cases. As the AVP of Data Engineering, you will lead a team of data engineers to design, build, and maintain scalable data pipelines and infrastructure that support our company's strategic objectives and empower our stakeholders with data-driven insights.
This role will have a Hybrid work arrangement, with the expectation of working in an office location (Hartford, CT; Charlotte, NC; Chicago, IL; Columbus, OH) 3 days a week (Tuesday through Thursday). Candidate must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
Key Responsibilities:
Leadership & Strategy:
- Lead and manage a team of data engineers, providing technical guidance, mentoring, and career development opportunities.
- Collaborate with cross-functional teams, including data science, product development, and IT, to align data engineering initiatives with business goals.
- Develop and implement strategies for data infrastructure that support real-time data streaming, graph databases, and AI-powered applications.
- Be a thought leader, driving positive change and simplification while improving delivery speed.
Enable Self-Serve Use Cases:
- Design and implement self-serve data platforms that empower business users to access, analyze, and leverage data without heavy reliance on engineering teams.
- Enable data democratization by building data marketplaces and enabling natural-language querying of structured and unstructured data (Generative BI).
Support AI Use Cases:
- Collaborate with AI and data science teams to develop and support AI use cases, ensuring data pipelines and infrastructure are tailored to meet the demands of AI/ML workflows.
- Facilitate the integration of AI capabilities into business processes by providing robust and scalable data solutions.
Real-Time Data Streaming:
- Design, build, and maintain scalable and robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark streaming or similar.
- Ensure the efficient ingestion, processing, and delivery of data to various stakeholders and applications in real-time.
Graph Knowledge Base:
- Architect, implement, and maintain graph databases (e.g., Neo4j, Amazon Neptune) to support complex data relationships and queries.
- Integrate graph databases with other data platforms and applications to enhance data connectivity and accessibility.
AI RAG Architectures:
- Collaborate with data scientists and AI engineers to develop and deploy Retrieval-Augmented Generation (RAG) systems that enhance AI-driven applications.
- Ensure data pipelines are optimized for AI/ML workflows, including data preparation, feature engineering, and model deployment, and enable feature stores.
Unstructured Data Mining:
- Develop and implement strategies for mining and analyzing unstructured data (e.g., text, images, video) to extract valuable insights.
- Integrate unstructured data with structured data sources to provide a holistic view of business operations and opportunities.
Data Governance:
- Implement data governance capabilities, including data quality, lineage, and data catalog capture, holistically and strategically across a large-scale data platform.
Reliability Engineering:
- Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
- Implement best practices in reliability engineering, including redundancy, fault tolerance, and disaster recovery strategies.
- Work closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
Automation and Scalability:
- Ensure that automation and data strategy implementations are scalable, reusable, and integrated.
Qualifications:
- 15+ years of experience in data engineering, including the design, development, and delivery of large-scale data ecosystems.
- 10+ years of technical leadership.
- Mastery-level data engineering and architecture skills: a deep understanding of data architecture patterns (data warehouse, integration, data lake, data domains, data products, BI) and cloud technology capabilities.
- Hands-on expertise in DynamoDB (NoSQL databases), Kinesis, AWS EC2, Python, and Spark (Scala); Spark Streaming experience is a MUST.
- Good hands-on experience in unstructured data mining and content summarization.
- Strong experience with design and development of complex data ecosystems leveraging next generation of cloud technology stack across AWS Cloud, PySpark and Snowflake.
- Strong experience with near-real-time data processing and handling ELT/data transformation on streaming data.
- Experience ingesting and curating large volumes of data from various platforms for digital (APIs), reporting, analytics, and transactional (operational data store and API) needs.
- Extensive experience in NoSQL database design and optimization for API consumption.
- Ability to think outside the box and create reusable, metadata-driven frameworks for complex transformations, data quality, and profiling.
- Experience building frameworks that automate validation and testing.
- Performance tuning and problem-solving skills are a MUST.
- Broad knowledge across multiple technologies, including big-data solutions, microservices, containers, cloud-based solutions, and integration methodologies.
- Strong experience with BI tools like Tableau, ThoughtSpot or similar.
- In-depth knowledge of data management strategies, principles, and practices, including experience with frameworks for data quality, data governance, data domains, data products, stewardship, and metadata management.
- Experience monitoring performance and advising on necessary infrastructure changes.
- Experience with omni-channel customer insights and in the Property & Casualty insurance industry is preferred.
- Understanding of enterprise data models and utilization of transactional and relational data from source systems such as contact centers.
- Agile experience and mindset. Portfolio delivery experience across the full solution lifecycle, from planning through delivery and ongoing management, producing solution designs that are viable and can be successfully constructed, implemented, operated, and managed.
Compensation:
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford’s total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is: $176,000 - $264,000.
Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age