Technology is our how. And people are our why. For over two decades, we have been harnessing technology to drive meaningful change.
By combining world-class engineering, industry expertise and a people-centric mindset, we consult and partner with leading brands from various industries to create dynamic platforms and intelligent digital experiences that drive innovation and transform businesses.
From prototype to real-world impact - be part of a global shift by doing work that matters.
Job Description
Our data team has expertise across engineering, analysis, architecture, modeling, machine learning, artificial intelligence, and data science. This discipline is responsible for transforming raw data into actionable insights, building robust data infrastructures, and enabling data-driven decision-making and innovation through advanced analytics and predictive modeling.
As a Senior Data Engineer at Endava, you will be responsible for developing and implementing data pipelines, collaborating with data architects, and ensuring data quality and integrity. You will also support data migration projects and contribute to agile development processes.
Responsibilities:
- Work closely with Data Analysts/Data Scientists to understand evolving needs and define data processing flows or interactive reports.
- Engage with stakeholders from other teams to better understand how data flows are used within the existing environment.
- Propose solutions for the cloud-based architecture and deployment flow.
- Design and build processes, data transformations, and metadata to meet business requirements and platform needs.
- Design and propose solutions for the Relational and Dimensional Model based on platform capabilities.
- Develop, maintain, test, and evaluate big data solutions.
- Monitor the production status and data quality of the data environment.
- Pioneer initiatives around data quality, integrity, and security.
Qualifications
- 5+ years of experience in Data Engineering.
- Proficiency in Apache Spark.
- Proficiency in Python.
- Some experience leading IT projects and managing stakeholders.
- Experience implementing ETL/ELT processes and data pipelines.
- Experience with Snowflake.
- Strong SQL scripting experience.
- Background and experience with cloud data technologies and tools.
- Familiarity with data tools and technologies such as:
- Spark, Hadoop, Apache Beam, Dataproc, or similar.
- BigQuery, Redshift, or other data warehouse tools.
- Real-time pipelines with Kinesis or Kafka.
- Batch processing.
- Serverless processing.
- Strong analytical skills for working with both structured and unstructured data.
Additional Information
Discover some of the global benefits that empower our people to become the best version of themselves:
- Finance: Competitive salary package, share plan, company performance bonuses, value-based recognition awards, referral bonus;
- Career Development: Career coaching, global career opportunities, non-linear career paths, internal development programmes for management and technical leadership;
- Learning Opportunities: Complex projects, rotations, internal tech communities, training, certifications, coaching, online learning platforms subscriptions, pass-it-on sessions, workshops, conferences;
- Work-Life Balance: Hybrid work and flexible working hours, employee assistance programme;
- Health: Global internal wellbeing programme, access to wellbeing apps;
- Community: Global internal tech communities, hobby clubs and interest groups, inclusion and diversity programmes, events and celebrations.