Lead Data Engineer

Raipur, India, Remote

Experience: 6-8 Years · Full Time

Share Your CV

[email protected]

Job Description

Join our team as a Lead Data Engineer and embark on an exciting journey of innovation and impact! As part of our dynamic team, you will spearhead the creation of cutting-edge Kafka pipelines and drive the generation of insightful Power BI reports. Your mission? To transform our retail operations by building a centralized reporting system that gives complete visibility into our B2B and B2C sales. Through data-driven decision-making, you will lead the charge in optimizing our processes, eliminating bottlenecks, and unlocking new levels of efficiency. Are you ready to make a meaningful difference and shape the future of retail with us?

Responsibilities
  • Design, develop, and maintain scalable data pipelines and infrastructure for efficient processing of large volumes of data.
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
  • Implement streaming data processing solutions using Kafka, Spark Connector, and other relevant technologies.
  • Develop and maintain schemas using Schema Registry for data governance and compatibility.
  • Design and optimize data models for storage and retrieval through distributed query engines such as Trino or Presto, and expose them for analysis via BI tools like Metabase.
  • Implement serverless data processing solutions using AWS Lambda for real-time and batch data processing.
  • Ensure data quality and integrity through rigorous testing and validation procedures.
  • Monitor and troubleshoot data pipelines to ensure optimal performance and reliability.
  • Stay updated with emerging technologies and industry trends to continually improve data engineering practices.
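To give a flavor of the data-quality work described above, here is a minimal sketch, in plain Python with hypothetical field names, of the kind of validation step that might guard a sales-event pipeline before records are published downstream (the schema and channel values are illustrative assumptions, not part of this role's actual systems):

```python
from typing import Optional

# Hypothetical schema for a B2B/B2C sales event; field names are illustrative.
REQUIRED_FIELDS = {"order_id", "channel", "amount", "currency"}
VALID_CHANNELS = {"B2B", "B2C"}

def validate_sale_event(event: dict) -> Optional[str]:
    """Return an error message if the event is invalid, else None."""
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        return f"missing fields: {sorted(missing)}"
    if event["channel"] not in VALID_CHANNELS:
        return f"unknown channel: {event['channel']!r}"
    if not isinstance(event["amount"], (int, float)) or event["amount"] < 0:
        return "amount must be a non-negative number"
    return None

def partition_events(events):
    """Split a batch into valid events and (event, error) rejects."""
    valid, rejects = [], []
    for event in events:
        error = validate_sale_event(event)
        if error is None:
            valid.append(event)
        else:
            rejects.append((event, error))
    return valid, rejects
```

In practice this kind of check would typically be enforced by a Schema Registry-backed serializer rather than hand-written code, with rejected records routed to a dead-letter topic for inspection.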
Requirements
  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • Proven experience as a Data Engineer with a strong understanding of data processing concepts and techniques.
  • Hands-on experience with Kafka, Spark Connector, and Schema Registry for building real-time data pipelines.
  • Proficiency in distributed query engines like Trino or Presto for data analytics and ad-hoc querying.
  • Experience with cloud platforms, particularly AWS, and implementing serverless architectures using AWS Lambda.
  • Strong SQL skills and experience with relational and NoSQL databases.
  • Familiarity with data warehousing concepts and technologies.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.

Preferred Qualifications
  • Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
  • Knowledge of data visualization tools and techniques.
  • Familiarity with Agile development methodologies.
GES is an equal opportunity employer. We celebrate diversity and remain committed to establishing an inclusive environment for all employees.