Senior Data Engineer

2-5 Years
Data Analysis

Job Description

Loyalty Juggernaut is seeking a Senior Data Engineer to build and maintain optimal data pipeline architecture and infrastructure using SQL and AWS 'big data' technologies. The role involves working with stakeholders to resolve data-related technical issues, supporting data infrastructure needs, and creating tools for data management and analytics to enhance the company's AI-driven SaaS loyalty product, GRAVTY®. The ideal candidate has 2-5 years of backend development experience with Python and strong data skills.
Good To Have:
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, and AWS Batch.
Must Have:
  • 2 to 5 years of relevant backend development experience with Python.
  • Strong skills in Data Structures and Algorithms.
  • Familiarity with database systems, including PostgreSQL and NoSQL solutions like MongoDB or DynamoDB.
  • Hands-on experience using cloud data warehouses such as AWS Redshift or Google BigQuery (GBQ).
  • Solid understanding of ETL processes and tools.
  • Experience managing or building data pipelines and architectures at scale.
  • Understanding of data ingestion, transformation, storage, and analytics workflows.
  • Ability to communicate clearly and work collaboratively across engineering and product.
Perks:
  • Dynamic and supportive work environment.
  • Opportunity to collaborate with talented technocrats.
  • Work with globally recognized brands.
  • Gain exposure and carve your own career path.
  • Innovate and dabble in future technologies such as Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, and Digital Wallets.

At Loyalty Juggernaut, we’re on a mission to revolutionize customer loyalty through AI-driven SaaS solutions. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem with GRAVTY®, our SaaS product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse industries, including Airlines, Airports, Retail, Hospitality, Banking, F&B, Telecom, Insurance, and Ecosystems.

Our Impact:

  • 400+ million members connected through our platform.
  • Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.

Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!

Explore more about us at www.lji.io

What you will OWN:

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL and AWS ‘big data’ technologies.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Work with stakeholders, including the Technical Architects, Developers, Product Owners, and Executives, to assist with data-related technical issues and support their data infrastructure needs.
  • Create tools for data management and data analytics that help stakeholders build and optimize our product into an innovative industry leader.

You would make a GREAT FIT if you:

  • Have 2 to 5 years of relevant backend development experience, with solid expertise in Python.
  • Possess strong skills in Data Structures and Algorithms, and can write optimized, maintainable code.
  • Are familiar with database systems, and can comfortably work with PostgreSQL, as well as NoSQL solutions like MongoDB or DynamoDB.
  • Have hands-on experience using cloud data warehouses like AWS Redshift or Google BigQuery (GBQ).
  • Have experience with AWS cloud services (EC2, EMR, RDS, Redshift, and AWS Batch); this would be an added advantage.
  • Have a solid understanding of ETL processes and tools and can build or modify ETL pipelines effectively.
  • Have experience managing or building data pipelines and architectures at scale.
  • Understand the nuances of data ingestion, transformation, storage, and analytics workflows.
  • Communicate clearly and work collaboratively across engineering and product.

Why Choose US?

  • This opportunity offers a dynamic and supportive work environment where you'll have the chance to not just collaborate with talented technocrats but also work with globally recognized brands, gain exposure, and carve your own career path.
  • You will get to innovate and dabble in the future of technology: Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, Digital Wallets, and much more.
