Senior Data Engineer - (Big Data and Data Pipelines) - Delhi, India

Experience: 5-9 Years
Data Analysis

Job Description

Findem is a talent data platform that combines 3D data with AI, automating and consolidating top-of-funnel activities across the talent ecosystem. It integrates sourcing, CRM, and analytics, leveraging 3D data to provide comprehensive career insights. Findem is seeking an experienced Big Data Engineer responsible for building, deploying, and managing data pipelines, data lakes, and Big Data processing solutions using various Big Data and ETL technologies. The role requires strong core engineering skills and proficiency in AI-assisted coding tools.


What is Findem:

Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing data pipelines, data lakes, and Big Data processing solutions using Big Data and ETL technologies.

Alongside strong core engineering skills, candidates must be highly proficient in AI-assisted coding. Fluency with AI tools such as Cline, Cursor, or similar is expected as part of a modern engineering workflow. Candidates should be able to use these tools effectively, write good prompts, and guide the AI responsibly to produce high-quality, maintainable code.

Responsibilities

  • Build data pipelines, Big Data processing solutions, and data lake infrastructure using various Big Data and ETL technologies
  • Assemble and process large, complex data sets that meet functional and non-functional business requirements: ETL data from a wide variety of sources (MongoDB, S3, server-to-server transfers, Kafka, etc.) and process it using SQL and Big Data technologies
  • Build analytical tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Build interactive, ad-hoc self-serve query tools for analytics use cases
  • Build data models and data schemas from a performance, scalability, and functional-requirements perspective
  • Build processes supporting data transformation, metadata, dependency, and workflow management
  • Research, experiment with, and prototype new tools and technologies, and drive them to successful adoption

Skill Requirements

  • Must have: strong proficiency in Python or Scala
  • Must be highly proficient in AI-assisted coding; fluency with AI tools such as Cline, Cursor, or similar is expected as part of a modern engineering workflow, including the ability to write effective prompts and guide the AI responsibly to produce high-quality, maintainable code
  • Must have experience with Big Data technologies such as Spark, Hadoop, Athena/Presto, Redshift, and Kafka
  • Experience with file formats such as Parquet, JSON, Avro, and ORC
  • Experience with workflow management tools such as Airflow
  • Experience with batch processing, streaming, and message queues
  • Experience with visualization tools such as Redash, Tableau, or Kibana
  • Experience in working with structured and unstructured data sets
  • Strong problem-solving skills

Good to have

  • Exposure to NoSQL databases such as MongoDB
  • Exposure to cloud platforms such as AWS or GCP
  • Exposure to microservices architecture
  • Exposure to machine learning techniques

The role is full-time and comes with full benefits.

Equal Opportunity

As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
