Data Engineer

1 day ago • 8+ years • Data Analysis

Job Summary

Synechron is seeking a highly skilled and experienced Senior Data Engineer to join our analytics team in Bangalore. The role's primary purpose is to design, develop, and maintain scalable data pipelines and architectures that support data-driven decision-making and advanced analytics initiatives. As a critical contributor to our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives while upholding high standards of data quality, security, and performance. Your expertise will directly contribute to robust, efficient, and secure data solutions that drive business value across multiple domains.

Job Details

Software Requirements

Required Software & Tools:

  • Databricks Platform (Hands-on experience with Databricks notebooks, clusters, and workflows)
  • PySpark (Proficient in developing and optimizing Spark jobs; see the illustrative sketch after this list)
  • SQL (Advanced proficiency in writing and optimizing complex SQL queries)
  • Data Orchestration Tools such as Apache Airflow or similar (Experience in scheduling and managing data workflows)
  • Cloud Data Platforms (Experience with cloud environments such as AWS, Azure, or Google Cloud)
  • Data Warehousing Solutions (Snowflake highly preferred)
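
To make the expected skill level concrete, the following is a minimal, illustrative PySpark batch job of the kind this role involves on Databricks. It is a sketch only: the source path and table name (/mnt/raw/orders/, curated.orders_daily) and all column names are hypothetical placeholders, not part of this posting.

```python
# Illustrative sketch only: a minimal PySpark batch job (hypothetical
# paths, tables, and columns). On Databricks, `spark` is provided.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

# Ingest raw data from cloud object storage.
raw = spark.read.format("parquet").load("/mnt/raw/orders/")

# Transform: deduplicate and aggregate to a daily grain.
daily = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"))
)

# Persist as a partitioned table for downstream analytics.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("curated.orders_daily"))
```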

Preferred Software & Tools:

  • Kafka or other streaming frameworks (e.g., Confluent, MQTT)
  • CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
  • DevOps practices for data workflows
  • Programming languages: Python (expert level); familiarity with other languages such as Java or Scala is advantageous

Overall Responsibilities

  • Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases.
  • Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals.
  • Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards.
  • Optimize data processes for enhanced performance, reliability, and cost efficiency.
  • Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation (see the streaming sketch after this list).
  • Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput.
  • Stay up-to-date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.
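
As a concrete illustration of the streaming-integration responsibility above, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic. The broker address, topic name, paths, and schema are hypothetical assumptions, and the Kafka source additionally requires the spark-sql-kafka connector package on the cluster.

```python
# Illustrative sketch only: consuming a Kafka topic with Spark Structured
# Streaming (hypothetical broker, topic, schema, and paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         # Kafka delivers raw bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Land the parsed stream incrementally; the checkpoint tracks progress
# so the job can restart without reprocessing.
query = (
    events.writeStream.format("parquet")
          .option("path", "/mnt/bronze/events/")
          .option("checkpointLocation", "/mnt/checkpoints/events/")
          .start()
)
```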

Technical Skills (By Category)

Programming Languages:

  • Essential: Python, SQL
  • Preferred: Scala, Java

Databases/Data Management:

  • Essential: Data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred)
  • Preferred: NoSQL databases, Hadoop ecosystem

Cloud Technologies:

  • Essential: Experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments
  • Preferred: Cloud-native data tools and architecture design

Frameworks and Libraries:

  • Essential: PySpark, Spark SQL, Kafka, Airflow (an Airflow scheduling sketch follows this list)
  • Preferred: additional streaming frameworks, TensorFlow (for data preparation)
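
As referenced above, below is a minimal Apache Airflow DAG sketch showing how a daily pipeline of this kind might be scheduled. The DAG name, task name, and callable are hypothetical; note that the schedule parameter is named schedule_interval on Airflow versions before 2.4.

```python
# Illustrative sketch only: a minimal Airflow DAG for a daily pipeline
# (hypothetical DAG, task, and callable names).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_load():
    # In practice this would trigger a Databricks job or a Spark submit.
    print("running daily load")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load_orders",
                          python_callable=run_daily_load)
```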

Development Tools and Methodologies:

  • Essential: Version control (Git), CI/CD pipelines, Agile methodologies
  • Preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes)

Security Protocols:

  • Familiarity with data security, encryption standards, and compliance best practices

Experience Requirements

  • Minimum of 8 years of professional experience in Data Engineering or related roles
  • Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
  • Practical experience in data modeling, data warehousing, and ETL/ELT workflows
  • Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
  • Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
  • Experience with data orchestration and automation tools is highly valued
  • Prior experience in implementing CI/CD pipelines or DevOps practices for data workflows (preferred)

Day-to-Day Activities

  • Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets
  • Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
  • Automate data workflows and improve pipeline efficiency through performance tuning and best practices
  • Conduct data quality audits and ensure data security protocols are followed (see the audit sketch after this list)
  • Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
  • Contribute to documentation, code reviews, and knowledge sharing within the team
  • Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work processes
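
As a sketch of the data-quality auditing activity mentioned above, the following illustrative PySpark checks fail fast on null keys or duplicate rows so that an orchestrator such as Airflow marks the run as failed. The table and column names are hypothetical placeholders.

```python
# Illustrative sketch only: simple data-quality checks (hypothetical
# table and columns) that fail the run when an invariant is violated.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_audit").getOrCreate()
df = spark.table("curated.orders_daily")

total = df.count()
null_keys = df.filter(F.col("order_date").isNull()).count()
dupes = total - df.dropDuplicates(["order_date", "region"]).count()

# Fail loudly so the orchestrator marks the task as failed.
assert null_keys == 0, f"{null_keys} rows missing order_date"
assert dupes == 0, f"{dupes} duplicate grain rows detected"
```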

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or related field
  • Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
  • Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies

Professional Competencies

  • Strong analytical and problem-solving skills with a focus on scalable solutions
  • Excellent communication skills to effectively collaborate with technical and non-technical stakeholders
  • Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
  • Demonstrated leadership in guiding team members and driving project success
  • Adaptability to an evolving technology landscape and a capacity for innovative thinking
  • Commitment to data privacy, security, and ethical handling of information


About The Company

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.
