Senior Data Engineer (Data Pipelines, Payload Design) - Java, Apache Beam/Flink, Kafka, and SQL

About the job

This job is with Synechron, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Overview

We are seeking a highly skilled Senior Data Engineer with a strong focus on building robust data streaming pipelines and effective payload design for both batch and streaming use cases. The ideal candidate will have over 7 years of experience in data engineering, with proficiency in Java, Apache Beam or Flink, Kafka, and SQL. If you are passionate about data engineering and thrive in a collaborative Agile environment, we invite you to apply!

Roles And Responsibilities

Data Streaming Pipeline Development:

  • Build and maintain data streaming pipelines that support both batch and stream use cases, ensuring optimal performance and reliability.
  • Focus on effective payload design to ensure data integrity and performance; a minimal pipeline sketch follows this list.
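
As a rough illustration of the unified batch/stream model this role calls for, the sketch below assembles a minimal Apache Beam pipeline in Java that reads events from Kafka, windows them into fixed intervals, and counts them per element. The broker address and topic name are hypothetical placeholders; swapping the unbounded Kafka source for a bounded one (for example, a file-based IO) reuses the same transform chain for batch backfills.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class EventCountPipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("ReadEvents", KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092")        // hypothetical broker
                .withTopic("events")                           // hypothetical topic
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata())                            // drop Kafka metadata, keep KV pairs
         .apply(Values.<String>create())                       // keep only the payload values
         .apply("OneMinuteWindows",
                Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
         .apply("CountPerWindow", Count.<String>perElement()); // count occurrences per window

        p.run().waitUntilFinish();                             // streaming jobs run until cancelled
    }
}
```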

Data Analysis And Design:

  • Conduct data analysis to inform the design of data pipelines and optimize data flows.
  • Develop data models that align with business requirements and ensure scalability; an illustrative payload model follows this list.
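
To make the payload-design emphasis concrete, here is a minimal sketch of a versioned, immutable event payload expressed as a Java record (JDK 16+, in line with the JDK 17 preference below). The domain, field names, and status values are hypothetical; the point is that an explicit schema version, a stable business key, and event-time semantics keep payloads safe to evolve and replay across batch and streaming paths.

```java
import java.time.Instant;

// Hypothetical order-event payload: immutable, explicitly versioned, event-timed.
public record OrderEvent(
        int schemaVersion,   // bump on breaking changes so consumers can branch
        String orderId,      // stable business key for joins and deduplication
        String status,       // e.g. "CREATED", "SHIPPED" (illustrative values)
        Instant occurredAt   // event time, distinct from processing time
) {}
```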

Documentation: Create and maintain thorough documentation of data pipeline architectures, design decisions, and operational processes.

Observability: Ensure proper observability is embedded into the design of data pipelines, allowing for seamless monitoring and troubleshooting.
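
One way to embed observability into a pipeline, in line with the OpenTelemetry preference listed later, is to emit metrics and spans directly from the processing code. The sketch below uses the OpenTelemetry Java API; the instrumentation scope and metric names are illustrative, and it assumes an SDK with an exporter has been configured elsewhere in the application.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

public class PipelineObservability {
    private static final Meter METER = GlobalOpenTelemetry.getMeter("data-pipeline");   // illustrative scope
    private static final Tracer TRACER = GlobalOpenTelemetry.getTracer("data-pipeline");
    private static final LongCounter PROCESSED = METER.counterBuilder("records_processed")
            .setDescription("Records successfully processed")
            .build();

    static void process(String record) {
        Span span = TRACER.spanBuilder("process-record").startSpan(); // one span per record (illustrative granularity)
        try {
            // ... transform and emit the record here ...
            PROCESSED.add(1);                                         // increment on success
        } finally {
            span.end();                                               // always close the span
        }
    }
}
```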

DevSecOps Mindset: Embrace a "you build it, you run it" DevSecOps approach, taking ownership of deployed pipelines and managing their performance in production.

Agile Participation: Actively participate in Agile ceremonies, including grooming user stories, estimation, and sprint planning to ensure alignment with project goals.

Testing: Conduct unit, functional, integration, and non-functional testing to validate data pipeline functionality and performance.
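
At the unit level, Beam's TestPipeline and PAssert allow a transform's output to be validated in-process, without a runner cluster. The following is a minimal JUnit 4 sketch; the transform under test (a per-element count) is illustrative rather than taken from this role's actual pipelines.

```java
import org.apache.beam.sdk.testing.PAssert;
import org.apache.beam.sdk.testing.TestPipeline;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.junit.Rule;
import org.junit.Test;

public class EventCountTest {
    @Rule public final transient TestPipeline pipeline = TestPipeline.create();

    @Test
    public void countsDuplicateEvents() {
        PCollection<KV<String, Long>> counts =
                pipeline.apply(Create.of("a", "b", "a"))   // in-memory test input
                        .apply(Count.<String>perElement());

        // Assert the exact expected output before running the pipeline.
        PAssert.that(counts).containsInAnyOrder(KV.of("a", 2L), KV.of("b", 1L));
        pipeline.run().waitUntilFinish();
    }
}
```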

Must-Have Skills

  • Programming Language: Proficient in Java programming language.
  • Streaming Technologies: Experience with Apache Beam or Flink for building data processing pipelines.
  • Source Code Management: Familiarity with source code management using GitHub.
  • API Consumption: Experience in consuming API endpoints for data integration and processing.
  • Database Management: Proficient with at least one enterprise-grade relational database (e.g., Oracle, SQL Server, PostgreSQL) and at least one NoSQL database.
  • Messaging Systems: Knowledge of Kafka for real-time data processing and streaming; a minimal consumer sketch follows this list.
  • SQL Skills: Intermediate level SQL skills for querying and managing data.
  • Build Tools: Experience with Maven for project management and build automation.
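
As a reference point for the Kafka item above, here is a minimal sketch of a plain Java consumer built on the standard kafka-clients API. The broker address, group id, and topic are hypothetical placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-consumers");      // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));                            // hypothetical topic
            while (true) {
                // Poll in a loop; offsets are auto-committed with default settings.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```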

Preferred Skills

  • Java Version: Familiarity with Java JDK 17.
  • AWS Services: Experience with AWS Managed Service for Flink, AWS EKS, AWS Aurora PostgreSQL, and AWS ElastiCache.
  • CI/CD Tools: Knowledge of GitHub Actions and Docker for managing deployment pipelines.
  • Monitoring and Observability: Proficiency with OpenTelemetry for observability and diagnosing issues within data pipelines.
  • Data Solutions: Experience with Debezium for change data capture and streaming of database changes; an embedded-engine sketch follows this list.
  • Certification: An Associate-level AWS certification is a plus.
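
To illustrate the Debezium item above, the sketch below runs Debezium's embedded engine from plain Java, streaming PostgreSQL changes as JSON events. All connection properties are hypothetical placeholders and the key names follow Debezium 2.x conventions; a real deployment would run the engine on an executor and keep offsets in durable storage.

```java
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;
import java.util.Properties;

public class CdcSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("name", "orders-cdc");                             // hypothetical engine name
        props.setProperty("connector.class",
                "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("database.hostname", "localhost");                 // hypothetical connection details
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "cdc_user");
        props.setProperty("database.password", "secret");
        props.setProperty("database.dbname", "orders");
        props.setProperty("topic.prefix", "orders-db");                      // logical name for emitted topics
        props.setProperty("offset.storage",
                "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/cdc-offsets.dat"); // use durable storage in production

        // run() blocks and delivers each database change as a JSON-serialized event.
        try (DebeziumEngine<ChangeEvent<String, String>> engine =
                DebeziumEngine.create(Json.class)
                        .using(props)
                        .notifying(event -> System.out.println(event.value()))
                        .build()) {
            engine.run();
        }
    }
}
```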

Qualifications

  • Educational Background:
    • Bachelor's degree in Computer Science, Information Technology, or a related field.
    • A Master's degree in Computer Science, Information Technology, or a related field is a plus.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
