Data Engineer I

1-3 Years
Data Analysis

Job Description

As a Data Engineer I, you will play a crucial role in developing, optimizing, and managing our company's data infrastructure, ensuring the availability and reliability of data for analysis and reporting. Your responsibilities will include designing and maintaining database systems, developing and maintaining ETL processes, creating data models, building and managing data warehouses, and integrating data from various sources. You will also focus on data quality, governance, scripting, and working with cloud platforms.
Good To Have:
  • Knowledge of Flink.
  • Knowledge of Airflow.
  • Knowledge of DBT.
  • Experience working with Kubernetes.
  • Experience working with distributed SQL engines like Athena / Presto.
  • Ability to pick the right tools for building reliable, scalable and maintainable systems.
Must Have:
  • Design, implement, and maintain database systems.
  • Develop and maintain ETL processes to move and transform data.
  • Create and update data models to represent data structure.
  • Build and manage data warehouses for large datasets.
  • Integrate data from various sources including APIs and databases.
  • Implement and enforce data quality standards and governance.
  • Develop and automate data processes using Python, Java, or SQL.
  • Use version control systems like Git for codebase changes.
  • Work with cloud platforms (AWS, Azure, GCP) for data infrastructure.
  • 1-3 years of experience in data engineering, BI engineering, or data warehouse development.
  • Excellent command of Python or Java.
  • Excellent SQL skills.
  • Strong knowledge of architecture & internals of Apache Spark.
  • Experience in building ETL Data Pipelines.

About Zeta

Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015.

Our flagship processing platform - Zeta Tachyon - is the industry’s first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 20M+ cards have been issued on our platform globally.

Zeta is actively working with the largest Banks and Fintechs in multiple global markets, transforming customer experience for multi-million-card portfolios.

Zeta has 1,700+ employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from SoftBank, Mastercard, and other investors in 2021.

Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter

About the Role

As a Data Engineer I, you will play a crucial role in developing, optimizing, and managing our company's data infrastructure, ensuring the availability and reliability of data for analysis and reporting.

Responsibilities

  • Database Design and Management: Design, implement, and maintain database systems. Optimize database performance and ensure data integrity. Troubleshoot and resolve database issues.
  • ETL (Extract, Transform, Load) Processes: Develop and maintain ETL processes to move and transform data between systems. Ensure the efficiency and reliability of data pipelines.
  • Data Modeling: Create and update data models to represent the structure of the data.
  • Data Warehousing: Build and manage data warehouses for storage and analysis of large datasets.
  • Data Integration: Integrate data from various sources, including APIs, databases, and external data sets.
  • Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.
  • Scripting and Programming: Develop and automate data processes through programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error-handling mechanisms (a minimal Python sketch follows this list).
  • Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.
  • Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.
  • Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.
  • Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.
  • Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.
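
As a rough illustration of the ETL, scripting, and data-quality responsibilities above, here is a minimal Python sketch that extracts rows from a CSV file, validates and transforms them, and loads them into a SQLite table. The file name, columns, and target table are hypothetical placeholders, not a description of Zeta's actual systems.

import csv
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_orders")

def extract(path: str) -> list[dict]:
    """Read raw rows from a (hypothetical) source CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Validate each row, normalize types, and skip (but log) bad records."""
    clean = []
    for row in rows:
        try:
            order_id = int(row["order_id"])
            amount = float(row["amount"])
            if amount < 0:
                raise ValueError("negative amount")
            clean.append((order_id, row["customer_id"].strip(), amount))
        except (KeyError, ValueError) as exc:
            log.warning("Skipping bad record %r: %s", row, exc)
    return clean

def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    """Idempotently upsert validated records into the target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER PRIMARY KEY, customer_id TEXT, amount REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", records
        )

if __name__ == "__main__":
    load(transform(extract("orders.csv")))

In production the same extract/transform/load split would typically target a cloud warehouse and be scheduled by an orchestrator rather than run as a one-off script.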

Skills

  • Data Modeling and Architecture: Design and implement scalable and efficient data models; develop and maintain conceptual, logical, and physical data models.
  • ETL Development: Create, optimize, and maintain ETL processes to efficiently move data across systems; implement data transformation and cleansing processes to ensure data accuracy and integrity.
  • Data Warehouse Management: Contribute to the design and maintenance of data warehouses.
  • Data Integration: Work closely with cross-functional teams to integrate data from various sources; implement solutions for real-time and batch data integration.
  • Data Quality and Governance: Establish and enforce data quality standards.
  • Performance Tuning: Monitor and optimize database performance for large-scale datasets; troubleshoot and resolve issues related to data processing and storage (see the PySpark sketch after this list).
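
Given the Spark and performance-tuning expectations above, a day-to-day transformation job could look roughly like the PySpark sketch below; the bucket paths, column names, and aggregation are illustrative assumptions, not Zeta datasets.

from pyspark.sql import SparkSession, functions as F

# Hypothetical daily roll-up job; paths and columns are placeholders.
spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")

daily = (
    events
    .dropDuplicates(["event_id"])                     # basic cleansing
    .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

(
    daily
    .repartition("event_date")  # align the shuffle with the output partitioning
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/marts/events_daily/")
)

Partitioning the output by date keeps downstream scans, for example from Athena or Presto, limited to the partitions they actually need.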

Experience and Qualification

  • Bachelor’s or Master’s degree in Engineering (Computer Science, Information Systems) with 1-3 years of experience in data engineering, BI engineering, or data warehouse development.
  • Excellent command of one or more programming languages, preferably Python or Java.
  • Excellent SQL skills.
  • Knowledge of Flink and Airflow (an illustrative Airflow DAG sketch follows this list).
  • Knowledge of DBT.
  • Experience working with Kubernetes.
  • Strong knowledge of the architecture and internals of Apache Spark, with multiple years of hands-on experience.
  • Experience working with distributed SQL engines like Athena / Presto.
  • Experience in building ETL Data Pipelines.
  • Ability to cut through the buzzwords and pick the right tools for building systems centered on the core principles of reliability, scalability, and maintainability.
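
Because the role lists Airflow, the orchestration layer for a pipeline like the one sketched earlier could look roughly like the DAG below (assuming Airflow 2.4 or later for the schedule argument); the DAG id and callables are placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real extract/transform/load steps.
def extract_orders():
    print("extracting source data")

def transform_orders():
    print("validating and transforming")

def load_orders():
    print("loading into the warehouse")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # Run the three steps strictly in order, once per day.
    extract >> transform >> load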

Equal Opportunity

Zeta is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We encourage applicants from all backgrounds, cultures, and communities to apply and believe that a diverse workforce is key to our success.
