Data Engineer II
Condé Nast
Job Summary
The Data Engineer II will work with team members to fulfill technical needs, owning development and unit testing of data products, analytics, and data engineering solutions. Responsibilities include building efficient code to transform raw data into datasets for analysis, reporting, and data models. The role requires collaboration with other data engineers to implement a shared technical vision and participation in the entire software development lifecycle, from concept to release. The engineer will also coordinate with the Product Owner (PO) team, create JIRA tickets, and keep work logs up to date.
Must Have
- Work with team members to enable them to fulfill technical needs, and own development and unit testing of data products, analytics, and data engineering solutions.
- Build efficient code to transform raw data into datasets for analysis, reporting, and data models.
- Collaborate with other data engineers to implement a shared technical vision.
- Participate in the entire software development lifecycle, from concept to release.
- Coordinate with the PO team, create JIRA tickets, and keep work logs up to date.
Good to Have
- Experience with any one of the ETL tools such as Informatica, Talend ETL, or PentahoDI.
- Conceptual knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes.
- Databricks certification.
Job Description
Location:
Bengaluru, KA
Roles and Responsibilities (including, but not limited to):
- Work with team members to enable them to fulfill technical needs, and own the development and unit testing of data products, analytics, and data engineering solutions.
- Build efficient code to transform raw data into datasets for analysis, reporting, and data models.
- Collaborate with other data engineers to implement a shared technical vision.
- Participate in the entire software development lifecycle, from concept to release.
- Coordinate with the PO team, create JIRA tickets where necessary, and keep work logs up to date.
MINIMUM QUALIFICATIONS
- Applicants should have a degree (B.S. or higher) in Computer Science or a related discipline, or equivalent professional experience.
- Experience designing scalable, automated software systems; strong theoretical knowledge is required.
- Proficiency in Python/PySpark coding; knowledge of data structures and algorithms in Python is preferred.
- Proficiency in SQL.
- Experience with data processing frameworks such as Spark, Flink, or Beam
- Experience with cloud-based infrastructures such as AWS or GCP
- Exposure to orchestration platforms such as Airflow or Kubeflow
- Proven attention to detail, critical thinking, and the ability to work independently within a cross-functional team
- Basic understanding of code versioning tools such as GitHub, SVN, CVS, etc.
- Understanding of Agile framework and delivery
- Experience with any one of the ETL tools such as Informatica, Talend ETL, or PentahoDI would be a plus
- Conceptual knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes would be a plus
- Databricks certification is a plus
What happens next?
If you are interested in this opportunity, please apply below, and we will review your application as soon as possible. You can update your resume or upload a cover letter at any time by accessing your candidate profile.
Condé Nast is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, age, familial status, and other legally protected characteristics.