Senior PySpark Data Engineer

Job Description

Senior PySpark Data Engineer needed for a dynamic team in the Middle East, working on diverse projects. Responsibilities include requirements clarification, solution design, unit/integration testing, QA support, process optimization, cross-team collaboration, documentation, code reviews, and staying updated on industry trends. The role requires strong Python and PySpark expertise, experience with big data technologies (Hadoop, Spark), ETL processes, data pipelines, SQL, and data security/governance. Familiarity with cloud technologies (Azure preferred), data visualization tools, and version control (Git) is essential.
Good To Have:
  • Streaming data processing (Kafka)
  • Containerization (Docker)
  • Data modeling & evaluation
  • Model training, deployment & maintenance
  • Machine learning algorithms, NLP, Neural Networks
  • Applied mathematics proficiency
  • Financial market knowledge
Must Have:
  • 5+ years' experience as a Senior Data Engineer
  • Big Data Technologies (Hadoop, Spark)
  • Python & PySpark expertise
  • Advanced SQL knowledge
  • ETL & Data Pipeline experience
  • Data Security & Governance understanding
  • API integration & version control (Git)
  • Cloud technology experience (Azure preferred)

Project description

Join our dynamic team working on exciting projects in the thriving Middle East region. We offer a multitude of opportunities across various domains. Our diverse team comprises skilled professionals, including front-end and back-end developers, data analysts, data scientists, architects, and project managers. Currently, we are actively seeking a talented Data Engineer with proficiency in Python programming.

Responsibilities

Actively engage in requirements clarification and contribute to sprint planning sessions.

Design and architect technical solutions that align with project objectives.

Develop comprehensive unit and integration tests to ensure the robustness and reliability of the codebase.

Provide valuable support to QA teammates during the acceptance process, addressing and resolving issues promptly.

Continuously assess and refine best practices to optimize development processes and code quality.

Collaborate with cross-functional teams to ensure seamless integration of components and efficient project delivery.

Stay abreast of industry trends, emerging technologies, and best practices to contribute to ongoing process improvement initiatives.

Contribute to documentation efforts, ensuring clear and comprehensive records of technical solutions and best practices.

Actively participate in code reviews, providing constructive feedback and facilitating knowledge sharing within the team.

Skills

Must have

Technical skills:

5+ years of relevant experience in a Senior Data Engineer role

Big Data Technologies: Familiarity with big data technologies such as Hadoop, Apache Spark, or other distributed computing frameworks.

Data Security and Governance: Comprehensive understanding of data security principles and practices to ensure the confidentiality and integrity of sensitive information, coupled with knowledge of data governance frameworks and practices for ensuring data quality, compliance, and proper data management.

Python and PySpark: Demonstrated strong expertise in both Python and PySpark for efficient data processing and analytics (a brief illustrative sketch follows this skills list).

Advanced SQL Knowledge: Proficient in SQL with the ability to handle complex queries and database operations.

ETL Experience: Prior experience working with Extract, Transform, Load (ETL) processes.

Data Pipelines: Familiarity with data cleansing, data profiling, data lineage, and adherence to best practices in data engineering.

Data Analysis Approaches: Some experience with a range of data analysis methodologies.

Python Libraries: Familiarity with building reusable Python libraries and packages to share common functionality.

API Integration: Knowledge of integrating data pipelines with various APIs for seamless data exchange between systems.

Version Control: Proficiency in version control systems, such as Git, for tracking changes in code and collaborative development.

Cloud Technology Experience: Prior exposure to cloud technologies, preferably Azure or another leading cloud platform.

Data Visualization: Some exposure to data visualization tools like Tableau, Power BI, or others to create meaningful insights from data.

Collaboration Tools: Familiarity with collaboration tools such as Azure DevOps, Jira, Confluence, or others to enhance teamwork and project documentation.

Educational Background: A degree in computer science, mathematics, statistics, or a related technical discipline.

Financial Markets Knowledge: Familiarity with financial markets, portfolio theory, and risk management is a plus.
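
To make the Python/PySpark, SQL, and ETL expectations above concrete, here is a minimal illustrative sketch of a batch pipeline. It is not part of the role description; the dataset name, column names, and paths are hypothetical placeholders.

```python
# A minimal, illustrative PySpark ETL sketch. The "trades" dataset, its columns,
# and all paths are hypothetical placeholders, not part of this job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV data (hypothetical source path)
raw = spark.read.option("header", True).csv("/data/raw/trades.csv")

# Transform: basic cleansing plus an aggregate expressed in SQL
clean = (
    raw.dropDuplicates(["trade_id"])                 # remove duplicate records
       .filter(F.col("amount").isNotNull())          # drop rows missing amounts
       .withColumn("amount", F.col("amount").cast("double"))
)
clean.createOrReplaceTempView("trades")
daily_totals = spark.sql("""
    SELECT trade_date, SUM(amount) AS total_amount
    FROM trades
    GROUP BY trade_date
""")

# Load: write curated output as partitioned Parquet (hypothetical target path)
daily_totals.write.mode("overwrite").partitionBy("trade_date").parquet(
    "/data/curated/daily_totals"
)
```

A production pipeline would also cover schema enforcement, data profiling, lineage capture, and access controls, in line with the data security and governance expectations above.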

Non-technical skills:

Problem-Solving: Strong problem-solving skills to tackle complex data engineering challenges.

Data Storytelling: Ability to convey insights effectively through compelling data storytelling.

Quality Focus: Keen attention to delivering high-quality solutions within specified timelines.

Team Collaboration: Proven ability to work collaboratively within a team, taking a proactive approach to problem resolution and process improvement.

Communication Skills: Excellent communication skills to articulate technical concepts clearly and concisely.

Nice to have

Streaming Data Processing: Exposure to streaming data processing technologies like Apache Kafka for real-time data ingestion and processing (a brief sketch follows this list).

Containerization: Knowledge of containerization technologies like Docker for creating, deploying, and running applications consistently across various environments.

Data Modeling and Evaluation: Extensive experience in data modeling and the evaluation of large datasets.

Model Training, Deployment, and Maintenance: Background in training, deploying, and maintaining models for effective data-driven decision-making.

Machine Learning: Experience in developing and implementing machine learning algorithms, Natural Language Processing (NLP), and neural networks.

Applied Mathematics: Proficiency in applied mathematics, including but not limited to linear algebra, probability, statistics, and distributions.
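
For the streaming item above, here is a minimal Spark Structured Streaming sketch that reads from Kafka. The broker address, topic, and paths are hypothetical, and it assumes the spark-sql-kafka connector is available on the classpath.

```python
# A minimal, illustrative Spark Structured Streaming sketch reading from Kafka.
# Broker address, topic name, and output paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Subscribe to a Kafka topic (requires the spark-sql-kafka connector)
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "market-events")
         .load()
)

# Kafka delivers binary key/value columns; decode the value payload to a string
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Write the decoded stream out with checkpointing for fault tolerance
query = (
    decoded.writeStream.format("parquet")
           .option("path", "/data/streams/market-events")
           .option("checkpointLocation", "/checkpoints/market-events")
           .start()
)
query.awaitTermination()
```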

Other

Languages

English: C1 Advanced

Seniority

Senior
