Senior Data Engineer
UXBERT Labs
Job Summary
UXBERT Labs is seeking a Senior Data Engineer to design and implement robust data pipelines for processing large datasets. Responsibilities include building and optimizing data lakes, warehouses, and real-time streaming platforms; collaborating with data analysts and scientists; developing and managing data integrations; ensuring data integrity, accuracy, and consistency; troubleshooting and resolving data issues; implementing data governance policies; and optimizing data processing for performance, scalability, and cost-efficiency. The ideal candidate will have 4+ years of experience in data engineering, proficiency in Python, Java, or Scala, strong experience with big data tools (Hadoop, Spark, Kafka), and hands-on experience with cloud platforms (AWS, Azure, GCP). Experience with SQL and NoSQL databases, data warehouse design, containerization tools, and orchestration frameworks is also required.
Must Have
- 4+ years Data Engineering experience
- Proficiency in Python, Java, or Scala
- Big data tools (Hadoop, Spark, Kafka)
- Cloud platforms (AWS, Azure, GCP)
- SQL & NoSQL database expertise
- Data warehouse design (Redshift, BigQuery, Snowflake)
Good to Have
- Arabic language skills
- Experience with Docker and Kubernetes
Job Description
About Us:
At UXBERT Labs, we believe in the power of data pipelines to drive innovation. Our engineers create data architectures that process and store data, unlocking insights across our platforms.
Role Overview:
We're hiring a Senior Data Engineer to design and implement robust data pipelines. If you thrive on structuring large datasets and optimizing performance, this is your opportunity to lead in data engineering.
Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large datasets.
- Build and optimize data lakes, warehouses, and real-time streaming platforms.
- Collaborate with data analysts and scientists to ensure smooth data flow and accessibility.
- Develop and manage data integrations between multiple platforms and third-party services.
- Ensure data integrity, accuracy, and consistency across all storage systems.
- Troubleshoot and resolve data-related issues, ensuring minimal downtime.
- Implement data governance policies and ensure compliance with data security standards.
- Optimize data processing for performance, scalability, and cost-efficiency.
Requirements:
- 4+ years of experience as a Data Engineer, preferably in large-scale data environments.
- Proficiency in Python, Java, or Scala for data engineering.
- Strong experience with big data tools (Hadoop, Spark, Kafka).
- Hands-on experience with cloud platforms (AWS, Azure, GCP) and data services.
- Expertise in SQL and NoSQL databases.
- Experience in designing and maintaining data warehouses (Redshift, BigQuery, Snowflake).
- Familiarity with containerization tools (Docker, Kubernetes) and orchestration frameworks.
- Strong problem-solving skills and the ability to work independently on complex projects.
- Arabic language skills are a plus.
Fun Gesture:
Your pipelines could fuel the next game-changing insight. Let's build the data systems that drive innovation!