As a Data Engineer within the CCIB FM technology team, you will be part of a strategic initiative to build an advanced data platform that powers analytics, real-time insights, and intelligent decision-making across the full lifecycle of financial markets operations. This platform will unify multiple sources of structured and unstructured data into a scalable, secure, and performant environment, supporting both operational and strategic business needs.
The project focuses on enabling smart automation and on leveraging cloud-native and open-source technologies. The role requires a hands-on engineer with strong expertise in data modelling, pipeline development, and system performance optimization, particularly with PostgreSQL as a core data store. You'll work closely with cross-functional teams, including product owners, data scientists, and business stakeholders, to deliver impactful data solutions that drive value across global markets.
Responsibilities
Design, implement, and maintain robust data pipelines and data architectures using PostgreSQL and other technologies.
Collaborate with business and technical teams to gather requirements and translate them into efficient data models and technical designs.
Optimize complex SQL queries, indexes, and database schemas for high performance and scalability.
Ensure data integrity, consistency, and availability through rigorous data validation and monitoring.
Enable advanced analytics and machine learning use cases by preparing clean, reliable, and structured datasets.
Support L3 production incidents related to data pipelines or PostgreSQL performance.
Stay abreast of the latest trends and best practices in data engineering and bring innovation to the team.
Must have
8+ years of experience in data engineering, with strong hands-on experience in SQL and PostgreSQL.
Deep understanding of relational databases, query optimization, indexing, partitioning, and performance tuning.
Experience with other data technologies such as MongoDB, ClickHouse, Oracle, and vector stores (e.g., pgvector, FAISS).
Proficient in Python and/or Java for ETL, scripting, and data manipulation.
Familiarity with distributed data processing frameworks such as Apache Spark, Flink, or Storm.
Experience working with data pipeline orchestration tools like Apache Airflow, Kafka, NiFi, or Airbyte.
Understanding of ETL/ELT concepts, data warehousing, and data governance practices.
Exposure to cloud data platforms (AWS Redshift, GCP BigQuery, Azure Data Services, Snowflake) is a plus.
Familiarity with metadata stores (e.g., Informatica) and BI tools (e.g., Tableau, Power BI, Superset).
Strong version control (Git) and CI/CD practices in a collaborative, agile environment.
Excellent communication and problem-solving skills, with the ability to work cross-functionally.
Nice to have
Banking experience
Languages
English: C1 Advanced
Seniority
Senior