The Senior Data Engineer designs, builds, and maintains scalable data infrastructure to support analytics, machine learning, and enterprise decision-making. This role leads the development of big data frameworks, translates complex requirements into robust architectures, and delivers high-performing, reliable data systems.
Key responsibilities include implementing data solutions, defining system requirements, and establishing integration standards and processes.
The ideal candidate brings deep SQL expertise, strong engineering discipline, and proven collaboration skills—partnering across teams, mentoring engineers, and influencing the organization’s technical roadmap.
*THIS ROLE REQUIRES 3 DAYS A WEEK IN OUR QUEBEC CITY OFFICE*
Major Responsibilities:
- Design, build, and maintain scalable data pipelines (batch & streaming) for analytics, reporting, and ML.
- Architect applications and automated tools, translating complex requirements into high-performing solutions.
- Define data/software solutions, including hardware needs, to ensure performance and scalability.
- Establish and enforce standards for data integration, modeling, and schema design (dimensional, star, snowflake).
- Optimize SQL queries and schema performance; ensure data quality and consistency through validation.
- Monitor, troubleshoot, and tune pipelines, databases, and workloads.
- Implement engineering best practices: version control, CI/CD, testing, documentation, and code reviews.
- Collaborate with engineers to integrate data systems into production.
- Mentor junior engineers and provide cross-team technical support.
- Evaluate and recommend new tools, frameworks, and technologies.
- Ensure compliance with data security, governance, access control, and regulations (GDPR, CCPA).
Education and Experience:
- 5+ years of experience in data engineering, software engineering, or data science
- Expert in SQL (query optimization, joins, window functions, partitioning, indexing)
- Proven Snowflake expertise in warehousing and analytics
- Strong knowledge of data modeling, engineering best practices, and distributed systems (Spark, Hadoop, Hive, Presto)
- Hands-on experience with ETL/ELT pipelines, API/event/log integration, and orchestration tools (Airflow/Astronomer required)
- Proficient in dbt for data transformation and modeling
- Skilled with AWS data engineering and infrastructure services
- Strong software engineering fundamentals: clean code, modularization, CI/CD, automated testing
- Solid understanding of object-oriented design, data structures, algorithms, and disaster recovery
- Expertise in scaling and optimizing pipelines, databases, and workloads
- Proficient in Python (preferred), with working knowledge of other modern languages
- Analytical, collaborative, and experienced in agile environments
- Creative, detail-oriented, and results-driven with strong problem-solving skills
Lightcast is a global leader in labor market insights with offices in Moscow, ID (US), the United Kingdom, Europe, and India. We work with partners across six continents to help drive economic prosperity and mobility by providing the insights needed to build and develop our people, our institutions and companies, and our communities.
Lightcast is proud to be an equal opportunity workplace. We consider all qualified applicants without regard to race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Lightcast has always been, and always will be, committed to our diversity of thought and unique perspectives. We seek dynamic professionals from all backgrounds to join our teams, and we encourage our employees to bring their authentic, original, and best selves to work.