Fluence Mosaic seeks a Staff Data Engineer with 5+ years of experience in data engineering. Strong Python, SQL, and data pipeline expertise is essential. Experience with AWS and containerization technologies is highly valued. Join a team building the future of the electricity grid!
Must have: Python data engineering, SQL, AWS, data pipelines
Good to have: Terraform, BI tools, visualization tools, data warehouses
Perks: global team, hybrid work
This position supports the Fluence Mosaic team, whose software uses artificial intelligence, advanced price forecasting, portfolio optimization, and market bidding to ensure that energy storage and flexible generation assets respond optimally to price signals sent by the market.
You will be an integral member of the Fluence Mosaic team, maturing, developing, and maintaining a data ingestion and storage platform which serves our trading, forecasting, reporting, and analytical applications.
The position will be based in Fluence’s Houston office, with close interaction with a global team of data scientists, product managers, and subject matter experts to shape the evolution of our company and the future of the electricity grid.
What does a Staff Data Engineer do at Fluence?
Build, test, scale, and refine data ingestion pipelines across our energy market data platform
Design, develop, and maintain systems for querying and processing data, working with data scientists and software engineers to drive efficient solutions
Build and own data frameworks and libraries, supporting others in deploying, operating, and extending your clean, tested code
Help define our data story and enable data-driven solutions at Fluence Mosaic, both technically and culturally
Work with on-call staff, data scientists, software engineers, and customer teams to troubleshoot and debug data pipelines
Help define our data management strategy
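The pipeline responsibilities above can be pictured with a minimal sketch: validate raw market-price records and load them into a time-series store, counting rather than crashing on malformed rows. Every name here is hypothetical and purely illustrative; it is not Fluence code (SQLite stands in for the real storage layer).

```python
import sqlite3
from datetime import datetime, timezone

def ingest_prices(conn, raw_rows):
    """Load raw market-price records into a time-series table.

    Returns (loaded, skipped) counts. All table and field names are
    hypothetical, for illustration only.
    """
    conn.execute(
        """CREATE TABLE IF NOT EXISTS prices (
               ts TEXT NOT NULL,
               node TEXT NOT NULL,
               price_usd_mwh REAL NOT NULL,
               PRIMARY KEY (ts, node)
           )"""
    )
    loaded, skipped = 0, 0
    for row in raw_rows:
        try:
            # Normalize timestamps to UTC; reject unparseable values.
            ts = datetime.fromisoformat(row["ts"]).astimezone(timezone.utc)
            price = float(row["price"])
        except (KeyError, ValueError):
            skipped += 1  # malformed row: count it, don't crash the pipeline
            continue
        conn.execute(
            "INSERT OR REPLACE INTO prices VALUES (?, ?, ?)",
            (ts.isoformat(), row.get("node", "UNKNOWN"), price),
        )
        loaded += 1
    conn.commit()
    return loaded, skipped
```

Skipping and counting bad rows, rather than failing the whole batch, is one common way to meet the correctness and latency requirements mentioned later in this posting.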
What does the ideal candidate look like?
10+ years of professional software engineering experience, including at least 5 years of data engineering experience
Excellent written and verbal communication skills
Expert in Python, with hands-on experience with data engineering tools such as Pandas
Knowledgeable in at least one other programming language (C/C++, Kotlin, Java) with the ability to adapt to new environments
Expert in SQL, with a strong understanding of database design, data modeling, and query tuning; hands-on experience ingesting, storing, and serving time-series data preferred
Hands-on knowledge of PostgreSQL
Experience building RESTful services
Understanding of automated testing
Experience with containerization technologies such as Docker
Experience setting up deployment processes and tooling
Knowledge of available data streaming platforms, distributed compute frameworks, and file storage formats, and the expertise to apply them to meet our organizational needs
Experience building data pipelines with stringent latency and correctness requirements
Hands-on experience working with the AWS cloud and AWS services such as S3, Aurora, RDS, EC2, and Lambda
Strong computer science fundamentals, including knowledge of data structures and algorithms
Proven ability to meet deadlines and deliver solutions quickly at high quality
Passionate about learning and tackling new and exciting technical challenges
Nice to haves:
Working knowledge of Terraform
Experience with BI tools such as Looker and Power BI
Experience with visualization tools in the Python ecosystem, such as Matplotlib and Plotly
Experience working with Data Science teams
Experience working with data warehouses such as Snowflake