About the job
Summary by Outscal
A Middle/Senior Data Engineer is needed for an AdTech company in São Paulo, Brazil. The role involves designing, implementing, and maintaining scalable data pipelines for large volumes of data. Experience with Spark, Databricks, and SQL is required.
We have a great opportunity for a talented and experienced Middle/Senior Data Engineer who will become part of our dynamic team in the Advertising and Media domain.
If you are ready to take ownership of technical decisions and solve challenging problems, this opportunity is for you.
Sound like you? We look forward to welcoming you to our team!
PROJECT
Our client operates in the AdTech domain, specializing in developing cutting-edge applications that drive innovation in digital advertising.
As a Data Engineer, you will be involved in developing an advertising management application in the AdTech domain. Our approach empowers teams with the autonomy to solve business problems using the most effective tools and methods. We value simplicity, beauty, and cost-effectiveness in our solutions.
RESPONSIBILITIES
- Design, implement, and maintain scalable data pipelines to ingest and transform large volumes of data from various sources
- Optimize streaming and batch data processing and storage solutions for performance and scalability
- Monitor system performance, troubleshoot issues, and implement solutions to ensure high availability and reliability
- Collaborate with product managers and other stakeholders to understand data requirements and deliver data solutions that meet business needs
- Communicate technical concepts and solutions effectively to non-technical stakeholders
- Stay abreast of emerging technologies and best practices in data engineering. Identify opportunities to improve data processes, tools, and infrastructure to enhance efficiency and effectiveness
- Document data pipelines, processes, and systems to ensure clarity and maintainability
- Share knowledge and best practices with team members to foster a culture of learning and collaboration
REQUIREMENTS
- 4+ years of hands-on experience in software development and/or big data
- Solid programming skills and fluency in Java
- Familiarity with big data processing frameworks and tools: Spark, Spark Streaming, Databricks, Flink
- Experience with data lakehouse techniques
- Solid SQL skills, with experience writing complex queries and stored procedures and optimizing query performance
- Experience with data warehousing, data modeling techniques, ETL processes, and relational databases (MySQL, PostgreSQL)
- Familiarity with AWS (EC2, S3, etc.) and proficiency in managing cloud-based data solutions
- At least an Upper-Intermediate level of English