Senior Software Engineer, Data Backend


About Appier 

Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier is on a mission to turn AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (ticker number: 4180). Visit www.appier.com for more information.

 

About the role

Appier’s solutions are powered by proprietary deep learning and machine learning technologies that empower every business to use AI to turn data into business insights and decisions. As a Senior Software Engineer, Data Backend, you will help build critical components of this platform.

 

Responsibilities

  • Design, develop, and maintain data pipelines (a sketch of typical pipeline code follows this list)
  • Build, manage, and optimize data platforms (e.g., Spark clusters, Kafka clusters)
  • Profile and tune performance of critical components
  • Provide expert advice and solutions to enhance the performance of big data systems and applications
  • Establish and improve the foundational architecture for platforms, and propose solutions to streamline software development, monitoring, and operations
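
For context, the sketch below shows the kind of Spark batch pipeline in Scala that this role designs and maintains. It is illustrative only: the bucket paths, column names, and job name are hypothetical assumptions, not taken from Appier's actual systems.

    // Hypothetical sketch of a minimal Spark batch pipeline (Scala).
    // Bucket paths, column names, and schema are illustrative assumptions.
    import org.apache.spark.sql.{SparkSession, functions => F}

    object DailyClickRollup {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-click-rollup")
          .getOrCreate()

        // Read raw click events (assumed to be stored as Parquet).
        val events = spark.read.parquet("s3://example-bucket/events/")

        // Roll up click counts per campaign per day.
        val daily = events
          .withColumn("day", F.to_date(F.col("event_time")))
          .groupBy("day", "campaign_id")
          .agg(F.count("*").as("clicks"))

        // Write day-partitioned output for downstream consumers.
        daily.write
          .mode("overwrite")
          .partitionBy("day")
          .parquet("s3://example-bucket/rollups/daily_clicks/")

        spark.stop()
      }
    }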

 

About you

[Minimum qualifications]

  • BS/MS degree in Computer Science
  • 2+ years of experience in building and operating large-scale distributed systems or applications
  • Experience in developing Java/Scala projects
  • Experience in building data pipelines using Apache Spark
  • Experience in managing data lakes or data warehouses
  • Expertise in developing data structures and algorithms on top of big data platforms
  • Ability to operate effectively and independently in a dynamic, fluid environment
  • Eagerness to change the world in a huge way by being a self-motivated learner and builder

[Preferred qualifications]

  • Experience in developing Golang/Python projects
  • Experience in profiling and optimizing JVM performance
  • Experience in managing data platforms, such as Hadoop, Kafka, Flink, Trino/ClickHouse, etc.
  • Experience with cloud services (AWS, GCP, Azure)
  • Experience in contributing to open source projects (please provide your GitHub link)
  • Experience with open table formats (Apache Iceberg, Delta Lake, Hudi); see the sketch after this list
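
As a reference point for the open-table-format item above, here is a hedged sketch of writing a Spark DataFrame into an Apache Iceberg table. It assumes the iceberg-spark-runtime jar is on the classpath; the catalog name, warehouse path, and table name are hypothetical, not from the posting.

    // Hypothetical sketch: writing a DataFrame to an Apache Iceberg table.
    // Assumes the iceberg-spark-runtime jar is on the classpath; the catalog
    // name, warehouse path, and table name are illustrative assumptions.
    import org.apache.spark.sql.SparkSession

    object IcebergWrite {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("iceberg-write")
          // Register a Hadoop-backed Iceberg catalog named "demo".
          .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.demo.type", "hadoop")
          .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse")
          .getOrCreate()

        val daily = spark.read.parquet("s3://example-bucket/rollups/daily_clicks/")

        // DataFrameWriterV2 (Spark 3+): create or replace the Iceberg table.
        daily.writeTo("demo.analytics.daily_clicks").createOrReplace()

        spark.stop()
      }
    }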

 


 
