Scope:
The Execution Machine Learning team works closely with sales, product, and engineering teams to design and implement the next generation of retail solutions. Data Science team members are tasked with turning data, from small and sparse to massive, into actionable insights that deliver measurable improvements to the customer's bottom line. They use rigorous analysis and repeatable processes to implement both black-box and interpretable models.
Our machine learning platform ingests data in real time, processes information from millions of retail items to serve deep learning models, and produces billions of predictions daily.
What you’ll do:
- Design, architect, implement, and help operate the Execution Machine Learning platform by:
  - Observing inefficiencies, in both cost and reliability, of existing processes
  - Researching alternative solutions using custom or existing open-source technologies
  - Designing replacement processes and components
  - Implementing new processes and extending and configuring open-source components
- Work with the DevOps and Support teams to operate the platform by:
  - Helping implement DevOps best practices for in-house and open-source components
  - Ensuring smooth operation via monitoring and alerting facilities
- Work with data scientists to:
  - Design scalable solutions for both model building and serving
What we are looking for:
- Bachelor’s degree in computer science is required; a Master’s degree is preferred
- 7+ years of software engineering experience building production software.
- Experience with frontend technologies: JavaScript, TypeScript, React
- Good working knowledge of Kubernetes and other virtualized execution technologies
- 4+ years of experience working in at least one cloud environment; GCP preferred.
- 7+ years of Python programming experience with excellent understanding of Object-Oriented Design & Patterns
- 5+ years of experience in building REST APIs
- 1+ years of working experience with Kafka and its integration with cloud services.
- 2+ years of Linux scripting experience
- 1+ years working with traditional and newer relational SQL DBMSs; Hive and BigQuery preferred.
- 1+ years of experience with NoSQL databases such as Cassandra, HBase, and Redis
- Strong CS fundamentals in algorithms and data structures
- Experience working with CI/CD and automated unit and integration testing.
- Some experience with streaming frameworks, preferably Beam on Samza/Flink/Dataflow
- Familiarity with modern Big Data computing platforms such as Hadoop and Spark
- Exposure to one or more of: Pandas, NumPy, sklearn, Keras, TensorFlow, Jupyter, Matplotlib etc.
- Supply chain domain knowledge is a plus.
Our Values
If you want to know the heart of a company, take a look at its values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values
Diversity, Inclusion, Value & Equality (DIVE) is our strategy for fostering an inclusive environment we can be proud of. Check out Blue Yonder's inaugural Diversity Report which outlines our commitment to change, and our video celebrating the differences in all of us in the words of some of our associates from around the world.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.