AI Engineer Intern

Philips

Job Summary

This AI Engineer Intern position offers the opportunity to participate in the localized deployment and performance optimization of enterprise-grade large language models (LLMs), learn to build enterprise knowledge bases for internal knowledge Q&A and semantic retrieval, and contribute to an intelligent data analysis platform. Responsibilities include LLM deployment, fine-tuning, and API development; knowledge base construction; natural-language-to-SQL conversion; implementation of the data analysis platform's core functions; performance optimization; and assisting with testing, documentation, and deployment of AI systems.

Must Have

  • Participate in the local deployment, fine-tuning, and API development of large language models (LLM) such as GPT, Llama, DeepSeek.
  • Responsible for building enterprise knowledge bases, including document parsing, vectorization, index construction, and retrieval optimization.
  • Participate in the design and implementation of the automatic natural language to SQL conversion module.
  • Design and implement core functions of the data analysis platform, including database connection, SQL execution, and result visualization.
  • Optimize the performance, memory footprint, and response speed of models running locally.
  • Assist the team in completing AI system testing, documentation, and deployment.
  • Bachelor's or Master's degree in Data Science, Computer Science, Information Technology, Artificial Intelligence, Software Engineering, or related fields.
  • Proficient in Python programming and common AI/data science libraries (e.g., Transformers, LangChain, Pandas, NumPy).
  • Familiar with one or more mainstream large model frameworks (e.g., Hugging Face Transformers, Ollama, vLLM, Llama.cpp).
  • Familiar with SQL syntax and database principles, able to write and debug SQL statements independently.
  • Possess strong logical analysis, learning, and problem-solving abilities.
  • Good English listening, speaking, reading, and writing skills (CET-4 or above).

Good to Have

  • Experience with knowledge base or semantic retrieval.
  • Web application or data visualization project experience.

Perks & Benefits

  • Company focuses on personal development.
  • Complete training system including various product trainings.
  • Opportunity for comprehensive professional skill development.
  • Free and harmonious team atmosphere for self-improvement and leveraging strengths.

Job Description

In this position, you will have the opportunity to

  • Participate in the localized deployment and performance optimization of enterprise-level large language models (LLM);
  • Learn how to build enterprise knowledge bases to achieve internal knowledge Q&A and semantic retrieval;
  • Participate in the development of an intelligent data analysis platform: users ask questions in natural language, and the system automatically identifies database tables, generates SQL statements, and performs analysis;
  • Deeply understand core technologies such as natural language processing, semantic parsing, and SQL generation;
  • Access and practice cutting-edge technologies combining AI + data intelligence, and explore the application of generative AI in enterprise scenarios.
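The question-to-SQL flow described above can be sketched in a few lines. This is a hypothetical illustration only: the `generate_sql` stub stands in for the LLM call that the real platform would make, and the `employees` table and sample data are invented for the example.

```python
import sqlite3

def generate_sql(question: str) -> str:
    """Stand-in for an LLM call that maps a question to a SQL statement."""
    if "average salary" in question.lower():
        return "SELECT AVG(salary) FROM employees"
    raise ValueError("question not understood")

def answer(question: str, conn: sqlite3.Connection):
    sql = generate_sql(question)   # 1. generate SQL from natural language
    cur = conn.execute(sql)        # 2. run it against the identified table
    return cur.fetchall()          # 3. return rows for analysis/visualization

# Tiny in-memory database to exercise the pipeline.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ann", 5000.0), ("Bo", 7000.0)])

rows = answer("What is the average salary?", conn)
print(rows)  # [(6000.0,)]
```

In production, the stub would be replaced by a prompted model (served via a framework such as Ollama or vLLM) that also receives the database schema so it can identify the right tables.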

Your responsibilities

  • Participate in the local deployment, fine-tuning, and API development of large language models (LLM) (e.g., GPT, Llama, DeepSeek);
  • Responsible for building enterprise knowledge bases, including document parsing, vectorization, index construction, and retrieval optimization;
  • Participate in the design and implementation of the automatic natural language to SQL conversion module;
  • Design and implement core functions of the data analysis platform, including database connection, SQL execution, and result visualization;
  • Optimize the performance, memory footprint, and response speed of models running locally;
  • Assist the team in completing AI system testing, documentation, and deployment.
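The knowledge-base steps above (document parsing, vectorization, index construction, retrieval) can be sketched with a toy bag-of-words index. This is a deliberately simplified assumption-laden sketch: the documents are invented, and a real system would use an embedding model and a vector store rather than word counts.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Parse a document into a (toy) word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical enterprise documents.
docs = {
    "leave-policy": "employees may take paid annual leave after one year",
    "expense-policy": "submit expense reports within thirty days of travel",
}

# Index construction: precompute one vector per document.
index = {doc_id: vectorize(text) for doc_id, text in docs.items()}

def retrieve(query: str) -> str:
    """Return the id of the most similar document."""
    qv = vectorize(query)
    return max(index, key=lambda d: cosine(qv, index[d]))

print(retrieve("paid annual leave policy"))  # leave-policy
```

The same structure (vectorize, index, retrieve by similarity) carries over directly when word counts are swapped for dense embeddings.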

To be successful in this role, you should have the following skills and experience

  • Bachelor's or Master's degree in Data Science/Computer Science/Information Technology/Artificial Intelligence/Software Engineering or related majors;
  • Familiar with Python programming and proficient in common AI and data science libraries (e.g., Transformers, LangChain, Pandas, NumPy);
  • Familiar with one or more mainstream large model frameworks (e.g., Hugging Face Transformers, Ollama, vLLM, Llama.cpp);
  • Understand the principles of RAG (Retrieval-Augmented Generation) technology, experience with knowledge base or semantic retrieval is preferred;
  • Familiar with SQL syntax and database principles, able to write and debug SQL statements independently;
  • Experience with Web applications or data visualization projects is preferred;
  • Possess strong logical analysis, learning, and problem-solving abilities;
  • Good English listening, speaking, reading, and writing skills (CET-4 or above).
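As a sense of the "write and debug SQL independently" requirement, here is a short self-contained example of a join with aggregation; the table names and sample rows are made up for illustration.

```python
import sqlite3

# In-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employees (name TEXT, dept_id INTEGER, salary REAL);
INSERT INTO departments VALUES (1, 'R&D'), (2, 'Sales');
INSERT INTO employees VALUES ('Ann', 1, 5000), ('Bo', 1, 7000), ('Cy', 2, 4000);
""")

# Average salary per department, highest first.
rows = conn.execute("""
    SELECT d.name, AVG(e.salary) AS avg_salary
    FROM employees AS e
    JOIN departments AS d ON d.id = e.dept_id
    GROUP BY d.name
    ORDER BY avg_salary DESC
""").fetchall()
print(rows)  # [('R&D', 6000.0), ('Sales', 4000.0)]
```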

The unique benefits of this position include

The company focuses on personal development and offers a complete training system, including (but not limited to) various product trainings, giving you the opportunity for comprehensive professional skill development. We encourage you to improve yourself and play to your strengths in a free and harmonious team atmosphere. This is an ongoing (daily) internship position.

Skills Required For This Role

Data Analytics, Data Visualization, Data Science, NumPy, Pandas, Python, SQL
