COMPUTER SCIENCE
Work description
- develop the requirements specification for a software module that enables the use of large language models (LLMs);
- fine-tune pre-trained large language models (LLMs);
- build smart assistants using Retrieval-Augmented Generation (RAG);
- containerize and make trained models available;
- implement the software module;
- write the grant activity report.
Academic Qualifications
- degree in Artificial Intelligence and Data Science or similar;
- student enrolled in a Master's Degree in Informatics Engineering and Computation or similar.
Minimum profile required
- experience in the use of large language models from a development point of view;
- proven experience in fine-tuning large-scale language models;
- proven experience in software development using the Python programming language;
- experience with remote development environments and with version-control tools and methodologies;
- knowledge of applying Retrieval-Augmented Generation (RAG) techniques;
- experience in developing and fine-tuning Transformer architectures and multi-modal models (e.g., CLIP).
Preference factors
- experience incorporating large language models such as Llama and Gemini into software development pipelines;
- experience in REST API development;
- experience with the Xtuner and llamafactory fine-tuning tools;
- experience in quantitative and qualitative evaluation methodologies for large language models;
- experience in containerization technologies;
- good performance in programming courses and in the development of artificial intelligence algorithms.
Application Period
From 07 May 2026 to 20 May 2026
Centre
Human-Centered Computing and Information Science