Description
Participants will work as evaluators to review and refine rubric items used to assess the quality of Generative AI responses. Evaluators will be given prompts and asked to define criteria (rubrics) that determine whether AI-generated answers meet high-quality standards.
Purpose
To support the training and fine-tuning of Large Language Models (LLMs) by improving the accuracy and effectiveness of evaluation rubrics.
Main requirements
- Native speaker of the required language
- Fluent in English
- Strong analytical and creative thinking skills
- Excellent communication and collaboration abilities
- Familiarity with Generative AI systems
- Excluded countries: Applicants cannot be located in Afghanistan, Argentina, Bolivia, Brazil, Chile, China, Colombia, Cuba, Ecuador, Iran, Iraq, Mexico, North Korea, Panama, Russia, Sudan, Syria, Ukraine (Crimea, Luhansk, Donetsk), United Kingdom, Venezuela, or Yemen
Native speakers needed
Armenian, Bosnian, Cebuano, Farsi (Persian), Galician, Georgian, Haitian Creole, Hausa, Icelandic, Irish, Kazakh, Khmer, Malagasy, Maltese, Mongolian, Pashto, Scottish Gaelic, Shona, Sindhi, Somali, Uzbek, Welsh, Xhosa.
Benefits
- Open worldwide (outside the excluded countries listed above)
- Ongoing work
- Fixed rate per approved asset