Attendees: @Trinley, @Tenzin_Gayche, @billingsmoore
Agenda: Collaboration on an MT evaluation project for 84000
Key Discussion Points
- OpenPecha will support the translation evaluation effort by providing the Pecha AI Studio platform for evaluators.
- The project will proceed in phases:
  - Train and guide evaluators to ensure standardized annotation.
  - Compare human evaluations with existing automatic metrics to assess correlation (see the sketch after this list).
  - If current metrics underperform, explore training a new model to build a state-of-the-art automatic evaluator.
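A minimal sketch of how the correlation check in the second phase could be run, assuming per-segment human ratings exported from the evaluator platform and scores from an existing automatic metric. All variable names and numbers below are illustrative assumptions, not details from the meeting.

```python
# Hypothetical sketch: compare human evaluation scores against an existing
# automatic metric by computing linear and rank correlation per segment.
from scipy.stats import pearsonr, spearmanr

# Assumed inputs (illustrative only): one human rating and one automatic-metric
# score per translated segment, aligned by index.
human_scores = [4.0, 2.5, 3.0, 4.5, 1.5]        # e.g. evaluator ratings
metric_scores = [0.71, 0.43, 0.55, 0.80, 0.30]  # e.g. automatic metric scores

pearson, _ = pearsonr(human_scores, metric_scores)     # linear correlation
spearman, _ = spearmanr(human_scores, metric_scores)   # rank correlation

print(f"Pearson r = {pearson:.3f}, Spearman rho = {spearman:.3f}")
```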
- Proposed roles:
  - NT will join the Translation & Evaluation team.
  - Gayche will be part of the Tech team.
  - Both will collaborate across technical and evaluation tasks.
Action Items
- @Trinley & @Tenzin_Gayche: Complete setup of Pecha AI Studio for evaluators to annotate translations
Decisions Made
- Decision: OpenPecha will provide infrastructure and platform support for the evaluation effort.
- Decision: Project will move forward in a phased approach, starting with evaluator training and standardization.
- Decision: Teams and roles will follow the proposed assignments (NT on Translation & Evaluation, Gayche on Tech), with collaboration across both areas.