AI Agent Evaluation Analyst - AI Trainer
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
Who we're looking for: curious and intellectually proactive contributors, the kind of people who double-check assumptions and play devil's advocate.
Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?
This is a flexible, project-based opportunity well-suited for:
- Analysts, researchers, or consultants with strong critical thinking skills
- Students (senior undergrads / grad students) looking for an intellectually interesting gig
- People open to a part-time and non-permanent opportunity
About the project: We're looking for QA contributors for a new project focused on validating and improving complex task structures, policy logic, and evaluation frameworks for autonomous AI agents.
Throughout the project, you'll balance quality assurance, research, and logical problem-solving. This opportunity is ideal for people who enjoy looking at systems holistically and thinking through scenarios, implications, and edge cases.
You do not need a coding background, but you must be curious, intellectually rigorous, and capable of evaluating the soundness and consistency of complex setups.
What you'll be doing:
- Reviewing evaluation tasks and scenarios for logic, completeness, and realism
- Identifying inconsistencies, missing assumptions, or unclear decision points
- Helping define clear expected behaviors (gold standards) for AI agents
- Annotating cause-effect relationships, reasoning paths, and plausible alternatives
- Thinking through complex systems and policies as a human would to ensure agents are tested properly
- Working closely with QA, writers, or developers to suggest refinements or edge case coverage

Requirements:
- Excellent analytical thinking: can reason about complex systems, scenarios, and logical implications
- Strong attention to detail: can spot contradictions, ambiguities, and vague requirements
- Familiarity with structured data formats: can read, though not necessarily write, JSON/YAML (see the short illustration after this list)
- Can assess scenarios holistically: what's missing, what's unrealistic, what might break?
- Good communication and clear writing (in English) to document your findings
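For illustration only (the file format and field names here are hypothetical, not an actual Mindrift schema, and a project may use YAML rather than JSON), "reading" a structured scenario usually means checking that its pieces agree with each other:

```python
import json

# Hypothetical scenario file an evaluator might review.
scenario_text = """
{
  "policy": "Refunds are only issued within 30 days of purchase.",
  "user_request": "I bought this 45 days ago and would like a refund.",
  "expected_agent_behavior": "Agent issues a full refund."
}
"""

scenario = json.loads(scenario_text)

# The kind of check done by reading, not by coding: the expected behavior
# contradicts the stated policy (45 days > 30 days), so the scenario needs
# a fix before it can be used to grade an agent.
print(scenario["policy"])
print(scenario["expected_agent_behavior"])
```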
We also value applicants who have:
- Experience with policy evaluation, logic puzzles, case studies, or structured scenario design
- Background in consulting, academia, olympiads (e.g. logic / math / informatics), or research
- Exposure to LLMs, prompt engineering, or AI-generated content
- Familiarity with QA or test-case thinking (edge cases, failure modes, 'what could go wrong')
- Some understanding of how scoring or evaluation works in agent testing (precision, coverage, etc.); see the sketch after this list
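As a rough sketch of that last point (exact definitions vary between projects, and every name and number below is invented), precision and coverage are usually simple ratios over test outcomes:

```python
# Toy example: a tiny test suite for an agent.
all_cases = {f"case_{i:02d}" for i in range(1, 11)}   # 10 scenarios the agent should handle
evaluated = {f"case_{i:02d}" for i in range(1, 9)}    # 8 scenarios the suite actually exercises
flagged = {"case_01", "case_04", "case_07"}           # cases the suite marked as failures
truly_broken = {"case_01", "case_04", "case_09"}      # cases where the agent really misbehaves

precision = len(flagged & truly_broken) / len(flagged)  # share of flags that were correct
coverage = len(evaluated) / len(all_cases)              # share of scenarios the suite exercises

print(f"precision = {precision:.2f}")  # 0.67: one of the three flags was a false alarm
print(f"coverage  = {coverage:.2f}")   # 0.80: two scenarios are never tested at all
```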
Benefits:
- Get paid for your expertise, with rates that can go up to $38/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise

Mindrift is an equal opportunities employer and welcomes applications from diverse candidates.