Mathematician - Freelance AI Trainer
As an AI Tutor in Mathematics on the Mindrift platform, you will generate prompts that challenge AI models, define comprehensive scoring criteria to evaluate the accuracy of the models' answers, and correct the models' responses based on your domain-specific knowledge. You will collaborate on projects aimed at improving GenAI models' ability to address specialized questions and perform complex reasoning. The work involves creating training prompts and refining model responses to help shape the future of AI.
Freelance Software Developer (Java) - Quality Assurance (AI Trainer)
As an AI Tutor in Coding, you will be responsible for code generation and code review, prompt evaluation and complex data annotation, training and evaluation of large language models, benchmarking and agent-based code execution in sandboxed environments, and working across multiple programming languages. The role also involves adapting guidelines to new domains and use cases, following project-specific rubrics and requirements, and collaborating with project leads, solution engineers, and supply managers on complex or experimental projects. Flexibility and quick adaptation to new requirements are essential.
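For illustration only (this is not part of the listing), a minimal Python sketch of the kind of benchmarking harness such evaluation work can resemble: timing a candidate program in a subprocess under a time limit. The command and timeout are assumptions, and a real sandbox would restrict far more than execution time.

    import subprocess
    import sys
    import time

    def run_candidate(cmd, timeout_s=10):
        """Run a candidate solution in a subprocess and report its wall-clock time.

        A real sandbox would also restrict filesystem, network, and memory access;
        this sketch only captures output and enforces a time limit.
        """
        start = time.monotonic()
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
            return {"returncode": result.returncode,
                    "seconds": time.monotonic() - start,
                    "stdout": result.stdout,
                    "stderr": result.stderr}
        except subprocess.TimeoutExpired:
            return {"returncode": None, "seconds": timeout_s, "stdout": "", "stderr": ""}

    if __name__ == "__main__":
        # Placeholder workload; a Java project would run something like ["java", "Solution"].
        print(run_candidate([sys.executable, "-c", "print(sum(range(10**6)))"], timeout_s=5))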
AI Engineer
Design, build, and maintain intelligent, scalable AI systems that directly enhance product functionality and user experience. Build, train, and validate machine learning models (classification, forecasting, recommendation systems) using real-world datasets. Develop and maintain MLOps pipelines, integrating CI/CD, model monitoring, model versioning, and scalability best practices. Rapidly prototype AI features and deploy production-ready solutions. Conduct hyperparameter tuning and optimize models for accuracy, latency, throughput, and resource efficiency. Work with product and engineering teams to translate business needs into practical, high-impact AI solutions. Document methodologies and communicate results clearly. Ensure fairness, interpretability, and regulatory compliance. Implement monitoring to detect model drift, bias, and performance degradation.
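As a simplified, illustrative sketch of the drift monitoring mentioned above (not the employer's actual pipeline), the snippet below compares a feature's live distribution against its training baseline with a two-sample Kolmogorov-Smirnov test; the feature values, sample sizes, and significance threshold are assumptions made for the example.

    import numpy as np
    from scipy import stats

    def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
        """Flag drift when the live feature distribution differs from the training baseline.

        Uses a two-sample Kolmogorov-Smirnov test; production monitoring would
        typically track many features and metrics, not a single univariate check.
        """
        result = stats.ks_2samp(baseline, live)
        return result.pvalue < alpha

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
        live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production values
        print("drift detected:", detect_drift(baseline, live))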
First-Line Supervisors of Food Preparation and Serving Workers - AI Trainer (Contract)
The responsibilities include evaluating AI-generated content related to food preparation and serving work, delivering clear and structured feedback to improve the AI model's understanding of workplace tasks and language, and developing prompts that reflect the field and assessing the model's responses to them. The work is performed remotely and asynchronously with flexible hours and draws on professional experience in food preparation and serving supervision to train AI models.
Member of Technical Staff - ML Research Engineer; Multi-Modal - Audio
Invent and prototype new model architectures that optimize inference speed, including on edge devices; build and maintain evaluation suites for multimodal performance across a range of public and internal tasks; collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large audio datasets; work with the infrastructure team to optimize model training across large-scale GPU clusters; contribute to publications, internal research documents, and thought leadership within the team and the broader ML community; collaborate with the applied research and business teams on client-specific use cases.
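As a hedged sketch of one preprocessing step such pipelines typically include (not this team's actual code), the snippet below resamples an audio clip to 16 kHz and computes a log-mel spectrogram with torchaudio; the file path, target sample rate, and mel settings are assumptions for illustration.

    import torch
    import torchaudio

    def preprocess(path: str, target_sr: int = 16_000, n_mels: int = 80) -> torch.Tensor:
        """Load one audio file, resample it, and return a log-mel spectrogram."""
        waveform, sr = torchaudio.load(path)  # (channels, samples)
        if sr != target_sr:
            waveform = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(waveform)
        mel = torchaudio.transforms.MelSpectrogram(sample_rate=target_sr, n_mels=n_mels)(waveform)
        return torch.log(mel + 1e-6)  # log-compress for numerical stability

    if __name__ == "__main__":
        # "clip.wav" is a placeholder; a real pipeline would iterate over a dataset manifest.
        features = preprocess("clip.wav")
        print(features.shape)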
Software Engineer, Evaluation Frontend
As an Evaluation Frontend Software Engineer, you will design tools and visualizations that enable researchers and engineers to compare and analyze hundreds of model evaluations, including both data visualization tools and statistical tools to extract signal from noisy data. You will develop an understanding of the relative merits and limitations of each model evaluation and suggest new facets to evaluate. Your work will involve collaborating closely with cross-functional teams, including researchers and engineers, to surface the insights needed for model development.
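To make the "extract signal from noisy data" part of the role concrete, here is an assumption-laden sketch (not the team's actual tooling) that estimates a bootstrap confidence interval for the accuracy gap between two models on synthetic per-task scores.

    import numpy as np

    def bootstrap_mean_diff(scores_a, scores_b, n_boot=10_000, seed=0):
        """Return a 95% bootstrap confidence interval for mean(scores_a) - mean(scores_b)."""
        rng = np.random.default_rng(seed)
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            diffs[i] = (rng.choice(scores_a, len(scores_a), replace=True).mean()
                        - rng.choice(scores_b, len(scores_b), replace=True).mean())
        return np.percentile(diffs, [2.5, 97.5])

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        model_a = rng.binomial(1, 0.72, size=500).astype(float)  # synthetic pass/fail results
        model_b = rng.binomial(1, 0.68, size=500).astype(float)
        low, high = bootstrap_mean_diff(model_a, model_b)
        print(f"95% CI for accuracy gap: [{low:.3f}, {high:.3f}]")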
Freelance Cybersecurity Analyst - AI Trainer
Analyze and investigate simulated security alerts and incidents across endpoints, identities, and cloud environments. Conduct proactive threat hunting using KQL or similar query languages to identify hidden vulnerabilities and emerging threats that automated systems may miss. Assess the accuracy and depth of AI-generated security incident reports and threat analyses. Review, validate, and improve the model’s understanding of Microsoft Defender products and SOC workflows. Provide expert feedback on AI performance in identifying and classifying cybersecurity threats.
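The listing does not show actual queries, so the sketch below uses Python rather than KQL to illustrate the shape of one common hunt: flagging repeated failed sign-ins from a single source. The event fields, IP addresses, and threshold are invented for the example and are not tied to any specific product.

    from collections import Counter

    # Simulated sign-in events; a real hunt would query a SIEM (for example, with KQL).
    events = [
        {"user": "alice", "source_ip": "203.0.113.7", "outcome": "failure"},
        {"user": "alice", "source_ip": "203.0.113.7", "outcome": "failure"},
        {"user": "bob",   "source_ip": "198.51.100.2", "outcome": "success"},
        {"user": "alice", "source_ip": "203.0.113.7", "outcome": "failure"},
    ]

    def suspected_bruteforce(events, threshold=3):
        """Return source IPs with at least `threshold` failed sign-ins."""
        failures = Counter(e["source_ip"] for e in events if e["outcome"] == "failure")
        return [ip for ip, count in failures.items() if count >= threshold]

    print(suspected_bruteforce(events))  # ['203.0.113.7']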
Finance Platform Engineer
Use proprietary software applications to provide input and labels on defined projects. Support and ensure the delivery of high-quality curated data. Contribute to the training of new tasks by working closely with the technical staff to develop and implement cutting-edge initiatives and technologies. Interact with technical staff to improve the design of efficient annotation tools. Choose problems from economics fields that align with your expertise, focusing on macroeconomics, microeconomics, and behavioral economics. Regularly interpret, analyze, and execute tasks based on given instructions. Provide services including labeling and annotating data in text, voice, and video formats to support AI model training, sometimes involving recording audio or video sessions.