AI Research Engineer (Model Evaluation - 100% Remote, Argentina)

Tether Operations Limited


Posted on: 17 June, 2025


Shape the Future of Digital Finance

At Tether Operations Limited, we’re pioneering a global financial revolution by harnessing the power of blockchain technology. Our cutting-edge solutions empower businesses to seamlessly integrate reserve-backed tokens across blockchains.

Innovate with Tether

Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.

Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art facilities.

Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, reducing infrastructure costs and enhancing global communications with cutting-edge solutions.

Tether Education: Democratizing access to top-tier digital learning, empowering individuals to thrive in the digital and gig economies.

About the Job

We are seeking an experienced professional to drive innovation across the entire AI lifecycle by developing and implementing rigorous evaluation frameworks and benchmark methodologies for pre-training, post-training, and inference.

Responsibilities:

  • Develop integrated frameworks that rigorously assess models during pre-training, post-training, and inference. Define key performance indicators such as accuracy, loss metrics, latency, throughput, and memory footprint.
  • Curate high-quality evaluation datasets and design standardized benchmarks to measure model quality and robustness. Ensure consistency in evaluation practices.
  • Engage with product management, engineering, data science, and operations teams to align evaluation metrics with business objectives.
  • Analyze evaluation data to identify and resolve bottlenecks across the model lifecycle. Propose optimizations to enhance model performance, scalability, and resource utilization on resource-constrained platforms.
  • Conduct iterative experiments and empirical research to refine evaluation methodologies and improve overall model reliability.

Requirements:

  • A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or a related field is preferred.
  • Demonstrated experience designing and evaluating AI models across pre-training, post-training, and inference.
  • Strong programming skills and hands-on expertise in evaluation benchmarks and frameworks.
  • Proven ability to conduct iterative experiments and empirical research that drive the continuous refinement of evaluation methodologies.
  • Demonstrated experience collaborating with diverse teams to align evaluation strategies with organizational goals.
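
To give a concrete sense of the inference-stage metrics named above (accuracy, latency, throughput, memory footprint), here is a minimal, illustrative evaluation harness in Python. All function and metric names are hypothetical examples, not Tether's internal tooling:

```python
import time
import tracemalloc

def evaluate(model_fn, dataset):
    """Toy evaluation harness reporting accuracy, mean latency,
    throughput, and peak memory.

    `model_fn` maps an input to a predicted label; `dataset` is a
    list of (input, expected_label) pairs.
    """
    correct = 0
    tracemalloc.start()                      # track peak memory usage
    start = time.perf_counter()
    for x, y in dataset:
        if model_fn(x) == y:
            correct += 1
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    n = len(dataset)
    return {
        "accuracy": correct / n,
        "latency_s": elapsed / n,            # mean per-example latency
        "throughput_qps": n / elapsed,       # examples per second
        "peak_mem_bytes": peak,              # peak traced allocation
    }

# Usage with a trivial stand-in "model":
report = evaluate(lambda x: x % 2, [(i, i % 2) for i in range(1000)])
```

In practice a production framework would also log loss metrics, run on held-out benchmark suites, and aggregate results across hardware targets, but the shape of the measurement loop is the same.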

Tags:
ai
ml