
2024

Getting Started with Language Models in 2025

After a year of building AI applications and contributing to projects like Instructor, I've found that getting started with language models is simpler than most people think. You don't need a deep learning background or months of preparation - just a practical approach to learning and building.

Here are three effective ways to get started (and you can pursue all of them at once):

  1. Daily Usage: Put Claude, ChatGPT, or other LLMs to work in your daily tasks. Use them for debugging, code reviews, planning - anything. This gives you immediate value while building intuition for what these models can and can't do well.

  2. Focus on Implementation: Start with Instructor and basic APIs. Build something simple that solves a real problem, even if it's just a classifier or text analyzer (see the sketch after this list). The goal is getting hands-on experience with development patterns that actually work in production.

  3. Understand the Tech: Write basic evaluations for your specific use cases. Generate synthetic data to test edge cases. Read papers that explain the behaviors you're seeing in practice. This deeper understanding makes you better at both using and building with these tools.
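
For a concrete starting point, here's a minimal sketch of the kind of classifier mentioned in point 2, using Instructor with the OpenAI client. The label set, model name, and prompt are placeholders, not a prescription:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel
from typing import Literal


class TicketLabel(BaseModel):
    # Hypothetical label set - swap in whatever categories your problem needs
    label: Literal["bug", "feature_request", "question"]


# Patch the OpenAI client so responses are validated against the Pydantic model
client = instructor.from_openai(OpenAI())


def classify(ticket: str) -> TicketLabel:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=TicketLabel,
        messages=[
            {"role": "system", "content": "Classify the support ticket."},
            {"role": "user", "content": ticket},
        ],
    )


print(classify("The app crashes whenever I upload a PNG").label)
```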

You can and should do all of these at once. Remember that the goal isn't expertise but discovering which aspect of the space you're most interested in.

There are a tremendous number of possible directions to work on - dataset curation, model architecture, hardware optimisation, and more - as well as other exciting directions such as post-Transformer architectures and multimodal models, all happening at the same time.

What Happened in 2024

2024 has been a year of remarkable transformation. Just two and a half years out of college, I went from feeling uncertain about my path in software engineering to finding my stride in machine learning engineering. It's been a journey of pushing boundaries – improving my health, contributing to open source, and diving deeper into research.

The year has felt like a constant acceleration, especially in the last six months, where everything from technical growth to personal development seemed to shift into high gear.

Four achievements stand out from this transformative year:

  • Helped grow instructor from ~300k downloads to 1.1M downloads this year as a core contributor
  • Quit my job as a software engineer and started working full-time with LLMs
  • Got into better shape - lost about 6 kg, and my total cholesterol dropped by 32% with lifestyle changes
  • Delivered my first four technical talks this year

A Weekend of Text to Image/Video Models

You can find the code for this post here.

I had a lot of fun playing around with text-to-image models over the weekend and thought I'd write a short blog post about some of the things I learnt. I ran all of this on Modal and spent ~10 USD across the entire weekend, which is honestly well below Modal's $20 free-tier credit.

This was mainly for a small project I've been working on called CYOA, where users get to create their own stories and have a language model automatically generate images and choices for each of them.
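
To give a sense of the setup (this isn't the actual project code - the model, GPU type, and app name are all assumptions), a text-to-image function on Modal can look roughly like this:

```python
import modal

# Build an image with the inference dependencies baked in
image = modal.Image.debian_slim().pip_install(
    "diffusers", "transformers", "accelerate", "torch"
)
app = modal.App("text-to-image-sketch", image=image)


@app.function(gpu="A10G", timeout=600)
def generate(prompt: str) -> bytes:
    import io

    import torch
    from diffusers import AutoPipelineForText2Image

    # SDXL Turbo is just one example of a fast text-to-image model
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16
    ).to("cuda")
    result = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]

    buffer = io.BytesIO()
    result.save(buffer, format="PNG")
    return buffer.getvalue()


@app.local_entrypoint()
def main():
    png = generate.remote("a watercolor castle on a cliff at sunset")
    with open("output.png", "wb") as f:
        f.write(png)
```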

Simplify your LLM Evals

Although many tasks require subjective evaluations, I've found that starting with simple binary metrics can get you surprisingly far. In this article, I'll share a recent case study of extracting questions from transcripts. We'll walk through a practical process for converting subjective evaluations to measurable metrics:

  1. Using synthetic data for rapid iteration - instead of waiting minutes per test, we'll see how to iterate in seconds
  2. Converting subjective judgments to binary choices - transforming "is this a good question?" into "did we find the right transcript chunks?"
  3. Iteratively improving prompts with fast feedback - using clear metrics to systematically enhance performance

By the end of this article, you'll have concrete techniques for making subjective tasks measurable and iterating quickly on LLM applications.
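
As a rough illustration of step 2 (the data and helper names here are made up for the example), the "did we find the right transcript chunks?" check can be a simple set comparison against labelled chunk IDs:

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    expected_chunk_ids: set[str]  # chunks a correct answer must be grounded in


# Synthetic cases - in practice, generate these from your own transcripts
cases = [
    EvalCase("What pricing model was discussed?", {"chunk_12"}),
    EvalCase("Who owns the migration work?", {"chunk_03", "chunk_04"}),
]


def retrieve_chunk_ids(question: str, k: int = 5) -> set[str]:
    """Placeholder for whatever retrieval pipeline you're evaluating."""
    raise NotImplementedError


def run_eval() -> float:
    # Binary judgment per case: did we surface every expected chunk?
    passed = [
        case.expected_chunk_ids <= retrieve_chunk_ids(case.question)
        for case in cases
    ]
    return sum(passed) / len(passed)
```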

Is there any value in a wrapper?

I'm writing this as I take a train from Kaohsiung to Taipei, contemplating a question that frequently surfaces in AI discussions: Could anyone clone an LLM application if they had access to all its prompts?

In this article, I'll challenge this perspective by examining how the true value of LLM applications extends far beyond just a simple set of prompts.

We'll explore three critical areas that create sustainable competitive advantages:

  1. Robust infrastructure
  2. Thoughtful user experience design
  3. Compounding value of user-generated data

What it means to look at your data

People always talk about looking at your data but what does it actually mean in practice?

In this post, I'll walk you through a short example. After examining failure patterns, we discovered that our query understanding was aggressively filtering out relevant items. By prompting the model to be more flexible with its filters, we improved the recall of the filtering system I was working on from 0.86 to 1.

There are really two things that make debugging these issues much easier:

  1. A clear objective metric to optimise for - in this case, I was looking at recall (whether or not the relevant item was present in the top-k results)
  2. An easy way to look at the data - I like using Braintrust, but you can use whatever you want.

Ultimately debugging these systems is all about asking intelligent questions and systematically hunting for failure modes. By the end of the post, you'll have a better idea of how to think about data debugging as an iterative process.
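
To make that concrete, here's a rough sketch of the kind of recall check and failure hunt described above - the data shape and `apply_filters` helper are illustrative, and I've left the data-viewing tool out to keep it self-contained:

```python
def apply_filters(query: str, items: list[dict]) -> list[dict]:
    """Placeholder for the LLM-driven filtering step being debugged."""
    raise NotImplementedError


def evaluate(dataset: list[dict]) -> float:
    hits, failures = 0, []
    for row in dataset:
        kept_ids = {item["id"] for item in apply_filters(row["query"], row["items"])}
        if row["relevant_id"] in kept_ids:
            hits += 1
        else:
            failures.append(row)

    # Looking at your data means reading the failures, not just the score
    for row in failures:
        print(f"MISSED {row['relevant_id']!r} for query: {row['query']}")

    return hits / len(dataset)
```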

Taming Your LLM Application

This is an article that sums up a talk I'm giving in Kaohsiung at the Taiwan Hackerhouse Meetup on Dec 9th. If you're interested in attending, you can sign up here.

When building LLM applications, teams often jump straight to complex evaluations - using tools like RAGAS or another LLM as a judge. While these sophisticated approaches have their place, I've found that starting with simple, measurable metrics leads to more reliable systems that improve steadily over time.

Five levels of LLM Applications

I think there are five levels that teams seem to progress through as they build more reliable language model applications.

  1. Structured Outputs - Move from raw text to validated data structures
  2. Prioritizing Iteration - Using cheap metrics like recall/MRR to ensure you're nailing down the basics
  3. Fuzzing - Using synthetic data to systematically test for edge cases
  4. Segmentation - Understanding the weak points of your model
  5. LLM Judges - Using LLM as a judge to evaluate subjective aspects

Let's explore each level in more detail and see how they fit into a progression. We'll use instructor in these examples since that's what I'm most familiar with, but the concepts can be applied to other tools as well.
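
As a taste of level 5, an LLM judge built with instructor can be as small as a Pydantic model with a verdict and the reasoning behind it. The rubric, model name, and field names below are placeholders for illustration:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class Judgement(BaseModel):
    reasoning: str  # ask for reasoning first so the verdict is grounded in it
    is_faithful: bool  # a binary verdict keeps the metric easy to aggregate


client = instructor.from_openai(OpenAI())


def judge(question: str, context: str, answer: str) -> Judgement:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=Judgement,
        messages=[
            {
                "role": "system",
                "content": "Judge whether the answer is fully supported by the context.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\nContext: {context}\nAnswer: {answer}",
            },
        ],
    )
```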

Are your eval improvements just pure chance?

A step that's often missed when benchmarking retrieval methods is determining whether any performance difference is due to random chance. Without this crucial step, you might invest in a system upgrade that outperformed your old one by pure chance.

If you're comparing retrieval methods, you'll often want to know if the improvements you're seeing are due to random chance.

In this article, we'll use a simple case study to demonstrate how to answer this question, introducing a new library called indomee (a playful nod to both "in-domain" evaluation and the beloved instant noodle brand in Southeast Asia) that makes this analysis significantly easier.

We'll do so in three steps:

  1. First we'll simulate some fake data using numpy
  2. Then we'll demonstrate how to do bootstrapping using nothing but numpy before visualising the results with matplotlib
  3. Finally we'll perform a paired t-test to determine if the differences are statistically significant
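
As a preview of what that looks like with nothing but numpy and scipy (the numbers below are simulated for illustration, not results from indomee or any real system):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate per-query binary outcomes (e.g. recall@5 hit/miss) for two
# retrieval methods scored on the same 200 queries - purely fake data.
n_queries = 200
baseline = rng.binomial(1, 0.72, size=n_queries)
candidate = rng.binomial(1, 0.78, size=n_queries)


def bootstrap_diff(a: np.ndarray, b: np.ndarray, n_resamples: int = 10_000) -> np.ndarray:
    # Resample the same query indices for both methods (paired bootstrap)
    idx = rng.integers(0, len(a), size=(n_resamples, len(a)))
    return b[idx].mean(axis=1) - a[idx].mean(axis=1)


diffs = bootstrap_diff(baseline, candidate)
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"mean difference = {candidate.mean() - baseline.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Paired t-test, since both methods were evaluated on the same queries
t_stat, p_value = stats.ttest_rel(candidate, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```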

What Makes Good Documentation

Over the past year, we've grown instructor's documentation to over 60,000 lines of content. This means for every line of code in our library, we've written 5 lines of documentation. Through this process, I've realized that great documentation isn't just about explaining features - it's about demonstrating value.

Why User Intent matters the most for Synthetic Data

Introduction

I've generated millions of tokens worth of synthetic data over the last few weeks, and I've learned something surprising: everyone talks about using different personas or complex question structures when creating synthetic data, but they're missing what really matters.

The most important thing is actually understanding why users are asking their questions in the first place - their intent.

Let's explore this concept using Peek, an AI personal finance bot, as our case study.

By examining how synthetic data generation evolves from basic documentation-based approaches to intent-driven synthesis, we'll see why focusing on user intent produces more valuable training data.
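
To make the contrast concrete, here's a rough sketch of what intent-conditioned generation can look like with instructor. The intents, model name, and prompt are hypothetical - the real taxonomy should come from studying actual user conversations:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel
from typing import Literal

# Hypothetical intents for a personal finance assistant
Intent = Literal["track_spending", "plan_savings", "understand_fees"]


class SyntheticQuestion(BaseModel):
    intent: Intent
    question: str


class SyntheticQuestions(BaseModel):
    questions: list[SyntheticQuestion]


client = instructor.from_openai(OpenAI())


def generate_questions(intent: Intent, n: int = 5) -> SyntheticQuestions:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=SyntheticQuestions,
        messages=[
            {
                "role": "user",
                "content": (
                    f"Write {n} realistic questions that a user with the intent "
                    f"'{intent}' would ask a personal finance bot."
                ),
            }
        ],
    )
```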