
LLMs

You're probably not doing experiments right

I recently started working as a research engineer, and it's been a significant mindset shift in how I approach my work. It's tricky to run experiments with LLMs efficiently and accurately, and after months of trial and error, I've found that three key factors make the biggest difference:

  1. Being clear about what you're varying
  2. Investing time to build out some infrastructure
  3. Doing some simple sensitivity analysis

Let's see how each of these can make a difference in your experimental workflow; a quick sketch of what a simple sensitivity analysis might look like follows.
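To make the third point concrete, here's a minimal sketch of a simple sensitivity analysis: vary one factor at a time, repeat across seeds, and check whether the differences you see are larger than the run-to-run noise. The `run_eval` function is a hypothetical placeholder for whatever evaluation you actually run.

```python
import random
import statistics

def run_eval(temperature: float, seed: int) -> float:
    """Placeholder for a single evaluation run. In a real experiment this
    would call the model and score its outputs; here it returns a noisy
    dummy score so the sketch runs end to end."""
    rng = random.Random(seed)
    return 0.8 - 0.1 * temperature + rng.uniform(-0.02, 0.02)

def sensitivity_to_temperature(temperatures, seeds):
    """Vary one factor (temperature) while holding everything else fixed,
    repeating across seeds to separate real effects from noise."""
    results = {}
    for t in temperatures:
        scores = [run_eval(temperature=t, seed=s) for s in seeds]
        results[t] = (statistics.mean(scores), statistics.stdev(scores))
    return results

if __name__ == "__main__":
    summary = sensitivity_to_temperature(
        temperatures=[0.0, 0.3, 0.7, 1.0],
        seeds=[0, 1, 2, 3, 4],
    )
    for t, (mean, std) in summary.items():
        print(f"temperature={t}: {mean:.3f} ± {std:.3f}")
```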

Why Instructor might be a better bet than Langchain

Introduction

If you're building LLM applications, a common question is which framework to use: Langchain, Instructor, or something else entirely. I've found that this decision really comes down to a few critical factors. We'll walk through them in three parts:

  1. First, we'll talk about testing and granular control, and why you should be thinking about both from the start (a short sketch of what this looks like with Instructor follows this list)
  2. Then, we'll explain why you should evaluate how easily a framework lets you experiment with different models and prompts, and how quickly it adopts new features
  3. Finally, we'll consider why long-term maintenance is also an important factor, and why Instructor often provides a balanced solution, offering both simplicity and flexibility
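As a taste of the first point, here's a minimal sketch of the kind of granular, testable interface Instructor exposes: you define a Pydantic model and get a typed object back, which makes unit testing and swapping models or prompts straightforward. The model name and example prompt are placeholders.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserDetail(BaseModel):
    name: str
    age: int

# Patch the OpenAI client so chat completions can return a Pydantic model
# instead of raw text.
client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_model=UserDetail,
    messages=[{"role": "user", "content": "Jason is 25 years old"}],
)

# Because the output is a typed object, it's trivial to assert on in tests
# and to change models or prompts without touching downstream code.
assert isinstance(user, UserDetail)
print(user.name, user.age)
```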

How to create synthetic data that works

Synthetic data can accelerate AI development, but generating high-quality datasets remains challenging. In this article, I'll walk through a few experiments I've done with synthetic data generation and the takeaways I've learnt so that you can do the same.

We'll do so by covering:

  1. Limitations of simple generation methods: Why simple generation methods produce homogeneous data
  2. Entropy and why it matters: Techniques to increase diversity in synthetic datasets
  3. Practical implementations: Some simple examples of how to increase entropy and diversity to get better synthetic data (one such sketch is included below)
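As a rough illustration of the third point, here's a minimal sketch of one simple way to inject entropy: instead of re-asking the model the same question, vary the persona and topic in every prompt and keep the temperature high. The personas, topics and model name are placeholders.

```python
import itertools
from openai import OpenAI  # assumes OPENAI_API_KEY is set

client = OpenAI()

# Rather than asking the model the same question N times (which tends to
# collapse into near-duplicate outputs), inject explicit sources of
# variation into every prompt.
personas = ["a retired teacher", "a college athlete", "a night-shift nurse"]
topics = ["booking a flight", "disputing a charge", "resetting a password"]

def generate_example(persona: str, topic: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.0,      # higher temperature adds entropy
        messages=[{
            "role": "user",
            "content": (
                f"Write a short customer support message from {persona} "
                f"about {topic}."
            ),
        }],
    )
    return response.choices[0].message.content

synthetic_dataset = [
    generate_example(persona, topic)
    for persona, topic in itertools.product(personas, topics)
]
print(len(synthetic_dataset), "examples generated")
```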

AI Engineering World Fair

What's new?

Last year, we saw a lot of interest in the use of LLMs for new use cases. This year, with more funding and interest in the space, we've finally started thinking about productionizing these models at scale and making sure that they're reliable, consistent and secure.

Let's start with a few definitions

  • Agent: An LLM that is provided with a few tools it can call. The agentic part of the system comes from its ability to make decisions based on some input. This is similar to how Harrison Chase describes it in his article here

  • Evaluations: A set of metrics we can look at to understand where our current system falls short. An example could be measuring precision and recall (a small sketch follows these definitions)

  • Synthetic Data Generation: Data generated by an LLM that is meant to mimic real data
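For the evaluations example above, here's a minimal sketch of computing precision and recall for a single retrieval or extraction result; the document ids are made up for illustration.

```python
def precision_recall(predicted: set[str], relevant: set[str]) -> tuple[float, float]:
    """Compute precision and recall for one retrieval/extraction result."""
    true_positives = len(predicted & relevant)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Example: the system retrieved three documents, two of which were relevant,
# and missed one relevant document entirely.
predicted = {"doc1", "doc2", "doc4"}
relevant = {"doc1", "doc2", "doc3"}
print(precision_recall(predicted, relevant))  # (0.666..., 0.666...)
```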

Grokking LLMs

I've spent the last year working with LLMs and writing a good amount of technical content on how to use them effectively, mostly with the help of structured parsing using a framework like Instructor. Most of what I know now is self-taught and this is the guide that I wish I had when starting out.

It should take about 10-15 minutes at most to read, and I've added some resources along the way that are relevant to you. If you're looking for a higher-level overview, I suggest skimming the first two sections and then focusing more on the application/data side of things!

I hope that after reading this essay, you walk away with an enthusiasm for how much these models are going to change the things we know today. We have models with reasoning abilities and knowledge capacities that dwarf many humans in tasks such as mathematical reasoning, Q&A and more.

Writing scripts that scale

Writing good scripts for machine learning is an art. I struggled with writing them for a long time because of how different it was from my experience working with full-stack frameworks such as React or FastAPI.

There were four main issues that I struggled with

  1. My job has a high probability of failing for no apparent reason (a checkpointing sketch that mitigates this follows the list)
  2. My data might unexpectedly not fit into memory
  3. Running a single job takes days or more
  4. Optimizing hyper-parameters is genuinely difficult
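For the first issue, one pattern that helps is checkpointing: write each result to disk as soon as it's produced, so a failed job can be restarted without redoing finished work. Here's a minimal sketch; `process` is a placeholder for the expensive per-item work.

```python
import json
from pathlib import Path

CHECKPOINT = Path("results.jsonl")

def already_done() -> set[str]:
    """Read back ids that previous (possibly failed) runs already processed."""
    if not CHECKPOINT.exists():
        return set()
    return {json.loads(line)["id"] for line in CHECKPOINT.open()}

def process(item: dict) -> dict:
    """Placeholder for the expensive per-item work (e.g. an LLM call)."""
    return {"id": item["id"], "output": item["text"].upper()}

def run(items: list[dict]) -> None:
    done = already_done()
    with CHECKPOINT.open("a") as f:
        for item in items:
            if item["id"] in done:
                continue  # skip work a previous run already finished
            result = process(item)
            f.write(json.dumps(result) + "\n")
            f.flush()  # persist progress incrementally

if __name__ == "__main__":
    run([{"id": str(i), "text": f"example {i}"} for i in range(10)])
```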

A guide to RWKV V3

Introduction

RWKV is an alternative to the transformer architecture. It's open source and has its own paper over here. I found out about it some time back in a paper club and thought I'd write a short article about what I had learnt.

Here are some other resources which you might find useful about RWKVs

  • RWKV by Picocreator: A markdown file that was used by one of the contributors, Picocreator, to give a short presentation on the RWKV architecture.

  • RWKV in 100 lines: Covers the implementation of RWKV in 100 lines of code. Much of this article is based on the content here; I try to extend it and provide my own intuition for some of the proofs. I've also attached a Colab notebook for you if you want to play with the code.

Reinventing Gandalf

Introduction

The code for the challenge can be found here

A while ago, a company called Lakera released a challenge called Gandalf on Hacker News which took the LLM community by storm. The premise was simple: get an LLM that they had built to reveal a password. This wasn't an easy task, and many people spent days trying to crack it.

Some time after their challenge had been released, they were kind enough to release both the solution AND a rough overview of how the challenge was developed. You can check it out here. Inspired by this, I figured I'd try to reproduce it to some degree on my own in a challenge I called The Chinese Wall, built with Peter Mekhaeil for our company's annual coding competition. We will be releasing the code shortly.

Participants were asked to try to extract a password from an LLM that we provided. We also provided a Discord bot that was trained on the challenge documentation, which participants could use to ask questions.

Here's a quick snapshot of it in action

The model uses OpenAI's GPT-3.5 under the hood with the Instructor library for function calls.

Introduction

As usual, you can find the code for this specific article here

If you've ever used Google Maps, you've definitely struggled to decide where to go to eat. The UI ... frankly sucks beyond belief for an application with all the data and compute at its disposal.

Whispers In The Background

Introduction

I recently ran into two problems when trying to generate transcripts for audio files using Whisper while working with NextJS:

  1. The transcriptions were taking too long and the request would time out
  2. The files were too large, and I couldn't send them through the request body of an API route (a rough sketch of one workaround follows)
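One workaround, not necessarily the one used in this post, is to move transcription out of the request path entirely and chunk the audio so no single upload is too large. Here's a rough Python sketch assuming pydub and OpenAI's hosted Whisper endpoint; the input file name is a placeholder.

```python
import os
from openai import OpenAI          # assumes OPENAI_API_KEY is set
from pydub import AudioSegment     # requires ffmpeg to be installed

client = OpenAI()
CHUNK_MS = 10 * 60 * 1000  # 10-minute chunks keep each upload small

def transcribe(path: str) -> str:
    audio = AudioSegment.from_file(path)
    texts = []
    for start in range(0, len(audio), CHUNK_MS):
        chunk_path = f"chunk_{start}.mp3"
        # Export one slice of the audio to a small temporary file.
        audio[start:start + CHUNK_MS].export(chunk_path, format="mp3")
        with open(chunk_path, "rb") as f:
            result = client.audio.transcriptions.create(model="whisper-1", file=f)
        texts.append(result.text)
        os.remove(chunk_path)  # clean up the temporary chunk
    return " ".join(texts)

if __name__ == "__main__":
    print(transcribe("interview.mp3"))  # hypothetical input file
```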