In this cookbook, we'll work through a Text2SQL use case where we start from scratch, without a clean
dataset of questions, SQL queries, or expected responses. Although eval datasets are common in academic settings, they are
rarely available in practice. Instead, we'll bootstrap a dataset from a few simple handwritten questions and
use an LLM to generate additional samples grounded in the underlying SQL data.
Along the way, we'll cover the key components of the eval process: implementing the text2sql task, writing scoring functions, and building and iterating on an eval dataset.
Before starting, please make sure that you have a Braintrust account. If you do not, please sign up.
The next few commands will install some libraries and include some helper code for the text2sql application. Feel free to copy/paste/tweak/reuse this code in your own tools.
Now that we have the basic data in place, let's implement the text2sql logic. Don't overcomplicate it at the start. We can always improve its implementation later!
import os
from textwrap import dedent

import braintrust
import openai

client = braintrust.wrap_openai(
    openai.AsyncClient(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://api.braintrust.dev/v1/proxy",  # This is optional and allows us to cache responses
    )
)

columns = conn.query("DESCRIBE nba").to_df().to_dict(orient="records")

TASK_MODEL = "gpt-4o"


@braintrust.traced
async def generate_query(input):
    response = await client.chat.completions.create(
        model=TASK_MODEL,
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": dedent(f"""\
                    You are a SQL expert, and you are given a single table named nba with the following columns:
                    {", ".join(column["column_name"] + ": " + column["column_type"] for column in columns)}

                    Write a SQL query corresponding to the user's request. Return just the query text, with no
                    formatting (backticks, markdown, etc.).
                """),
            },
            {
                "role": "user",
                "content": input,
            },
        ],
    )
    return response.choices[0].message.content


query = await generate_query("Who won the most games?")
print(query)
SELECT Team, COUNT(*) AS Wins
FROM nba
WHERE WINorLOSS = 'W'
GROUP BY Team
ORDER BY Wins DESC
LIMIT 1;
Let's handwrite a few examples to bootstrap the dataset. Handwriting both the questions and their SQL queries/outputs would be tedious and brittle.
Instead, we'll write just the questions and evaluate the outputs without an expected output.
questions = [
    "Which team won the most games?",
    "Which team won the most games in 2015?",
    "Who led the league in 3 point shots?",
    "Which team had the biggest difference in records across two consecutive years?",
    "What is the average number of free throws per year?",
]
At this point, there's not a lot we can score, but we can at least check if the SQL query is valid. If we generate an invalid query, the error field will be non-empty.
async def no_error(output):
    return output["error"] is None
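To run the eval, we use Braintrust's Eval function. The cell below is a rough sketch of that call rather than a verbatim implementation: the text2sql task function here is hypothetical (it calls generate_query and then executes the query against DuckDB, returning a dict with query, results, and error keys), and PROJECT_NAME matches the project name visible in the experiment URLs.

import duckdb
from braintrust import Eval

PROJECT_NAME = "LLM Eval for Text2SQL"  # matches the project name in the experiment URLs


# Hypothetical task function: generate a query, then try to execute it against DuckDB.
# The {"query", "results", "error"} shape is what the no_error scorer expects.
@braintrust.traced
async def text2sql(question):
    query = await generate_query(question)
    results = None
    error = None
    try:
        results = conn.query(query).to_df().to_dict(orient="records")
    except duckdb.Error as e:
        error = str(e)
    return {"query": query, "results": results, "error": error}


# Run the eval over the handwritten questions, scored only by no_error for now.
await Eval(
    PROJECT_NAME,
    experiment_name="Initial dataset",
    data=[{"input": q} for q in questions],
    task=text2sql,
    scores=[no_error],
)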
=========================SUMMARY=========================
60.00% 'no_error' score

See results for Initial dataset at https://www.braintrust.dev/app/braintrustdata.com/p/LLM%20Eval%20for%20Text2SQL/experiments/Initial%20dataset
EvalResultWithSummary(...)
OK! It looks like 3/5 of our queries are valid. Let's take a closer look in the Braintrust UI.
Now that we've run the initial eval, it looks like two of the results are correct, one is incorrect, and two queries fail with SQL errors.
To best utilize these results:

1. Let's capture the good data into a dataset. Since our eval pipeline did the hard work of generating a reference query and results, we can save those now and make sure that future changes don't regress them (one way to do this programmatically is sketched right after this list).
2. The incorrect query didn't get the date format right. That would probably be improved by showing the model a sample of the data.
3. There are two binder errors, which may also have to do with the model not understanding the data format.
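The simplest way to capture the good rows is to add them to a dataset from the experiment UI. If you'd rather do it in code, here's a rough sketch using init_dataset; the input and expected values are illustrative (they mirror the "Which team won the most games?" example), and the dataset name "Golden data" matches the one we load next.

from braintrust import init_dataset

# Rough sketch of saving a good example programmatically. In practice you'd copy the
# query and results from an experiment row that produced a valid, correct answer.
golden = init_dataset(PROJECT_NAME, "Golden data")
golden.insert(
    input="Which team won the most games?",
    expected={
        "error": None,
        "query": "SELECT Team, COUNT(*) AS Wins FROM nba WHERE WINorLOSS = 'W' GROUP BY Team ORDER BY Wins DESC LIMIT 1;",
        "results": [{"Team": "GSW", "Wins": 265}],
    },
)
golden.flush()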
Let's start by reworking our data function to pull the golden data we're storing in Braintrust and extend it with the handwritten questions. Since
there may be some overlap, we automatically exclude any questions that are already in the dataset.
from braintrust import init_dataset


def load_data():
    golden_data = init_dataset(PROJECT_NAME, "Golden data")
    golden_questions = set(d["input"] for d in golden_data)
    return list(golden_data) + [{"input": q} for q in questions if q not in golden_questions]


load_data()[0]
{'id': '614006b1-a8b1-40c2-b700-3634c4fb14f5',
 '_xact_id': '1000193117554478505',
 'created': '2024-05-29 16:23:59.989+00',
 'project_id': 'b8d44d19-7999-49b0-911b-1f0aaafc5bac',
 'dataset_id': 'a6c337e3-f7f7-4a96-8529-05cb172f847e',
 'input': 'Which team won the most games?',
 'expected': {'error': None,
              'query': "SELECT Team, COUNT(*) AS Wins\nFROM nba\nWHERE WINorLOSS = 'W'\nGROUP BY Team\nORDER BY Wins DESC\nLIMIT 1;",
              'results': [{'Team': 'GSW', 'Wins': 265}]},
 'metadata': {},
 'tags': [],
 'span_id': '614006b1-a8b1-40c2-b700-3634c4fb14f5',
 'root_span_id': '614006b1-a8b1-40c2-b700-3634c4fb14f5'}
Now, let's tweak the prompt to include a sample of each row.
samples = conn.query("SELECT * FROM nba LIMIT 1").to_df().to_dict(orient="records")[0]


@braintrust.traced
async def generate_query(input):
    response = await client.chat.completions.create(
        model=TASK_MODEL,
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": dedent(f"""\
                    You are a SQL expert, and you are given a single table named nba with the following columns:

                    Column | Type | Example
                    -------|------|--------
                    {"\n".join(f"{column['column_name']} | {column['column_type']} | {samples[column['column_name']]}" for column in columns)}

                    Write a DuckDB SQL query corresponding to the user's request. Return just the query text, with no
                    formatting (backticks, markdown, etc.).
                """),
            },
            {
                "role": "user",
                "content": input,
            },
        ],
    )
    return response.choices[0].message.content


print(await generate_query("Which team won the most games in 2015?"))
SELECT Team, COUNT(*) AS Wins
FROM nba
WHERE WINorLOSS = 'W' AND Date LIKE '%/15'
GROUP BY Team
ORDER BY Wins DESC
LIMIT 1;
Looking much better! Finally, let's add a scoring function that compares the results, if they exist, with the expected results.
from autoevals import JSONDiff, Sql


def extract_values(results):
    return [list(result.values()) for result in results]


def correct_result(output, expected):
    if expected is None or expected.get("results") is None or output.get("results") is None:
        return None
    return JSONDiff()(output=extract_values(output["results"]), expected=extract_values(expected["results"])).score


def correct_sql(input, output, expected):
    if expected is None or expected.get("query") is None or output.get("query") is None:
        return None
    return Sql()(input=input, output=output["query"], expected=expected["query"]).score
Great. Let's plug these pieces together and run an eval!
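Here's a sketch of the Eval call, reusing the assumed text2sql task from the earlier sketch together with the new data loader and scorers:

await Eval(
    PROJECT_NAME,
    experiment_name="With samples",
    data=load_data,
    task=text2sql,
    scores=[no_error, correct_result, correct_sql],
)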
Experiment With samples is running at https://www.braintrust.dev/app/braintrustdata.com/p/LLM%20Eval%20for%20Text2SQL/experiments/With%20samples
LLM Eval for Text2SQL [experiment_name=With samples] (data): 5it [00:00, 17848.10it/s]
Now that we have a basic flow in place, let's generate some data. We're going to have a model write interesting SQL queries over the data and then describe each query as the question it answers.
This is slightly more robust than going in the other direction (generating SQL from a question), because a model can describe a query more accurately than it can generate one from scratch, and we can execute each generated query against the actual data to produce ground-truth expected results.
import json

from pydantic import BaseModel


class Question(BaseModel):
    sql: str
    question: str


class Questions(BaseModel):
    questions: list[Question]


logger = braintrust.init_logger("question generator")

response = await client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": dedent(f"""\
                You are a SQL expert, and you are given a single table named nba with the following columns:

                Column | Type | Example
                -------|------|--------
                {"\n".join(f"{column['column_name']} | {column['column_type']} | {samples[column['column_name']]}" for column in columns)}

                Generate SQL queries that would be interesting to ask about this table. Return the SQL query as a string, as well as the
                question that the query answers.
            """),
        }
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "generate_questions",
                "description": "Generate SQL queries that would be interesting to ask about this table.",
                "parameters": Questions.model_json_schema(),
            },
        }
    ],
    tool_choice={"type": "function", "function": {"name": "generate_questions"}},
)

generated_questions = json.loads(response.choices[0].message.tool_calls[0].function.arguments)["questions"]
generated_questions[0]
{'sql': "SELECT Team, COUNT(*) as TotalGames, SUM(CASE WHEN WINorLOSS = 'W' THEN 1 ELSE 0 END) as Wins, SUM(CASE WHEN WINorLOSS = 'L' THEN 1 ELSE 0 END) as Losses FROM nba GROUP BY Team;",
 'question': 'What is the total number of games played, wins, and losses for each team?'}
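To turn these into dataset records, we execute each generated query against the data to produce expected results, skipping any queries that fail. The snippet below is a rough sketch of that step (the record shape and the "Generated" category label are assumptions); running it is what produces the "Query failed ... Skipping..." messages shown next.

import duckdb

# Execute each generated query to build expected results; skip queries that fail to run.
generated_dataset = []
for q in generated_questions:
    try:
        results = conn.query(q["sql"]).to_df().to_dict(orient="records")
        generated_dataset.append(
            {
                "input": q["question"],
                "expected": {"error": None, "query": q["sql"], "results": results},
                "metadata": {"category": "Generated"},
            }
        )
    except duckdb.Error as e:
        print(f"Query failed: {q['sql']}", e)
        print("Skipping...")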
Query failed: SELECT Team, AVG(FieldGoals.) as AvgFieldGoalPercentage, AVG(X3PointShots.) as Avg3PointPercentage, AVG(FreeThrows.) as AvgFreeThrowPercentage FROM nba GROUP BY Team; Parser Error: syntax error at or near ")"
Skipping...
Query failed: SELECT Team, AVG(Opp.FieldGoals.) as AvgOppFieldGoalPercentage, AVG(Opp.3PointShots.) as AvgOpp3PointPercentage, AVG(Opp.FreeThrows.) as AvgOppFreeThrowPercentage FROM nba GROUP BY Team; Parser Error: syntax error at or near ")"
Skipping...
Awesome, let's update our dataset with the new data.
def load_data():
    golden_data = init_dataset(PROJECT_NAME, "Golden data")
    golden_questions = set(d["input"] for d in golden_data)
    return (
        [{**x, "metadata": {"category": "Golden data"}} for x in golden_data]
        + [
            {"input": q, "metadata": {"category": "Handwritten question"}}
            for q in questions
            if q not in golden_questions
        ]
        + [x for x in generated_dataset if x["input"] not in golden_questions]
    )
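Then we re-run the eval under a new experiment name; again, a sketch with the same assumed task and scorers:

await Eval(
    PROJECT_NAME,
    experiment_name="Generated data",
    data=load_data,
    task=text2sql,
    scores=[no_error, correct_result, correct_sql],
)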
Experiment Generated data is running at https://www.braintrust.dev/app/braintrustdata.com/p/LLM%20Eval%20for%20Text2SQL/experiments/Generated%20data
LLM Eval for Text2SQL [experiment_name=Generated data] (data): 13it [00:00, 36916.69it/s]
=========================SUMMARY=========================
Generated data compared to With samples:
84.62% (-) 'no_error' score (0 improvements, 0 regressions)
69.72% (-) 'correct_result' score (0 improvements, 0 regressions)
63.64% (-) 'correct_sql' score (0 improvements, 0 regressions)
4.23s (-155.93%) 'duration' (0 improvements, 0 regressions)

See results for Generated data at https://www.braintrust.dev/app/braintrustdata.com/p/LLM%20Eval%20for%20Text2SQL/experiments/Generated%20data
EvalResultWithSummary(...)
Amazing! Now we have a rich dataset to work with and some failures to debug. From here, you could investigate whether some of the generated data needs improvement, tweak the prompt to improve accuracy,
or try something more adventurous, like feeding the errors back to the model and having it iterate toward a better query. Most importantly, we now have a solid workflow in place for iterating on both the application and the dataset.
In this cookbook, we walked through the process of building a dataset for a text2sql application. We started with a few handwritten examples and iterated on the dataset by using an LLM to generate more. We used the eval framework to track our progress and iterated on both the prompt and the dataset to improve the results.