In the previous article I covered the fundamentals of the Chat Completions API: setting up a client, maintaining conversation history, and integrating tools. That was enough to build a working conversational agent. This article goes a level deeper — into the API parameters that shape what the model returns and how it thinks.
Two parameters stand out as particularly useful in production: response_format and reasoning_effort. The first gives you control over the structure of the model’s output. The second controls how much the model reasons before responding — which turns out to matter more than you might expect once you start caring about latency and cost.
Chat Completions API details # The Chat Completions API endpoint accepts a rich set of parameters. Most have sensible defaults and you will rarely touch them, but understanding what is available saves you from reaching for workarounds that already exist in the API. The table below covers the current non-deprecated parameters from the API reference:
| Parameter | Type | Description |
|---|---|---|
| model | string | ID of the model to use |
| messages | array | Conversation history as an ordered list of messages |
| response_format | object | Output format: text, json_object, or json_schema |
| reasoning_effort | string | Reasoning intensity for reasoning models: low, medium, high |
| temperature | number | Sampling temperature from 0 to 2; higher values produce more random output |
| top_p | number | Alternative to temperature; nucleus sampling probability mass |
| max_completion_tokens | integer | Maximum tokens the model may generate in the response |
| n | integer | Number of completion choices to return |
| stream | boolean | Stream partial responses as server-sent events |
| stop | string/array | Sequences at which the API stops generating |
| presence_penalty | number | Penalises new tokens based on whether they appear in the text so far |
| frequency_penalty | number | Penalises new tokens based on their frequency in the text so far |
| tools | array | List of tools (functions) the model may call |
| tool_choice | string/object | Controls which tool the model calls |
| seed | integer | Seed for deterministic sampling |
| user | string | Unique identifier for the end user |

In this article we focus on response_format and reasoning_effort: two parameters with a direct, visible impact on production systems.
Information extraction with response_format # The response_format parameter controls how the model structures its output. The default is plain text. Setting it to json_object tells the model to return valid JSON, but gives you no control over the schema. Setting it to json_schema goes further: you provide a JSON Schema document and the model guarantees its output will conform to it. OpenAI calls this structured output.
For most of my career, integrating external intelligence into an application meant calling a rules engine, training a custom classifier, or encoding business logic that someone had painfully documented in a spreadsheet. The idea that I could describe a task in plain language and have a model respond with genuine reasoning was not something I expected to become production-ready in my working life. Then GPT happened, and it changed what backend developers need to know.
This article is the first in a series on using LLMs in Go. We start with the OpenAI Chat Completions API — the stateless, request-based interface that gives you direct control over every aspect of the conversation. By the end, you will have a working conversational agent that can call external tools to answer questions it otherwise could not.
A short introduction to ChatGPT # The path to large language models runs through a decade of incremental progress in deep learning. Early models like word2vec and GloVe learned to embed words into dense vector spaces, capturing semantic relationships between terms. The transformer architecture, introduced by Google in 2017, changed the trajectory of the field — it processes sequences in parallel using attention mechanisms that capture long-range dependencies far more effectively than recurrent networks. This architectural shift made it practical to train models on orders of magnitude more data. GPT-1 in 2018 showed that large-scale unsupervised pre-training followed by fine-tuning could match or beat purpose-built models across a range of language tasks.
Understanding what these models actually do removes a lot of the mysticism around them. An LLM is, at its core, a next-token predictor. It takes a sequence of tokens as input and outputs a probability distribution over the vocabulary for the next token. The transformer’s attention mechanism allows every token in the input to attend to every other token, building a rich contextual representation before making that prediction. Training adjusts billions of parameters to minimise prediction error across enormous text corpora. What emerges is a model with broad world knowledge encoded in its weights — not because it was taught facts directly, but because predicting text well requires internalising the structure of the world that produced that text.
ChatGPT is OpenAI’s conversational product built on the GPT model series. What set it apart from raw GPT-3 was the addition of reinforcement learning from human feedback (RLHF) — a technique that fine-tunes the base model to follow instructions and produce responses that human raters judge as helpful and safe. When ChatGPT launched in late 2022, it became one of the fastest-adopted consumer products in history. For developers, the more relevant artefact is the API behind it — specifically the Chat Completions API, which gives programmatic access to the same models powering the product.
Go 1.22 previewed range over functions behind the rangefunc experiment, and Go 1.23 made it part of the language and brought the iter package to go with it. Together they gave iterators a proper place in the language. Before that, iterating over custom data structures meant either returning slices upfront, loading everything into memory, or writing callback-based helpers that nobody could agree on naming. I have seen both approaches and neither felt right.
The core idea behind iterators is straightforward: instead of computing all values upfront and handing them back as a list, you compute each value on demand and yield it to the caller one at a time. The caller controls when to stop. This matters any time you are working with large or potentially infinite sequences.
This article walks through why iterators exist in Go, how the yield-based pattern works, what the iter package provides, and where the current limits of the feature sit.
Why do we need iterators? # The simplest case for iteration is a slice of numbers. You range over it, print each value, move on.
```go
package main

import "fmt"

func main() {
	numbers := []int{1, 2, 3, 4, 5}
	for _, i := range numbers {
		fmt.Println(i)
	}
}

// OUT:
// 1
// 2
// 3
// 4
// 5
```

That works fine until the collection gets large. If you need to generate a million numbers, you have to allocate memory for all of them before you can even start ranging.
```go
package main

import "fmt"

func main() {
	n := 1_000_000
	numbers := make([]int, n)
	for i := range numbers {
		numbers[i] = i * 2
	}
	for _, i := range numbers {
		fmt.Println(i)
	}
}

// OUT:
// 0
// 2
// 4
// ...
```

You can always add a break once you hit your threshold, but the damage is already done: the entire slice was allocated upfront. In other cases, you might not even know how many items you will need. The loop might run until a condition derived from the items themselves is met, which makes the required size of the slice not just large but unpredictable.
```go
package main

import "fmt"

func main() {
	n := 1_000_000
	numbers := make([]int, n)
	for i := range numbers {
		numbers[i] = i * 2
	}
	for _, i := range numbers {
		fmt.Println(i)
		if i > 10 {
			break
		}
	}
}

// OUT:
// 0
// 2
// 4
// 6
// 8
// 10
// 12
```

In a real application, the decision about when to stop often happens dynamically, driven by user input, a timeout, or a condition that evaluates to true before the fifth item. Allocating a million items and then breaking on the fifth is wasteful. This is exactly the problem iterators solve.
With the release of Go 1.22, the Go standard library introduced several new features. As in the articles on the previous release, we mostly concentrate on the exciting new packages and what they give us. This article starts that journey with a deeper look at the go/version package.
Lang # The first function we examine is the Lang function. It returns the cleaned, valid Go language version for a given version string. If it can't determine the actual version, because the string is invalid, it returns an empty string as a result.
Lang function
```go
func Lang(x string) string
```

As we can see from the function signature above, the function expects one argument: a string representing a Go version. The output is also a single string value: the cleaned Go language version.
Lang function examples
```go
package main

import (
	"fmt"
	"go/version"
)

func main() {
	fmt.Println(version.Lang("go1.0"))      // go1
	fmt.Println(version.Lang("go1"))        // go1
	fmt.Println(version.Lang("go1.22.4"))   // go1.22
	fmt.Println(version.Lang("go1.22.3"))   // go1.22
	fmt.Println(version.Lang("go1.22.2"))   // go1.22
	fmt.Println(version.Lang("go1.22.rc1")) //
	fmt.Println(version.Lang("go1.22rc1"))  // go1.22
	fmt.Println(version.Lang("1.22"))       //
	fmt.Println(version.Lang("wrong"))      //
	fmt.Println(version.Lang(""))           //
}
```

In the example above, we can see how the Lang function cleans up a Go version string. It strips the patch version and any release-candidate suffix, leaving the official language version names we know from past releases (and will see in future ones). When we provide an invalid or empty string, the result is also an empty string, because Lang can't determine the actual version.
One interesting point, not just for the Lang function but, as you will see, for all functions in this package: for a string to be considered a valid Go version, it needs the prefix go.
IsValid # The next function we are examining is the IsValid function. It checks whether a string holds a valid Go version and returns a boolean result telling us if the version is valid or not.
IsValid function
```go
func IsValid(x string) bool
```

As we can see from the function signature above, the function expects one argument: a string representing a Go version. The output is a bool value, which tells us whether the Go version is valid or not.
With the release of Go 1.21, the Go standard library introduced several new features. While we’ve already discussed some of them in previous articles, in this episode, we’ll dive into more advanced enhancements. Naturally, we’ll focus on the new functions designed for sorting slices, which are part of the new slices package. This article will provide a deeper look into the implementation of these three new functions and touch on benchmarking as well.
Sort # The Sort function is the first one we’d like to explore. This implementation is built upon the enhanced Pattern-defeating Quicksort, positioning it as one of the best-known unstable sorting algorithms. Don’t worry; we will discuss this “instability” aspect in this article. But first, let’s take a look at the function’s signature:
Sort function
```go
func Sort[S ~[]E, E cmp.Ordered](x S)
```

As we've seen in other articles, nearly all improvements in the Go standard library are built upon generics, a feature introduced in Go 1.18. Like those other functions, Sort expects a slice of a generic type as an argument, where each item must adhere to the Ordered constraint. The function doesn't return a new value but sorts the original slice in place. Below, you'll find some basic examples:
Sort function examples
```go
ints := []int{1, 2, 3, 5, 5, 7, 9}
slices.Sort(ints)
fmt.Println(ints)
// Output:
// [1 2 3 5 5 7 9]

ints2 := []int{9, 7, 5, 5, 3, 2, 1}
slices.Sort(ints2)
fmt.Println(ints2)
// Output:
// [1 2 3 5 5 7 9]

floats := []float64{9, 3, 5, 7, 1, 2, 5}
slices.Sort(floats)
fmt.Println(floats)
// Output:
// [1 2 3 5 5 7 9]

strings := []string{"3", "9", "2", "5", "1", "7", "5"}
slices.Sort(strings)
fmt.Println(strings)
// Output:
// [1 2 3 5 5 7 9]
```

In the example above, we can observe the result of the Sort function: all the outputs are sorted slices, arranged in ascending order. What makes this function particularly interesting is its ability to handle various data types with a single function, distinguishing it from the implementations we already have in the sort package. Now that we've examined the results, let's compare the performance benchmarks with the existing package.
Benchmark # In this section, we aim to evaluate the performance of the new function by comparing it to the already existing sort package. Below, you’ll find the benchmark test results:
As part of the new Go release, several exciting changes have been introduced to the Go ecosystem. While we’ve explored some of these changes in other articles about the maps package and the cmp package, there’s much more to discover beyond these two packages.
In this article, we’ll focus on the first part of the slices package, specifically its new search functionality. Like many other updates and newly introduced packages, this one is also built upon the foundation of generics, which were introduced in Go 1.18.
BinarySearch and BinarySearchFunc # Let’s start by exploring the first pair of functions designed for efficiently searching a target value within sorted slices. In this context, we’re referring to the well-known Binary Search algorithm, which is renowned as one of the most significant algorithms and is frequently used in coding interviews. Below, you’ll find the signatures of both of these functions:
BinarySearch function
```go
func BinarySearch[S ~[]E, E cmp.Ordered](x S, target E) (int, bool)
```

BinarySearchFunc function
```go
func BinarySearchFunc[S ~[]E, E, T any](x S, target T, cmp func(E, T) int) (int, bool)
```

Looking at the signatures of both functions, we can identify some small differences between them, and these differences serve specific purposes. The first function, BinarySearch, expects two arguments. The first argument should be a slice of sorted items, and it must adhere to the Ordered constraint. When the items are ordered, the algorithm can efficiently compare them using the Compare function from the cmp package.
On the other hand, the second function, BinarySearchFunc, is more versatile. It allows searching within slices where the items don’t necessarily conform to the Ordered constraint. This flexibility is achieved by introducing a third argument, the comparison function. This function is responsible for comparing items and determining their order. It will be called by the BinarySearchFunc itself to make comparisons.
Both functions return two values. The first value is the index of the item within the slice, and the second is a boolean value indicating whether the item was found in the slice or not. Let’s explore some examples below:
BinarySearch examples
```go
fmt.Println(slices.BinarySearch([]int{1, 3, 5, 6, 7}, 5))
// Output:
// 2 true

fmt.Println(slices.BinarySearch([]int{1, 3, 5, 6, 7}, 9))
// Output:
// 5 false

fmt.Println(slices.BinarySearch([]int{1, 3, 5, 6, 7}, -5))
// Output:
// 0 false

fmt.Println(slices.BinarySearch([]string{"1", "3", "5", "6", "7"}, "5"))
// Output:
// 2 true

fmt.Println(slices.BinarySearch([]string{"1", "3", "5", "6", "7", "8"}, "9"))
// Output:
// 6 false

fmt.Println(slices.BinarySearch([]string{"1", "3", "5", "6", "7"}, "4"))
// Output:
// 2 false
```

Take a close look at the results returned by the BinarySearch function, especially when the item doesn't exist in the slice. In our examples, we encountered four such cases where the function returned 0, 2, 5, and 6. When the requested item isn't present in the slice, the function indicates where it should be positioned if it were to be added to the slice. Since the slice is sorted, it's possible to determine the appropriate position for the item within the slice.
As the new release of Go came this summer, many of us started to look for the improvements inside its ecosystem. Many new features were introduced, including updates to the tool command to support backward and forward compatibility. New packages appeared inside the Standard Library, including maps and slices. In this article we are covering improvements introduced with the new cmp package.
The new package offers three new functions. All of them rely on generics, a feature introduced in Go version 1.18, which has opened up possibilities for many new features. The cmp package introduces functions for comparing values that satisfy the Ordered constraint.
Let’s dive into each of them.
Ordered constraint and Compare function # The constraint Ordered encompasses all types that support comparison operators for values, specifically, <, <=, >= and >. This includes all numeric types in Go, as well as strings.
Ordered Constraint
```go
type Ordered interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr |
		~float32 | ~float64 | ~string
}
```

Once we understand what the Ordered constraint includes, we can focus on the first function from the cmp package, which is the Compare function. Below, you can find its signature:
Compare Function
```go
// Compare returns
//
//	-1 if x is less than y,
//	 0 if x equals y,
//	+1 if x is greater than y.
//
// ...
func Compare[T Ordered](x, y T) int
```

The signature, along with the function description, makes it much easier to understand. The Compare function expects two arguments of the same type, compares their values, and returns a result that represents the comparison status:
- -1 if the first argument is less than the second.
- 0 if the arguments' values are equal.
- 1 if the first argument is greater than the second.

Let's put that claim to the test:
Compare numerals
```go
fmt.Println(cmp.Compare(1, 2))
// Output:
// -1

fmt.Println(cmp.Compare(1, 1))
// Output:
// 0

fmt.Println(cmp.Compare(2, 1))
// Output:
// 1
```

Compare strings
```go
fmt.Println(cmp.Compare("abc", "def"))
// Output:
// -1

fmt.Println(cmp.Compare("qwe", "qwe"))
// Output:
// 0

fmt.Println(cmp.Compare("abcde", "abcc"))
// Output:
// 1
```

Above, we can see practical examples of the Compare function for both numbers and strings. Indeed, the return values can only belong to the set {-1, 0, 1}, as defined in the description.
Function Less # In addition to the Compare function, we got another, similar function: Less. Although it's rather easy to understand what it is used for, let's check its signature:
Not too long ago, we witnessed a new release of our favorite programming language. The Go team didn’t disappoint us once again. They introduced numerous new features, including updates to the tool command to support backward and forward compatibility. As always, the standard library has received new updates, and the first one we’ll explore in this article is the new maps package.
The new package offers only five new functions (two additional ones, Keys and Values, were removed from the package), but they provide significant value. All of them rely on generics, a feature introduced in Go version 1.18, which has opened up possibilities for many new features. The maps package clearly provides new tools for Go maps. In this particular case, it introduces new functions for checking map equality, deleting items from maps, copying items into maps, and cloning maps.
Let’s dive into each of them.
Equal and EqualFunc # First, let’s examine the pair of functions used to check map equality: Equal and EqualFunc. The first one is a straightforward function that checks the equality of two provided maps as function arguments. The second one allows you to pass an additional argument that defines how you plan to examine the equality of values inside the maps. Here are their signatures:
Function Equal
```go
func Equal[M1, M2 ~map[K]V, K, V comparable](m1 M1, m2 M2) bool
```

Function EqualFunc
```go
func EqualFunc[M1 ~map[K]V1, M2 ~map[K]V2, K comparable, V1, V2 any](m1 M1, m2 M2, eq func(V1, V2) bool) bool
```

The Equal function is easier to understand. It defines two generic map types, M1 and M2, built from two further generic types, K and V. K stands for the map keys and must be comparable; V stands for the map values and must be comparable as well, since Equal compares values directly.
The EqualFunc function is slightly more complicated. First, it doesn’t assume that the values in the maps are of the same type, nor do they have to be comparable. For that reason, it introduces an additional argument, which is an equality function for the values in the maps. This way, we can compare two maps that have the same keys but not the same values, and we can define the logic for comparing if they are equal.
Simple usage of Equal function
```go
first := map[string]string{
	"key1": "value1",
}
second := map[string]string{
	"key1": "value1",
}
fmt.Println(maps.Equal(first, second))
// Output:
// true

third := map[string]string{
	"key1": "value1",
}
fourth := map[string]string{
	"key1": "wrong",
}
fmt.Println(maps.Equal(third, fourth))
// Output:
// false
```

In the example above, there are no surprises. We use four maps to test the Equal function. In the first case, two maps are equal, but in the second case, their values are not the same. The following example is also easy.
My favorite part of software development is writing tests, whether they are unit tests or integration tests. I enjoy the process immensely. There’s a certain satisfaction in creating a test case that uncovers a function’s failure. It brings me joy to discover a bug during development, knowing that I’ve fixed it before anyone encounters it in a test environment or, worse, in production. Sometimes, I stay up late just to write more tests; it’s like a hobby. I even spent around 30 minutes on my wedding day writing unit tests for my personal project, but don’t tell my wife!
The only thing that used to bother me was dealing with integration issues between multiple Microservices. How could I ensure that two Microservices, each with specific versions, wouldn’t face integration problems? How could I be certain that a new version of a Microservice didn’t break its API interface, rendering it unusable for others? This information was crucial to have before launching extensive scenarios in our end-to-end testing pipeline. Otherwise, we’d end up waiting for an hour just to receive feedback that we’d broken the JSON schema.
Then, one day in the office, I heard a rumor that we were planning to use Contract Testing. I quickly checked the first article I found, and I was amazed. It was a breakthrough.
Contract Testing # There are many excellent articles about Contract testing, but the one I like the most is from Pactflow. Contract testing ensures that two parties can communicate effectively by testing them in isolation to verify if both sides support the messages they exchange. One party, known as the Consumer, captures the communication with the other party, referred to as the Provider, and creates the Contract. This Contract serves as a specification for the expected requests from the Consumer and the responses from the Provider. Application code automatically generates Contracts, typically during the unit testing phase. Automatic creation ensures that each Contract accurately reflects the latest state of affairs.
After the Consumer publishes the Contract, the Provider can use it. In its code, likely within unit tests, the Provider conducts Contract verification and publishes the results. In both phases of Contract testing, we work solely on one side, without any actual interaction with the other party. Essentially, we are ensuring that both parties can communicate with each other within their separate pipelines. As a result, the entire process is asynchronous and independent. If either of these two phases fails, both the Consumer and Provider must collaborate to resolve integration issues. In some cases, the Consumer may need to adapt its integration code, while in others, the Provider may need to adjust its API.