The focus of this document is profiling and troubleshooting a running Go server.
See also the official documentation on heap profiles.
Go profiles are collections of stack traces taken from a running Go program that we want to analyze. There are different kinds of Go profiles, as will be explained later.
You need to add the following code to your executable:

```go
import (
	_ "net/http/pprof"

	"net/http"
	"os"
)

func main() {
	// start the pprof HTTP server only when profiling is requested
	if os.Getenv("LISTEN_PPROF") != "" {
		go func() {
			http.ListenAndServe("localhost:6060", nil)
		}()
	}
}
```

In more detail:
- Profiling is enabled only if the `LISTEN_PPROF` environment variable is set: a goroutine is started that runs a web server on port 6060 (`http.ListenAndServe`) in the background.
- Note that `nil` is passed for the handler parameter that would normally receive a function for serving the HTTP requests. This means the default handler, `http.DefaultServeMux`, is used.
- Important detail: `import _ "net/http/pprof"` imports the `pprof` package for its side effects only; this registers handlers on `DefaultServeMux` that tell it how to serve profiling requests.
Let's look at the source code (link). When the package is imported, its `init` function runs. It looks as follows:
```go
func init() {
	prefix := ""
	if godebug.New("httpmuxgo121").Value() != "1" {
		prefix = "GET "
	}
	http.HandleFunc(prefix+"/debug/pprof/", Index)
	http.HandleFunc(prefix+"/debug/pprof/cmdline", Cmdline)
	http.HandleFunc(prefix+"/debug/pprof/profile", Profile)
	http.HandleFunc(prefix+"/debug/pprof/symbol", Symbol)
	http.HandleFunc(prefix+"/debug/pprof/trace", Trace)
}
```

CPU profiling is served by `http.HandleFunc(prefix+"/debug/pprof/profile", Profile)`.
However, the other handlers are interesting too:

- `/debug/pprof/cmdline` (access with `curl http://localhost:6060/debug/pprof/cmdline`) returns the command line arguments of the running process, separated by NUL bytes.
- `/debug/pprof/` (access with `curl http://localhost:6060/debug/pprof/`) returns an HTML page that lists all available kinds of profiles, each with a description that sometimes rivals the official documentation in clarity!
- Each profile name is the path segment that comes next in the URL.

The following profiles return information on the current state at the time of the HTTP request. The URL parameter `debug=1` tells the handler to return text-formatted output, which is easier to read than the default binary output (the binary format is meant for a tool called `pprof`).
| Profile name | Curl command | Explanation |
|---|---|---|
| goroutine | `curl 'http://localhost:6060/debug/pprof/goroutine?debug=1'` | The stack traces of all running goroutines |
| mutex | `curl 'http://localhost:6060/debug/pprof/mutex?debug=1'` | Stack traces of goroutines that currently hold a mutex or are waiting on a synchronization primitive |
Other profiles return information accumulated over the whole run of the process, but you can request information covering only the events of the past N seconds by adding a `seconds=N` parameter to the URL!
The `seconds=N` parameter works for: allocs, block, goroutine, heap, mutex, threadcreate (these are also called cumulative profiles).
| Profile name | Curl command | Explanation |
|---|---|---|
| threadcreate | `curl 'http://localhost:6060/debug/pprof/threadcreate?seconds=3&debug=1'` | The stack traces that led to the creation of new OS threads during the last three seconds |
| allocs | `curl 'http://localhost:6060/debug/pprof/allocs?seconds=3&debug=1'` | The stack traces of goroutines that performed a memory allocation during the last three seconds |
| heap | `curl 'http://localhost:6060/debug/pprof/heap?seconds=3&debug=1'` | A subset of `allocs`: the stack traces of allocations whose objects are still live are returned here, for the last three seconds |
| block | `curl 'http://localhost:6060/debug/pprof/block?seconds=3&debug=1'` | A superset of `mutex`: the stack traces of calls that ended up blocking are returned here, for the last three seconds |
The following one-line script gathers results into the file prof.log; every second you get a new entry in the file. This script uses the goroutine profile, which lists all running goroutines.

```shell
rm prof.log; while true; do date | tee -a prof.log; curl 'http://localhost:6060/debug/pprof/goroutine?debug=1' | tee -a prof.log; sleep 1s; done
```

For cumulative profiles (like `threadcreate`) you might want to add the parameter that covers the period of the last second:

```shell
rm prof.log; while true; do date | tee -a prof.log; curl 'http://localhost:6060/debug/pprof/threadcreate?debug=1&seconds=1' | tee -a prof.log; sleep 1s; done
```

The first thing you want to do: check that the number of goroutines does not grow without bounds. Leaking goroutines is very bad for performance and should be fixed as a first priority.
The simplest way is as follows:

```shell
grep -B 1 'goroutine profile:' prof.log
```

This gives you something like this:

```
--
Wed Dec 31 13:49:53 IST 2025
goroutine profile: total 76
--
Wed Dec 31 13:49:54 IST 2025
goroutine profile: total 76
--
Wed Dec 31 13:49:55 IST 2025
goroutine profile: total 76
```
In this repository, build the program parseprof.go (source code link):

```shell
go build -o parseprof
```

Run the program to create an HTML file for the call graph. For each node in the call graph, the children of the node are sorted by the number of occurrences of the child node.

```shell
./parseprof -in prof.log -out prof.html
```

Display the resulting HTML file in a web browser.
Instead of gathering the profile responses yourself, you can use a standard tool. The following displays the top functions that performed a memory allocation:

```shell
go tool pprof -top http://localhost:6060/debug/pprof/heap
```
Instead of gathering data from a running server, you can record the events between calls to `pprof.StartCPUProfile` and `pprof.StopCPUProfile` and examine the resulting file with `pprof` (this is great for looking at isolated problems, in unit tests or batch processes).
How to do that is explained in great detail here:
Julia Evans also wrote an interesting explanation of pprof, as I found out recently.