Possible memory leak? #1456
FYI, the below is the output of
With pgx v4, the memory consumption seems almost constant.
Could this (#845 (comment)) be somehow related?
Maybe, but this is v5, which has some major rewrites.
Any updates on this?
Not really enough to go on. Row descriptions are handled differently in v5. Column names are converted to strings instead of potentially pinning the entire read buffer in memory. I could see that having different memory usage patterns, but nothing that should be leaking. Though for that matter, those pprof results indicate allocations, but don't necessarily indicate a leak. It might just be the difference from allocating 100 strings per query for the column names.
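For illustration, here is a minimal sketch (not pgx code; all names are invented) of the buffer-pinning difference described above: holding a sub-slice of a large read buffer keeps the entire backing array alive, while converting the bytes to a string copies only those bytes, at the cost of one allocation per column name.

package main

import "fmt"

type rowDescription struct {
	nameBytes []byte // sub-slice: pins the entire read buffer it came from
	nameStr   string // copy: the read buffer can be garbage collected
}

func main() {
	readBuf := make([]byte, 1<<20) // pretend this is a 1 MiB wire read buffer
	copy(readBuf, "id")

	d := rowDescription{
		nameBytes: readBuf[:2],         // retains all 1 MiB while d is reachable
		nameStr:   string(readBuf[:2]), // allocates only a 2-byte string
	}
	fmt.Println(string(d.nameBytes), d.nameStr)
}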
I suspect I've run into a version of this bug: cached statements are never removed from the preparedStatements map. I believe the problem is that entries are never deleted from the map when statements are deallocated. Example program that reproduces the bug by generating unique queries:

package main
import (
	"context"
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"strings"

	"github.com/evanj/hacks/postgrestest"
	"github.com/jackc/pgx/v5"
)

func main() {
	instance, err := postgrestest.NewInstance()
	if err != nil {
		panic(err)
	}
	defer instance.Close()
	ctx := context.Background()

	conn, err := pgx.Connect(ctx, instance.URL())
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Build a ~256 KiB comment so each cached statement is large.
	const commentLength = 256 * 1024
	bigComment := "\n/*"
	for len(bigComment) < commentLength {
		bigComment += fmt.Sprintf("\nlen %d abcdefghijklmnopqrstuvwxyz", len(bigComment))
	}
	bigComment += "*/"

	for i := int64(0); i < 100000; i++ {
		// Each query string is unique, so each one is a new cache entry.
		query := fmt.Sprintf("select 'unique str %d'", i) + bigComment
		var outputStr string
		err = conn.QueryRow(ctx, query).Scan(&outputStr)
		if err != nil {
			panic(err)
		}
		if !strings.HasPrefix(outputStr, "unique str ") {
			panic(outputStr)
		}
		if i%5000 == 0 {
			var stats runtime.MemStats
			runtime.ReadMemStats(&stats)
			fmt.Printf("iteration=%d num_gc=%d heap_alloc=%d heap_objects=%d heap_sys=%d\n",
				i, stats.NumGC, stats.HeapAlloc, stats.HeapObjects, stats.HeapSys)
		}
		if i == 5000 {
			f, err := os.Create("heap.pprof")
			if err != nil {
				panic(err)
			}
			err = pprof.WriteHeapProfile(f)
			if err != nil {
				panic(err)
			}
			err = f.Close()
			if err != nil {
				panic(err)
			}
			fmt.Println(" wrote heap profile")
		}
	}
}
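Because each iteration appends a unique string literal to the same ~256 KiB comment, every query text is a distinct statement cache key. If cached statements are never evicted, each iteration should retain another ~256 KiB entry, which would match steadily climbing heap_alloc and heap_sys numbers in the stats the loop prints.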
Previously, items were never removed from the preparedStatements map. This meant workloads that send a large number of unique queries could run out of memory. Delete items from the map when sending the deallocate command to Postgres. Add a test to verify this works. Fixes #1456
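As a rough sketch of the failure mode and the fix (simplified, with invented names; not the actual pgx internals): a cache keyed by query text has to delete entries when it deallocates a statement, otherwise a stream of unique queries grows the map without bound.

package main

import "fmt"

type statementCache struct {
	preparedStatements map[string]string // query text -> server statement name
}

func (c *statementCache) prepare(query, name string) {
	c.preparedStatements[query] = name
}

// Before the fix: DEALLOCATE was sent to the server, but the map entry stayed.
// After the fix: the entry is removed when the deallocate command is sent.
func (c *statementCache) deallocate(query string) {
	delete(c.preparedStatements, query)
	// ... send "DEALLOCATE <name>" to the server here ...
}

func main() {
	c := &statementCache{preparedStatements: map[string]string{}}
	for i := 0; i < 3; i++ {
		q := fmt.Sprintf("select 'unique str %d'", i)
		c.prepare(q, fmt.Sprintf("stmt_%d", i))
		c.deallocate(q) // without the delete, len grows without bound
	}
	fmt.Println("cached statements:", len(c.preparedStatements)) // 0 with the fix
}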
pgx version: v5.2.0
PostgreSQL version: 13.9
I have an application where I repeatedly query a table in PostgreSQL. I noticed the amount of RAM consumed by the application increasing linearly over time.
Here is pprof's result for top:
Here is the PNG of the heap:

The number of records queried in each iteration is around 20-25, and each record contains around 100 columns.
Here is pprof's result for top, 30 minutes after the previous result:
And the PNG of the heap:
