pgbench

pgbench retry for repeatable read transactions — first (re)tries

Trying a no-patch solution for running pgbench with repeatable read transactions, using a custom script with PL/pgSQL.

In a previous post I ran pgbench on YugaByteDB at the serializable isolation level. Serializable is optimistic and requires failed transactions to be retried, but pgbench has no retry mode. A patch for this was proposed in several commitfests, but patch acceptance is a long journey in PostgreSQL:

WIP: Pgbench Errors and serialization/deadlock retries
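
The idea explored in the post is to move the retry logic server-side: wrap the work in a PL/pgSQL block that catches the serialization failure and retries, so the unpatched pgbench client never sees the error. Here is a minimal sketch of that shape, assuming the standard pgbench_accounts table created by pgbench -i; the variable names, random-value generation, and retry cap are illustrative, not the post's actual script:

```sql
-- Minimal sketch (illustrative): retry a pgbench-style update inside
-- PL/pgSQL when it hits a serialization failure (SQLSTATE 40001).
DO $$
DECLARE
  l_aid   int := 1 + floor(random() * 100000)::int;      -- random account (scale 1)
  l_delta int := (floor(random() * 10001) - 5000)::int;  -- random delta in [-5000, 5000]
  retries int := 0;
BEGIN
  LOOP
    BEGIN
      UPDATE pgbench_accounts
         SET abalance = abalance + l_delta
       WHERE aid = l_aid;
      EXIT;  -- success: leave the retry loop
    EXCEPTION WHEN serialization_failure THEN
      retries := retries + 1;
      IF retries >= 10 THEN
        RAISE;  -- give up after an arbitrary cap and surface the error
      END IF;
    END;
  END LOOP;
END $$;
```

One caveat worth stating: the exception block retries within the same transaction snapshot, so on stock PostgreSQL at repeatable read the retry may simply hit the same conflict again; presumably this is part of what the "first (re)tries" in the title hints at.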

Running pgBench on YugaByteDB 1.3

My first test of this open source distributed SQL database.

Have you heard about YugaByteDB, a distributed database with an architecture similar to Google Spanner, using PostgreSQL as its query layer?

I started following it when I heard that Bryn Llewellyn, the well-known PL/SQL and EBR product manager, had left Oracle to become their developer advocate. And YugaByteDB got more attention recently when they announced that the product license is now 100% open source.

Do you know what you are measuring with pgbench?

[figure: pgbench flamegraph]

pgbench is a benchmark application for PostgreSQL. You define parameters for the workload (read-only or read-write, volume of data, number of threads, cursor sharing, …) and measure the number of transactions per second. pgbench is used a lot when one wants to compare two alternative environments: different PostgreSQL versions, different platforms, different table designs, …
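
For reference, here is the default "tpcb-like" transaction that pgbench sends on each iteration in read-write mode, as shown in the pgbench documentation (:scale and the pgbench_* tables come from the pgbench -i initialization step):

```sql
-- Default "tpcb-like" pgbench script (from the pgbench documentation).
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
  VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
```

A typical run such as pgbench -c 8 -j 2 -T 60 then reports the transactions per second achieved by 8 client connections over 60 seconds, which is the single number this post digs into.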

However, a scientific approach should go beyond the simple correlation between the observed performance (transactions per second) and the configuration. Without a clear analysis and explanation of cause and effect, we cannot extrapolate from a single set of observations to a general recommendation. The goal of this post is to show what is behind this "transactions per second" measure.