Prague PostgreSQL Developer Day 2026
This workshop is an introduction to hacking on Postgres code, meant to help people with their first steps. It is intended for those who are interested in contributing to Postgres directly, or who plan to work on extensions, Postgres forks, and the like. If you don't plan to work on Postgres internals and write C code, this workshop may not be a good fit for you.
Would you like to write queries that perform fast and deliver results on time? During this workshop, you will learn that query optimization is not a dark art practiced by a small group of sorcerers. Any motivated professional can learn to write efficient queries from the get-go and capably optimize existing queries. You will learn to look at the process of writing a query from the database engine’s point of view and know how to think like the database optimizer.
In this hands-on, 3-hour workshop, participants will learn how to deploy and manage PostgreSQL in Kubernetes using CloudNativePG (CNPG), the open source operator in the CNCF sandbox. The workshop will walk through the main features to manage a PostgreSQL cluster, from deploying the first CNPG cluster to managing configuration, databases, and roles in a declarative way, to enabling continuous backups for recovery from a disaster.
Attendees will manage the PostgreSQL cluster by applying changes from the YAML manifest definition, demonstrating the declarative nature of the operator. We will cover basic DBA tasks, from database initialization, to declarative SQL execution and importing databases from other PostgreSQL sources, to reading the aggregated logs during investigations. We will also show the benefits of using the CNPG operator for taking and restoring backups, and its self-healing capabilities in case of incidents. This session is ideal for DevOps engineers, DBAs, and developers looking to operationalize PostgreSQL in Kubernetes.
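As a taste of the declarative style (names and sizes here are hypothetical, not taken from the workshop material), a minimal CNPG Cluster manifest might look like:

```yaml
# hypothetical example; apply with: kubectl apply -f cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-workshop
spec:
  instances: 3          # one primary plus two replicas
  storage:
    size: 5Gi
  managed:
    roles:
      - name: app_user  # roles can be managed declaratively too
        ensure: present
        login: true
```

Changing `instances` or the role list and re-applying the manifest is how the operator is driven: it reconciles the running cluster toward the declared state.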
PgBouncer is widely used in PostgreSQL environments. Like any connection pooler, it has clear strengths and limits, and it has been applied across a range of production patterns. Recent releases added support for protocol‑level prepared statements in transaction and statement pooling; in 2025 the project also made progress toward a multithreaded design.
In this half‑day workshop we examine how PgBouncer’s current single‑threaded event loop works and outline the proposed multithreaded approach. We’ll cover practical and experimental use cases and the configuration edges that matter in production, like scaling across cores with multiple processes on the same port via so_reuseport, peering, and other interesting solutions. Attendees receive runnable examples for the key use cases as well as a commented list of online resources.
Key takeaways:
* Know the trade‑offs of this connection pooler
* Prepared statements can work in transaction/statement pooling
* PgBouncer is single‑threaded today; scale across cores with multiple processes
* PgBouncer can help with HA/failover — but not alone
* A multithreaded architecture is on the horizon
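As an illustration of the multi-process pattern above (database names and limits are placeholder values, not recommendations), a pgbouncer.ini fragment might look like:

```ini
[databases]
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
; protocol-level prepared statement support in transaction pooling (1.21+)
max_prepared_statements = 200
; several pgbouncer processes can bind the same port; the kernel
; distributes incoming connections across them
so_reuseport = 1
```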
In this hands-on workshop, attendees will learn how to set up and operate pgwatch, a powerful open-source monitoring system designed specifically for PostgreSQL. Starting from a clean environment, we will go step-by-step through deploying pgwatch, connecting monitored databases, and exploring built-in Grafana dashboards. Participants will also learn best practices for managing pgwatch efficiently in production environments—covering configuration, data retention, and performance considerations.
This talk introduces the key access paths for B-tree indexes in PostgreSQL. Based on over ten years of experience with Oracle’s index skip scan, I will highlight common problems it can cause and show practical ways to avoid them in PostgreSQL. I will briefly demo DB Booster, a tool that automatically finds missing indexes, and discuss the future of index tuning in relational databases—self-tuning engines and PostgreSQL’s place in that shift.
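To make the access path concrete, here is a sketch with invented table and index names: the classic skip-scan situation is a multicolumn B-tree index whose leading column is missing from the query's predicate.

```sql
-- hypothetical schema for illustration
CREATE INDEX orders_region_created_idx ON orders (region, created_at);

-- the predicate omits the leading column "region"; a skip scan
-- probes the index once per distinct region value instead of
-- falling back to a full scan
EXPLAIN
SELECT *
FROM orders
WHERE created_at >= now() - interval '1 day';
```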
Modern applications often rely on message queues - for background jobs, data pipelines, notifications, and event-driven architectures. Using an external system like Kafka, Redis, or RabbitMQ increases operational complexity and introduces new failure modes. All of this can be avoided by keeping the message queue in the database.
A quick internet search shows that developers commonly try to engineer a database queue around SELECT … FOR UPDATE SKIP LOCKED (available since PostgreSQL 9.5). This approach works reasonably well under light load, and falls apart spectacularly if subscribers can’t keep up with the publishing rate. PostgreSQL can do better - and in fact, it already has. PgQ is a PostgreSQL extension that provides a generic, high-performance lockless queue with simple SQL.
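The pattern in question usually looks something like this sketch (table and column names are invented for illustration):

```sql
-- each worker claims one pending job; rows locked by other
-- workers are skipped rather than waited on
WITH next_job AS (
    SELECT id
    FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'running'
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id;
```

Under light load this works; at high fan-out every worker rescans the same hot pages, which is one of the failure modes this talk examines.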
In this talk, we start with why common SELECT … FOR UPDATE SKIP LOCKED approaches fall apart under load, and how PgQ quietly solved those problems a couple of decades ago. Then we take a deep look at PgQ internals: snapshot-based event reads, transaction-ordered delivery, and how PgQ gets away with a single index while achieving high throughput and consistency. Finally, we will discuss practical patterns for running PgQ on managed PostgreSQL services where the extension is typically not available.
https://github.com/pgq/pgq/
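For a flavor of the SQL API (queue and consumer names are made up; real batch ids come back from pgq.next_batch at run time):

```sql
-- producer side
SELECT pgq.create_queue('jobs');
SELECT pgq.insert_event('jobs', 'job.created', '{"id": 42}');

-- consumer side: events are delivered in transaction-ordered batches
SELECT pgq.register_consumer('jobs', 'worker1');
SELECT pgq.next_batch('jobs', 'worker1');   -- returns a batch id, or NULL if nothing is ready
SELECT ev_id, ev_type, ev_data
FROM pgq.get_batch_events(1);               -- 1 stands in for the batch id returned above
SELECT pgq.finish_batch(1);
```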
Large PostgreSQL tables never shrink on their own and deleting data at scale can easily cause more harm than good.
This talk focuses on how to safely clean up data in production without blocking workloads or causing vacuum storms.
We’ll explore real cleanup patterns, from partition drops and rebuild-and-reattach approaches to cursor- and CTE-based batched deletions, each designed to minimize lock contention and replication lag.
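One of the batched-deletion shapes, sketched with an invented table: delete a bounded slice per transaction so locks and WAL bursts stay small, and loop until no rows remain.

```sql
-- repeat until zero rows are deleted; each pass is a short transaction
DELETE FROM events
WHERE ctid IN (
    SELECT ctid
    FROM events
    WHERE created_at < now() - interval '90 days'
    LIMIT 5000
);
```

Pausing between batches gives autovacuum and replicas room to keep up.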
We’ll also examine what actually happens inside PostgreSQL when data is deleted: how tuples become dead, how autovacuum reclaims space, and why transaction age and WAL behavior matter for performance.
By the end, attendees will understand how to plan, execute, and automate cleanup in large, high-concurrency PostgreSQL environments safely, continuously, and with full awareness of the engine’s internals.
Everyone knows how to prevent basic SQL injection, but modern attackers have moved far beyond textbook exploits. In high-traffic PostgreSQL deployments, subtle misconfigurations and overlooked features can open doors to far more sophisticated attacks.
This talk uncovers the next generation of database threats that rarely make it into security checklists. We’ll examine:
* Privilege Escalation via Extensions and Foreign Data Wrappers: how seemingly harmless extensions or FDWs can leak credentials or access external systems.
* Timing and Side-Channel Attacks: extracting secrets by measuring query latency and caching behavior.
* Abusing Logical Replication and LISTEN/NOTIFY: stealthy data exfiltration channels hidden in plain sight.
* Role Inheritance & Row-Level Security Pitfalls: ways attackers exploit complex permission hierarchies.
Attendees will learn how to recognize these attack surfaces, configure PostgreSQL securely, and implement defense-in-depth strategies such as strict role design, immutable infrastructure, and continuous auditing.
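As a small example of the strict role design mentioned above (database, schema, and role names are placeholders):

```sql
-- stop ordinary users from creating objects or connecting by default
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE appdb FROM PUBLIC;

-- a narrow, non-login role granted only what it needs
CREATE ROLE app_reader NOLOGIN;
GRANT USAGE ON SCHEMA app TO app_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA app
    GRANT SELECT ON TABLES TO app_reader;
```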
Whether you’re a DBA, developer, or security engineer, this session will challenge the assumption that SQL injection is the only real database risk and provide actionable steps to harden your PostgreSQL environment against today’s most overlooked threats.
Upgrading PostgreSQL in production is rarely about the tools; it’s about timing, trust, and everything that can break in between.
At Fresha we had quite a few Postgres 12 databases, streaming changes through Debezium into Kafka, serving live traffic around the clock. With end-of-life approaching, “just run pg_upgrade” wasn’t an option.
We built a blue-green upgrade process based on logical replication, sequence offsets, connector handover, and PgBouncer choreography: a way to move clusters forward without downtime or broken streams.
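The logical-replication leg of such a blue-green process can be sketched like this (object names are invented; the sequence step is needed because logical replication does not carry sequence values):

```sql
-- on the old (blue) cluster
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- on the new (green) cluster
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=blue-primary dbname=app user=replicator'
    PUBLICATION upgrade_pub;

-- before cutover: push sequences on green safely past blue's values
SELECT setval('orders_id_seq',
              (SELECT max(id) FROM orders) + 100000);
```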
This talk shares how we made it repeatable: YAML configs, dry-run rehearsals, reversible scripts, and a few hard lessons learned along the way.
When the switch finally happened, no one noticed. That’s how you know it worked.
Postgres adds about 180 features and changes every year, yet it is missing some major ones. This talk explains what those features are, and why they have not been implemented. The features include sharding, TDE, global indexes, and multi-master replication.
The story of a database engineer coming to the world of strongly typed programming languages: how I found my favorite way of building Go services on top of a PostgreSQL database. An introduction to sqlc: what's great about it, what's not ideal and how to fix that, and where it fails completely.
The session provides a walkthrough of key TimescaleDB features used for IoT data collection and visualization. It explores the functionality and benefits of continuous aggregates (CAGGs), data compression, and data retention - highlighting how these features relate to each other and how to maintain them effectively.
The talk also compares approaches tailored to different data characteristics (dense vs. sparse datasets) and discusses how these choices impact downstream tools such as Grafana dashboards.
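For orientation, a continuous aggregate plus its refresh policy might be defined like this (hypothetical hypertable and columns):

```sql
CREATE MATERIALIZED VIEW sensor_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM sensor_data
GROUP BY bucket, device_id;

-- keep the aggregate refreshed on a schedule
SELECT add_continuous_aggregate_policy('sensor_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```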
Have you ever applied PostgreSQL tuning advice only to see no improvement—or made things worse? While generic PostgreSQL wisdom is valuable, the complexity of PostgreSQL makes catch-all solutions underperform in unexpected ways.
I will share examples where one PostgreSQL configuration improved performance in one system but hurt it in another—even for the same workload. The key insight: optimal PostgreSQL parameters depend heavily on your specific infrastructure characteristics. I'll present a checklist of important infrastructure differences—local vs network storage, IOPS limits, JIT availability, cloud vs on-premise—and demonstrate how these different environments require different optimal configurations for the same workload.
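A concrete illustration of the checklist in action (the numbers are plausible starting points, not recommendations): the same parameters point in opposite directions on local NVMe versus IOPS-capped network storage.

```sql
-- local NVMe: random reads cost nearly as much as sequential ones
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;

-- on IOPS-limited network storage the opposite may hold:
-- ALTER SYSTEM SET random_page_cost = 4;
-- ALTER SYSTEM SET effective_io_concurrency = 16;

SELECT pg_reload_conf();
```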
You'll leave understanding why generic tuning guides often fail and what infrastructure characteristics you need to consider when tuning YOUR specific PostgreSQL system.
- PostgreSQL Compatibility Index / Mayur
- Lutra language compiles to SQL / Aljaž Mur Eržen
- Public Role - Critical Flaw / Michal Bartak
- Boldly Migrate to PostgreSQL with credativ-pg-migrator / Josef Machytka
- Making Access Control Management Easy: pg_acm / Hettie Dombrovskaya
- Shortest path using roaring bitmaps / Ants Aasma
- How did I end up using Postgres? / Serge
- pg_dance: Normalized Choreography with CloudNativePG / Jonathan Battiato and Ellyne Phneah
- PostgreSQL tuning meets Kubernetes: What breaks and how we automated It / Mohsin Ejaz
- Guessing application owner desires: How to select important query groups without asking the app owner / Luigi Nardi
- Why pg_trgm Returned Zero Results: The ARM vs x86 Trap / Anton Borisov
Some transaction processing workloads suffer horrible lock contention because transactions block on updates to the same rows. There are now databases that advertise running such workloads a thousand times faster than PostgreSQL. In this talk we will discuss strategies for managing this contention in PostgreSQL while retaining application correctness: how network latencies, different isolation levels, optimistic and pessimistic concurrency control, deadlocks, and livelocks affect the capability to get work done. Working with the database allows us to take a large step closer to single-purpose database performance while staying in our familiar PostgreSQL land.
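One pessimistic-versus-optimistic trade-off can be sketched as follows (hypothetical accounts table): holding a row lock across a client round trip serializes every writer on that row, while a single-statement update holds it only for the statement's duration.

```sql
-- pessimistic: the row lock is held across the whole transaction,
-- including client think time and network round trips
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
COMMIT;

-- the same logical change as one atomic statement: the lock is
-- held only while the statement executes
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
```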
Picture this: you start a new role, eager to learn and contribute with your ideas! Your next task is to get familiar with the database setup, and then you start encountering these massive PostgreSQL databases — 100TB, 200TB, 300TB...
And you start questioning yourself: how do you back up (and restore) a 100+ TB database? And what about HA? Performance? Vacuum?
It should work the same way as for a 100GB database, right? Well, maybe not exactly.
Blog posts and best practice guides make PostgreSQL seem straightforward—until you push it to its limits. At extreme scale, you will find yourself questioning the most fundamental assumptions about how PostgreSQL works.
Over the last few years, my team at Adyen has been exploring the boundaries of what PostgreSQL can do, and today I will share our findings with you (at least the ones I can!).
PostgreSQL 18 introduced significant enhancements to constraints, your first line of defense for maintaining data integrity. This talk focuses on the new capabilities brought by version 18, including temporal PK/UK and FKs, NOT NULL constraints being promoted to first-class constraints, new NOT ENFORCED constraints, and improved support for partitioned tables. We’ll look at what’s new, why it matters, and how to apply these features in real-world systems.
We’ll begin with a brief refresher on the different types of constraints to help you get the most out of PostgreSQL’s declarative integrity model by looking at the pg_constraint catalog.
Then we’ll go into the details of what’s new in PostgreSQL 18 and what you should take away from these changes.
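Two of the PostgreSQL 18 additions, sketched on an invented table (temporal keys rely on the btree_gist extension for the scalar key part):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- temporal primary key: at most one booking per room per time range
CREATE TABLE room_booking (
    room_id  int,
    validity tstzrange,
    PRIMARY KEY (room_id, validity WITHOUT OVERLAPS)
);

-- NOT ENFORCED: recorded in the catalog but not checked on writes
ALTER TABLE room_booking
    ADD CONSTRAINT room_id_positive CHECK (room_id > 0) NOT ENFORCED;
```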
PostgreSQL, renowned for its extensibility and robust feature set, offers a wealth of server configuration parameters that can significantly influence its performance. Most PostgreSQL users operate with default parameters, which are often not optimized for commonly used server machine flavors. This talk presents a comprehensive benchmark analysis exploring the performance benefits achievable through server parameter tuning. We'll examine the impact of various parameter configurations on diverse workloads, ranging from OLTP to OLAP, highlight the performance gains, and show that these depend on several factors related to the computing environment and workload. The talk gives database administrators and developers actionable insights for maximizing PostgreSQL performance through effective server parameter tuning.
In the current era of cloud computing, the days of managing single-app systems with standalone databases are long gone. Instead, we find ourselves responsible for overseeing complex systems containing hundreds of services and dozens of databases. Adding to the challenge is the dynamic nature of these systems, where databases can migrate between nodes, and their storage performance can be adjusted on the fly.
In this presentation, we will explore strategies for maintaining observability of Postgres databases in complex cloud environments. Topics include leveraging eBPF for generating comprehensive service maps and instrumenting Postgres calls across various client applications to specific databases. Additionally, we explore automatic discovery mechanisms for topologies within High Availability (HA) Postgres clusters, facilitated by different Kubernetes Operators. Furthermore, the talk addresses the importance of monitoring underlying cloud resources, encompassing virtual machines (VMs), storage volumes, and network components.
Follow along as we dive deep into a real-world transaction wraparound incident, discuss recent Postgres innovations, and explore features like index deduplication and autovacuum enhancements designed to help eliminate these problems. We'll talk about how to optimize your Postgres environment and make a strong case that upgrading might just eliminate your XID wraparound risks.
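A first diagnostic for wraparound risk is watching database XID age (alerting thresholds are up to you):

```sql
-- how far each database's oldest unfrozen XID has aged;
-- autovacuum forces freezing near autovacuum_freeze_max_age
-- (200 million by default)
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```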
SQL queries often lack systematic testing - they're treated as "just glue code" that only gets validated in production. Meanwhile, PostgreSQL itself has used robust regression testing for decades to prevent disasters in core development. This talk introduces RegreSQL, a tool that brings the same regression testing methodology to application queries, catching both correctness bugs and performance regressions before deployment.
We'll explore how RegreSQL tests SQL queries systematically: verifying correctness across schema changes, tracking performance baselines, detecting common query plan issues (sequential scans, missing indexes), and managing reproducible test data. You'll see live demonstrations of catching real-world issues - from missing indexes that cause production slowdowns to ORM-generated queries that perform sequential scans on millions of rows.
Whether you write raw SQL or use ORMs, whether you're maintaining legacy systems or building greenfield applications, this talk will show you practical techniques for making your PostgreSQL queries testable, maintainable, and production-ready.
Fuzzing is a simple but powerful technique for discovering edge-case bugs in large, stateful systems like PostgreSQL.
This talk shows how to apply it to Postgres’ client library libpq and to PgBouncer, a commonly used connection pooler - both handle every network connection before the server ever sees a query.
We’ll walk through building minimal harnesses, generating and mutating protocol inputs, and reasoning about what makes fuzzing effective on complex C codebases.
The session is meant as a practical guide: how to start fuzzing a Postgres-related project, what challenges to expect, and what kind of issues you can realistically uncover along the way.
You may know that your database is slow, or you may be told that there are performance problems in the database. But how do you know where the performance problems are? Which queries are running the slowest? Why are they running slowly? This session will take you through the fundamental tools that are built right into PostgreSQL and can help you answer all of these questions. We'll start with queries against the Cumulative Statistics System, then begin an exploration of explain plans. Setup and guidance for how all these tools work will be provided along the way. You can finally know which queries are slow, and why.
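As a taste of both tools (the table and query are invented examples):

```sql
-- cumulative statistics: which tables are mostly read by sequential scans?
SELECT relname, seq_scan, idx_scan, n_live_tup
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;

-- then inspect a suspect query's actual plan and I/O
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
```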