scottcodie 5 hours ago

I've spent my entire career developing databases (Oracle, Cassandra, my own database startup). Knowing whether your workload is read or write heavy is one of the first questions when evaluating database choices, and it's critical for tuning options. I would give this article hate just because it feels partially written by AI and the title needs a possessive 'your' in it, but its core ideas are sound and frame the issue correctly.

  • EGreg 2 hours ago

    What about asking whether it should be a row-oriented database (SQLite) or a column-oriented database (DuckDB)...

    For a data lake or analytics, prefer columns? / read heavy

    What would a row-based DB be better for? OLTP / write heavy?

    • gigatexal 2 hours ago

      General rule of thumb is that OLAP databases (DuckDB, BigQuery, Redshift, etc.) are better at reads (think analytics) and OLTP ones (Postgres, MySQL, SQLite) are better for writes (think order systems, point of sale).

      Things get muddied when HTAP systems are bandied about, promising the best of both worlds.

      • ownagefool an hour ago

        Ordering System is a good example because you typically want both. Your base logic will probably exist in OLTP with joins and normalised data, and you'll generally have local on-device OLTP databases.

        Reporting on your ordering system is an OLAP problem though. Generally an OLAP database stores data on disk so that it only needs to read the selected columns, and performance is better with wide, denormalised tables, i.e. lots of duplicated data (JOINs are slow).

        So, like, you SELECT * FROM Customer, Order, Items, Device, Staff, stick it in your OLAP database, and that's where customers should generate reports (roughly as sketched below). This makes reporting more performant, and it also removes the load from the critical path of your POS devices syncing and working.

        This has the added benefit that updating your product name won't rewrite the historical log of what was done at the time, because what was done at the time was done at the time (but you can still join on something like productId if you think the data is relevant).
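
        A rough sketch of that kind of export (all table and column names here are made up, not from any real schema):

            -- Denormalised snapshot for reporting: one wide row per order line,
            -- with names and prices copied as they were at the time of sale.
            INSERT INTO report_order_lines
            SELECT o.order_id,
                   o.ordered_at,
                   c.name        AS customer_name,
                   p.name        AS product_name,  -- a snapshot, not a live lookup
                   i.quantity,
                   i.unit_price,
                   d.device_id,
                   s.staff_name
            FROM orders o
            JOIN customers   c ON c.customer_id = o.customer_id
            JOIN order_items i ON i.order_id    = o.order_id
            JOIN products    p ON p.product_id  = i.product_id
            JOIN devices     d ON d.device_id   = o.device_id
            JOIN staff       s ON s.staff_id    = o.staff_id;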

        At scale you want to pop the writes on a queue and design those devices to be as async as possible.

        This is what happens when you just build it pure OLTP.

        https://www.linkedin.com/pulse/nobody-expects-thundering-her...

        This was an ~£19m ARR POS company dying because of architecture, now doing £150m+ ARR. ( the GTV of the workloads are multiple times that, but I can't remember them ).

Normal_gaussian 9 hours ago

At the time of writing the query has a small error. The filter is checking for reads and writes, but it should be reads or writes.

    WHERE
        -- Filter to only show tables that have had some form of read or write activity
        (s.n_tup_ins + s.n_tup_upd + s.n_tup_del) > 0
        AND
        (si.heap_blks_read + si.idx_blks_read) > 0
    )
Should be OR
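
i.e., with the rest of the query unchanged, that filter would read something like:

    WHERE
        -- Filter to only show tables that have had some form of read or write activity
        (s.n_tup_ins + s.n_tup_upd + s.n_tup_del) > 0
        OR
        (si.heap_blks_read + si.idx_blks_read) > 0
    )
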
  • v3ss0n 44 minutes ago

    Yeah definitely AI

spprashant 5 hours ago

The thing we really strive for with Postgres is to keep the UPDATE traffic as low as possible. Because of MVCC, table bloat and the subsequent vacuum jobs will kill your IO even further. This means designing the applications and data model in a way that most write traffic is INSERT, with occasional UPDATEs which cannot be avoided. If you know you are going to have an UPDATE-heavy table, be sure to set the fillfactor on the table ahead of time to optimize for it.
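
For example (table name is hypothetical; the storage parameter is spelled fillfactor):

    -- Leave ~30% of each heap page free so updated row versions can stay on the
    -- same page (HOT updates), which reduces index churn and bloat.
    CREATE TABLE order_status (
        order_id   bigint PRIMARY KEY,
        status     text NOT NULL,
        updated_at timestamptz NOT NULL DEFAULT now()
    ) WITH (fillfactor = 70);

    -- Or on an existing table (only affects pages written after the change):
    ALTER TABLE order_status SET (fillfactor = 70);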

Also, in my experience the "Faster SSD Storage" point applies to both read- and write-heavy workloads.

  • perfmode 2 hours ago

    do upserts (on conflict do) count as update?

    • whatevaa 21 minutes ago

      If they update, yes.
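
      i.e. when the conflict branch fires it behaves like a normal UPDATE: a new row version is written and n_tup_upd goes up. Rough sketch (table made up):

          -- The DO UPDATE arm is a plain UPDATE under the hood: the old tuple
          -- becomes dead and a new version is written, MVCC as usual.
          INSERT INTO counters (id, hits)
          VALUES (1, 1)
          ON CONFLICT (id) DO UPDATE
          SET hits = counters.hits + EXCLUDED.hits;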

SteveLauC 6 hours ago

Regarding write-heavy workloads, especially for Postgres, I think we really need to distinguish between INSERTs and UPDATEs, because every update to a tuple in Postgres duplicates the whole tuple due to its MVCC implementation (if you use the default heap storage engine)
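
One quick way to see the split per table is the standard statistics view (a sketch, not from the article):

    -- Insert vs update counts per table, plus the dead tuples updates leave behind.
    SELECT relname,
           n_tup_ins,
           n_tup_upd,
           n_tup_hot_upd,  -- HOT updates skip index writes but still copy the tuple
           n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC
    LIMIT 20;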

  • brightball 5 hours ago

    One thing that catches people by surprise is that read heavy workloads can generate heavy writes.

    Queries that need to operate on more data than will fit in the allocated working memory will spill to temporary files on disk, in some cases perform an operation on that temporary data, like sorting the whole thing, and finally delete it when done, which is even more disk write stress.
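
    You can see this in the standard stats, e.g. (a sketch, not from the article):

        -- How much query work has spilled to disk as temporary files.
        SELECT datname, temp_files, temp_bytes
        FROM pg_stat_database
        WHERE datname = current_database();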

    It's not really about whether it's read heavy or write heavy; it's about whether its usage creates disk I/O stress.

    You can write millions of integer increments and, while technically that's "write heavy", there's no stress involved because you're just changing a value in a defined space that's already been allocated. Update something more dynamic, like growing a TEXT or JSON field frequently... it's a different story.

wirelesspotat 7 hours ago

Agree with other commenters that the title is a bit confusing and should be renamed to something like "Is your Postgres workload read heavy or write heavy?"

But title aside, I found this post very useful for better understanding PG reads and writes (under the hood) and how to actually measure your workload

Curious if the tuning actions are any different if you're using a non-vanilla storage engine like AWS Aurora, GCP AlloyDB, or Neon?

J_McQuade 9 hours ago

This, as a few other commenters have mentioned, is a terrible article.

For a start, the article does not mention any other database. I don't know how you can say something is read or write heavy without comparing it to something else. It doesn't even compare different queries on the same database. Like, they just wrote a query and it does a lot of reads - so what? There's nothing here. Am I going mad? Why does this article exist?

  • acscott 8 hours ago

    A little context may be of help. Maybe a better headline for the article would have been, "How Can You Determine if Your PostgreSQL Instance's Workload Is Read-Heavy or Write-Heavy?" It's useful to know, both to help optimize settings and hardware for your workload and to know whether an index might be useful or not. Most major DBMSs have some way to answer this question; the article is aimed at PostgreSQL only.

moomoo11 8 hours ago

This article's quality makes me not trust the company.

jagged-chisel 10 hours ago

> When someone asks about [database] tuning, I always say “it depends”.

Indeed. On your schema. On your usage. On your app. On your users.

  • mikepurvis 9 hours ago

    If it didn’t depend they’d just make the “tuned” value the default.

    • acscott 8 hours ago

      Exactly. The parameters you can configure exist because they haven't been automated, since what you want to optimize for might be different from what an automaton would choose.

rednafi 10 hours ago

Despite using CTEs, I found the first query quite impenetrable. Could be because I don’t spend that much time reading non-trivial SQL queries.

I’ve been mostly using the `pg_stat_statements` table (the second technique) to find out whether my workload is read or write heavy, it’s plenty good in most situations.

  • teej 9 hours ago

    pg_ system tables aren’t built for direct consumption. You typically have to massage them quite a bit to measure whatever statistic you need.

alberth 11 hours ago

Odd that OLTP wasn’t mentioned in the article.

Postgres is an OLTP database; OLTP databases are designed for write-heavy workloads.

That being said, I agree most people have read-heavy needs.

  • da_chicken 10 hours ago

    I disagree. I think the only people that have read-heavy needs are big data and data warehouses. AI being hot right now doesn't mean big data is the only game in town.

    Most applications are used operationally or have a mix of read and write. Even in applications where the user can only consume the content that's there, there is often more than enough data capture just from tracking page history to be relatively write heavy.

    • withinboredom 10 hours ago

      Hmmm. Not really. Yes, everything is a mix, but for applications, it very much is on the read-heavy side. Think about how many queries you have to do just to display an arbitrary page. You might, maybe, just maybe, net 2-3 writes vs. hundreds of reads. If that starts to balance out, or even flip, then you probably need to rethink your database as you start to exit traditional db usage patterns. But <30% writes is not write-heavy.

      • da_chicken 8 hours ago

        I am thinking about that. I don't think most data is read that often in an OLTP system.

        I think a very small amount of data is read very often. However, until your DB gets very large, that data is going to end up as data pages cached in memory. So that data is extremely cheap to read.

        I also think a significant amount of data that is generated in an OLTP system is written and never read, but you still had to pay the cost to write it. If you have an audit log, chances are you never need to look at it for any one piece of data. But you definitely had to write all the metadata for it.

        But I'm also assuming that writes are at least 10 times as expensive as reads. More so if what you're modifying has indexes, since indexes are often functionally identical to a partial copy of the entire table. Indeed, I think that 10 times mark is conservative. Most RDBMSs use transaction logging and some kind of locking on writes. There's data validation and integrity checks on inserts and updates (and deletes if you have foreign keys).

        I think 1 write to 10 reads is still write-heavy.

        • bigiain 5 hours ago

          > I think 1 write to 10 reads is still write-heavy.

          Pretty easy to tune the supplied SQL query to suit your opinion.

          Pretty sure you just need to tweak the 2nd line

          ratio_target AS (SELECT 5 AS ratio),

    • hinkley 8 hours ago

      I think read replicas disagree with that pretty strongly.

      The write traffic may be very write heavy, but then you have many, many users who need to see that data. The question is whether the database or a copy of the data from the database is what services that interest.

      If you mediate all reads through a cache, then you have split the source of truth from the system of record. And then the read traffic on the system of record is a much lower ratio.

gdulli 11 hours ago

Is a ball red or green? How long is a piece of string?

  • phalangion 9 hours ago

    Did you read the article? It’s about how to tell if your database is read or write heavy.

    • johncolanduoni 8 hours ago

      I think a large part of what people are responding to here is the title, which comes off as something someone who doesn't actually understand the nature of a database workload would write. It may be a simple typo, but "Is YOUR Postgres Read Heavy or Write Heavy?" is the question that can have an answer. "Is Postgres More Appropriate for Read Heavy or Write Heavy workloads?" would also be fine, but it would be a totally different article from the written one.

Cupprum 9 hours ago

Surprising number of downvoted comments under this article. I wonder why.

lysace 10 hours ago

Insipid text.

Also: HN needs to upgrade its bot upvoting detection tech. This is embarrassing. It was proper ownage of the HN #1 position for like 15 minutes straight. And then like #2-3 for an hour or so.

  • zug_zug 10 hours ago

    Off topic, but I do feel like there is a significant number of things that mysteriously get to frontpage with 12-40 upvotes, zero comments, and then sit there getting no more upvotes / comments for like 20 minutes.

    Personally, I agree that it's possible to detect this better, that it would actually drastically improve the quality of this site if that wasn't the meta, and that it's something that should be openly discussed (in terms of practical suggestions).

    • add-sub-mul-div 9 hours ago

      They don't care what gets ranked where other than their own recruitment and other submissions, for which this site exists.

      • Waterluvian 9 hours ago

        I don’t think this holds up to higher order thinking. On the surface, sure that makes sense.

        But then who is left to look at the recruitment ads if the quality of the content, comments, and community degrades enough that everyone stops coming?

        All I know is that pretty much nobody here knows enough about the whole picture to have a meaningfully informed opinion. So a lot of these opinions are going to be driven by their imagination of the whole picture.

    • lysace 9 hours ago

      It is so incredibly obvious when you see it, yes.

developper39 11 hours ago

Very useful, and it is clear that the author knows what he is talking about. Nice intro to PG 18 too.