Ask HN: Is PostgreSQL better than MySQL?
herodoturtle
2 years ago
I hope this doesn't incite a flamewar, but speaking honestly here, whenever I discuss this (PostgreSQL vs MySQL) with folks working in the data space, PostgreSQL is always touted as the superior RDBMS.

Now I know that MySQL is the most popular (because there's a banner on mysql.com that says so! /s), and I get the sense that on raw speed alone, it's probably faster (this is purely anecdotal), but folks that use PostgreSQL speak about it with an air of superiority.

So I'm hoping some of you HN users can shed light on this, based on your experience.

Is there a clear winner, or is it purely a case of "it depends"?

(and if it's the latter, is there a generally agreed comparison list of pros and cons that I can refer to?)

Thanks!

gtsteve2 years ago
While I use MySQL extensively, I don't know Postgres well, so I can't really compare the two. In my current project, I opted to use MySQL because Amazon Aurora didn't support Postgres at the time and I wanted replication performance and serverless autoscaling for a low-level read workload that could be randomly very spiky.

Today, both databases are well supported as a service on major cloud platforms. Postgres does have some performance characteristics that led Uber to drop it [0], but this in itself is not a reason not to choose it, because most teams don't have their kind of workload.

If I were to start a new project today however I would seriously consider Postgres given that Oracle is in my opinion fairly hostile. For example, they re-licensed the .NET driver from LGPL to GPL so that anyone updating would either have to buy a commercial license or open-source their entire product. While there are OSS .NET drivers, this move alone is enough to make me think twice about using it again for a future project.

[0] https://www.uber.com/en-RO/blog/postgres-to-mysql-migration/

thinkxgtsteve2 years ago
take a peek at https://github.com/orioledb/orioledb if the Uber blog is your frame of reference
znpy2 years ago
They’re fairly different, and have different pros and cons.

The main pro about postgresql is that it’s immensely versatile and can do an incredible amount of things.

The main pro about mysql is that there are some open source and battle-tested multi-master extensions for high availability.

mikece2 years ago
It's been a while since I've given serious thought to this question but I'm glad it's asked because I would like to re-learn and find out if those using it do so for the reasons I perceive PostgreSQL to be "superior" based on talk among peers. The basic "MySQL is faster but lighter on features" is what I recall hearing; also that PostgreSQL is "multi-paradigm" in that you can run document/NoSQL tables, ETL, and data warehousing all in one product. Then again, maybe MySQL supports JSON tables now as well and I just haven't noticed since my head has been in the AWS cloud and tied to Dynamo for the last three years.
ericbarrettmikece2 years ago
MySQL 8.0 has JSON column support: https://dev.mysql.com/doc/refman/8.0/en/json.html
evaneliasericbarrett2 years ago
MySQL 5.7 does too, it's a fairly mature feature at this point :)

Meanwhile MariaDB lacks a distinct JSON type, but instead provides a lot of functions for manipulating JSON stored in textual types.
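
For anyone curious what that looks like in practice, here's a minimal sketch against MySQL 5.7+ (table and column names are made up for illustration):

    CREATE TABLE events (
      id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      payload JSON NOT NULL
    );

    INSERT INTO events (payload) VALUES ('{"type": "click", "x": 10}');

    -- JSON_EXTRACT pulls a value out by path; JSON_UNQUOTE strips the quotes
    SELECT JSON_UNQUOTE(JSON_EXTRACT(payload, '$.type')) AS event_type FROM events;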

postdbmikece2 years ago
I started with MySQL in the late 90's, switched to Postgres, and would prefer that over MySQL if it were my decision. "MySQL is faster than Postgres" is a myth. Please see this: https://arctype.com/blog/performance-difference-between-postgresql-and-mysql/

One thing MySQL does better than Postgres is fast replication across different data centers. See this blog: https://www.uber.com/en-DK/blog/postgres-to-mysql-migration/

However, what Uber is doing may not be what you are doing. So do some research and hire qualified DBA or devops people to make an informed choice.

draugadrotten2 years ago
Euphorbiumdraugadrotten2 years ago
It is only good if you do few writes, with at least seconds in between.
warmwafflesEuphorbium2 years ago
Wrong. SQLite can handle concurrent writes using a WAL. You do not have to wait any amount of time between writes. Once the write is done, it is done.
daneel_wEuphorbium2 years ago
Did you evaluate SQLite on a floppy drive or something?
GlenchEuphorbium2 years ago
Incorrect. Here are some actual numbers: https://blog.wesleyac.com/posts/consider-sqlite

> Writing 512 byte blobs as separate transactions, in WAL mode with synchronous=normal3, temp_store=memory, and mmap enabled, I got 13.78μs per write, or ~72,568 writes per second.
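
For reference, those settings from the linked post boil down to a handful of PRAGMAs; a minimal SQLite sketch (the mmap size here is an arbitrary example value):

    PRAGMA journal_mode = WAL;      -- readers don't block the writer
    PRAGMA synchronous = NORMAL;    -- fsync less aggressively than FULL
    PRAGMA temp_store = MEMORY;     -- keep temp tables/indices in RAM
    PRAGMA mmap_size = 268435456;   -- memory-map up to 256 MB of the DB file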

EuphorbiumGlench2 years ago
Go on, trust the metrics. It locked up every time I had to do many writes without delay.
GlenchEuphorbium2 years ago
Not sure what your experience is but I'm running SQLite in my production web app and it's just fine :)
markuswEuphorbium2 years ago
Then you did something wrong. SQLite has phenomenal performance for many workloads: https://www.golang.dk/articles/benchmarking-sqlite-performance-in-go
Glenchdraugadrotten2 years ago
It's a little off-topic, but agreed! I use SQLite on production. Here are more resources talking about the trade-offs:

- https://blog.wesleyac.com/posts/consider-sqlite

- https://fly.io/blog/all-in-on-sqlite-litestream/

euroderfdraugadrotten2 years ago
SQLite if you want to skip all the complicated user access stuff. Both PostgreSQL and (especially) MySQL can mire you in nitpicky details. My 0.02€.
serpix2 years ago
DDL ends a transaction in MySQL. In my opinion, this is a showstopper and deal breaker. Good luck failing a migration in production.
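
To make the difference concrete, here's a rough sketch of what a Postgres migration gets you (table/index names are hypothetical): if any statement fails, the whole thing rolls back and the schema is untouched, whereas MySQL implicitly commits at each DDL statement.

    BEGIN;
    ALTER TABLE accounts ADD COLUMN deleted_at timestamptz;
    CREATE INDEX accounts_deleted_at_idx ON accounts (deleted_at);
    -- if either statement above fails, ROLLBACK leaves the schema exactly as it was
    COMMIT;
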
jonatronserpix2 years ago
Do people still use pt-online-schema-change for production schema changes? I remember using it a long time ago.
ericbarrettjonatron2 years ago
Gh-ost is the new hotness. Simple to use and lots of great features: https://github.com/github/gh-ost
brianwawokjonatron2 years ago
Django just does it, run those migrations all the way through dev -> test -> prod.

Look ma, no DBA required.

daneel_wjonatron2 years ago
Unfortunately yes, but as always only for the type of time-consuming DDL changes that incur a write block in a situation where you just can't have the production environment stall or wait that long. The past 5-10 years have seen several improvements allowing some types of schema changes to be done "online" without blocking writes.
tonyarklesserpix2 years ago
A little history, I used:

- MySQL from about 1999-2007

- Postgres from 2006-present

- Oracle at a job 4 years ago

One of the things that absolutely made my brain explode is that Oracle, too, does not support DDL transactions. I discovered this during a pretty funny code review.

“Why are you wrapping this DDL in a transaction? That’s pointless.”

“Wait… Oracle doesn’t support DDL transactions?”

“What do you mean ‘DDL transactions’? That’s… should you be working on DB code at all?”

“Wait… you mean if a migration in Oracle fails half way through it just… leaves things half broken? Let me show you what this would look like in Postgres”

“WHAT THATS AMAZING!!!”

golergkatonyarkles2 years ago
Your reviewer immediately suggested that you're just unqualified? Well, I shouldn't be so surprised, it's a company that chose to use Oracle after all...
tonyarklesgolergka2 years ago
Ahhhh, it was much more polite than that and it was a junior-ish developer. I was new to the team and had already made it clear that while I had some database experience I had zero Oracle experience. I don't fault them too much; when someone suggests something that is completely outside of my mental model of how things work, it takes me a bit of reminding that I need to dig a little deeper to figure out what's going on and not just start jumping to conclusions about competence.

It just happened to be a great opportunity for both of us to expand our mental model of how different systems work. If you imagine someone whose mental model is "DML commands can be transactional, DDL modifies the database structure directly" then it's pretty reasonable for them to go "Uh... this guy doesn't seem to know the difference between modifying data and modifying tables..."

DarthNebo2 years ago
Plugins, UTF-8... case closed. Where MySQL could be used, SQLite could be a better option.
remus2 years ago
Depends what you mean by better. They're both mature databases whose performance and characteristics will suit a lot of use cases, so one is not clearly better than the other.

Having said that, my preference is for postgresql these days. It feels more ergonomic for the (fairly boring, normal database kinda stuff) things I do.

VLM2 years ago
The philosophy of postgresql was to do the best possible job the rightest way. Not necessarily the easiest to use nor fastest.

The philosophy of mysql was to do the best it can as fast as it can for as many users as it can. Not necessarily the 'right' or 'best' way.

It's not like either philosophy is formally documented or mandatory, but it "generally" matches up pretty closely with real-world behavior.

The biggest problem you'll have evaluating them is that both products are incredibly old and have changed over the decades, so you'll find plenty of web search results DEMANDING that MySQL can't do transactions, for example, because that was written 25 years ago and transactions were pretty much a solved problem on MySQL decades ago. Similar issues exist for tooling, library support, and clustering; I wouldn't trust a many-years-old blog post for either product LOL.

idoubtitVLM2 years ago
I also believe these were the original philosophies, since MySQL emerged from practical issues and Postgresql has more academic origins. But nowadays, I'm not sure MySQL is generally faster. In my experience, the main differences are that many simple things are simpler with MySQL, but Postgres is much more rigorous and complete.

For instance, a major upgrade of a MySQL server is usually painless (Debian even upgrades automatically). My last upgrade from Pg12 to 14 was not that easy. Replication has been easier with MySQL, though I've heard Pg has recently matured on this side. The last time I used a Pg DB that was queried by a web app, PgPool was installed because Pg could not handle many concurrent connections, but I've also heard this may not be required with recent Pg.

Client side, in many cases, a "poor man's search" with LIKE is enough or even suitable. With MySQL, I can declare `description TEXT collate utf8mb4_general_ci`, and then `LIKE '%de toutes façons%'` will match `De toutes facons`. Obtaining the same result with Postgres requires much more work.
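
A rough sketch of what I mean (the schema is made up; the Postgres side leans on the contrib unaccent extension plus ILIKE, which is only one of several ways to approximate this):

    -- MySQL: accent- and case-insensitive matching comes from the collation
    CREATE TABLE notes (description TEXT COLLATE utf8mb4_general_ci);
    SELECT * FROM notes WHERE description LIKE '%de toutes façons%';

    -- Postgres: one common workaround
    CREATE EXTENSION IF NOT EXISTS unaccent;
    SELECT * FROM notes WHERE unaccent(description) ILIKE unaccent('%de toutes façons%');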

Now for the bad side of MySQL: it has so many footguns, like the various compatibility modes (silent truncation when inserting is still the default behavior, I think), or the transactions breaking when a DDL starts. Support of complex types (arrays, JSON, specialized types like ISBN...) is also years ahead in Postgresql, and so are many advanced features.

porkeridoubtit2 years ago
> silent truncation when inserting is still the default behavior, I think

Hasn't been the default since MySQL 5.6. MySQL 5.7 was released in 2015 and some distros may have altered the defaults, but I doubt it.

evaneliasidoubtit2 years ago
> silent truncation when inserting is still the default behavior, I think

MySQL 5.7, released in Oct 2015, changed the default to enable strict sql_mode. All prior versions have hit end-of-life for support years ago, so there is no modern version of MySQL with this silent truncation behavior.

That said, there's still a major problem, but it's not MySQL's fault; rather, it's Amazon and their nonstandard defaults. AWS's managed database offerings inexplicably change the default sql_mode to disable strict-mode, both for regular RDS as well as Aurora. This is a huge problem, and it also definitely perpetuates this "MySQL still silently truncates!" perception.
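
If you're on RDS/Aurora and want to check, a quick sketch (the mode list below is just an example; the durable fix is the instance's parameter group, since SET only covers the current scope and needs the right privileges):

    SELECT @@GLOBAL.sql_mode;  -- look for STRICT_TRANS_TABLES / STRICT_ALL_TABLES

    -- stopgap for the current session only
    SET SESSION sql_mode = 'STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';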

bluGillVLM2 years ago
25 years ago the people behind MySQL said that you didn't need transactions, quit asking for them. They didn't really understand any of ACID. Since then the project has gained the important features, but I still have a bad taste in my mouth from the leaders at the time.

25 years ago postgresql was clearly a slow DB, but they didn't claim you didn't need performance, just that you had to measure first to ensure you really did (nobody did that measurement).

Things have changed. However, my career ended up not needing much of a DB, so I've never had any need to see how much. I just like to watch the fights from the sidelines.

flashgordonVLM2 years ago
What I liked about postgres (even as long as 5-10 years ago) was that it tried to do things in an explainable and standards-compliant way, even if it mostly did not beat MySQL on latency. This never bothered me because my view had been to scale horizontally when needed, and a db that lent itself to better control plane development was more important than aggregate single-node performance. Just my 2c.
sbalamurugan2 years ago
PostGIS has no parallel in MySQL/MariaDB world.
jitl2 years ago
My impression from working on an app with a Postgres cluster of 128-ish instances is that Postgres is excellent when your largest table fits on 1 instance, and kinda lacking when you need >1 instance for your largest table.

MySQL has very mature cluster managers like Vitess / Planetscale. Citus for Postgres was bought by Microsoft and doesn’t seem as popular as Vitess.

KronisLV2 years ago
The ecosystem: seems like PostgreSQL is a bit more popular, especially with offerings like PostGIS, PostGraphile, PostgREST and the many other projects you see occasionally. That said, it's nice to see MySQL/MariaDB as mostly compatible alternatives to one another, so that you don't technically "keep all of your eggs in a single basket" and there's stuff like Percona out there.

The functionality: from what I've heard and seen, PostgreSQL is commonly touted as the solution with more correctness in regards to SQL, but it also has great support even for things like JSON, and PL/pgSQL is pretty nice to use, in addition to custom types and a whole lot of other useful stuff. That said, not every system needs that, and the MySQL/MariaDB approach of having just databases instead of databases and schemas is a bit simpler. On the other hand, transactional DDL in PostgreSQL is a big plus, as someone else mentioned.

The performance: it probably depends on what you're doing. Benchmarking both for your particular workload is probably the way to go, though historically using pgBouncer has been a good idea for PostgreSQL. In the current day and age, both should be decent choices, especially with partitioning and clustering.

The tooling: here I'll come out and say that I like MySQL Workbench more than pgAdmin, both because it feels more like a native piece of software, but also because it has really great ER diagram functionality that, when coupled with forwards/reverse engineering and schema sync, can allow you to plan and alter your schema in a visual way: https://dev.mysql.com/doc/workbench/en/wb-design-engineering.html (and it gives you SQL at the end of the day, so you can further alter things as necessary; really nice both for exploring the schema, so you don't need something like DBVis as much, and for quick edits). That said, pgAdmin has built-in support for displaying geospatial data (e.g. PostGIS), which is pretty great since you don't need something like QGIS as much.

The developer experience: both are good. Both have containers that are easy to run, both are easy to connect to and work with (though MySQL drivers sometimes aren't shipped with other software out of the box), both are widely supported by a variety of tools (DataGrip is one of the better ones out there as well), and they also have managed cloud offerings from many vendors, if needed. Both will scale up and down pretty decently, though PostgreSQL seems to use a bit less memory in containers and has faster startup when I last checked, though that's a minor detail. Either will be easier than trying to find a trustworthy Oracle XE container image, or to build your own with some DB setup and initialization scripts that actually work and don't throw errors about the DB being closed when the container restarts.

At the end of the day, it's hard to go wrong with either of them, much like even SQLite can be a good choice for some workloads. Right now I'm building a personal project with MariaDB, but Keycloak more or less demands PostgreSQL, so running that as well. Though in regards to MariaDB one could probably mention the whole SPAC situation that went down a bit ago.

lowercasedKronisLV2 years ago
I appreciate having basic GIS/spatial stuff built in to MySQL8, vs having to do extension stuff in pg for postgis.
voganmother422 years ago
It depends: the projects make different trade offs and support different points of extension.

I find it useful to remember that MySQL has various storage engines (which make different trade-offs, and may themselves have different options such as various row formats). When using the default, InnoDB, knowing that it uses index-organized tables (meaning it stores the data in a B-tree as opposed to a heap) can influence which workloads perform well (is the primary key used heavily / load-bearing?).

Postgres often excels when queries are more complex; for instance, being able to use partial indexes (which cover only a subset of a table via a "where" clause in the index) can be very powerful.
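
A minimal sketch of a partial index (table and column names are hypothetical):

    -- index only the rows most queries actually touch
    CREATE INDEX orders_pending_idx ON orders (customer_id) WHERE status = 'pending';

    -- this query can use the small partial index instead of a full-table one
    SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';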

One other point of differentiation that has been useful for me to consider for various projects is the programming within the database (stored programs): specifically, doing things like writing stored procedures/functions. Postgres has many high-quality procedural languages (some are shipped in core, like PL/Perl, while others, like PL/R, are external) that can be useful to express these with.

voganmother42voganmother422 years ago
As far as making/performing modifications to the schema/data, as others have stated, transactional ddl is a huge feature for Postgres and makes migrations much more straightforward in my experience.
barrkelvoganmother422 years ago
Any DDL worthy of the name (table rewrite needed) in MySQL needs to be done with pt-online-schema-change or gh-ost. With DDL, you want to e.g. have it tune the I/O load of the migration so it doesn't interfere too much with production load. In practice, pt-osc / gh-ost means you have a kind of transaction, because the rewritten table doesn't take the place of the old one until the rename at the end. That's essentially an atomic commit.
anonzzziesvoganmother422 years ago
I think this is a good answer. Our case is weird: millions of tables with billions (not sure how many in total even) of rows and no joins. MySQL handles this far better in our testing.
Thiezanonzzzies2 years ago
May I ask what kind of data you are storing that you need millions of tables? If you have such varied data, what are the tradeoffs compared to only a few tables with json columns?
thinkxanonzzzies2 years ago
we’d love to test this scenario with the new OrioleDB storage engine for Postgres.
rich_sasha2 years ago
How would you choose one vs the other?

Most answers allude to either: they are kinda the same and full-featured, or they have different philosophies. Neither really hints at how you pick one.

ZiiS2 years ago
PostgreSQL has a lot of power. Need your table trigger to make an HTTP request, need a stored procedure written in Rust/Java/Javascript/Perl/Python (to name a few _common_ options ;-), want to index JSON fields, or GIS, or NLP vectors? Need everything to be strict and correct: ACID, referential integrity, row-level security. Need point-in-time recovery with a <30s RPO.

The fact it overlaps with MySQL if your use case is "backend for a common ORM" barely makes them comparable.

santa_boyZiiS2 years ago
Row level security is the main reason I choose Postgres. Is there any way to get RLS implemented in MySQL?
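
For context, a rough sketch of the Postgres RLS I lean on (table, policy, and setting names are made up); the closest MySQL approximation I'm aware of is filtering through views on session variables, which isn't enforced at the table level the same way:

    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

    -- each session sets app.current_tenant; rows outside it become invisible
    CREATE POLICY tenant_isolation ON documents
      USING (tenant_id = current_setting('app.current_tenant')::bigint);
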
forinti2 years ago
I first used MySQL when it didn't even check column constraints, so that left a bad impression. It has improved, but I'd much rather use Postgresql, and a few features which I really like are:

- Streaming Replication (keep a secondary instance for contingency or moving heavy queries away from the main database);

- Procedures in various languages;

- Foreign Data Wrappers for integration with other databases (much sturdier than Oracle's Heterogeneous Services, and there's even an FDW for Elasticsearch);

- Backup/restore is quite easy, as is upgrading.

The only thing I miss are packages like Oracle's.

So, unless I have no other option, I'll choose Postgresql.

daneel_wforinti2 years ago
> "- Streaming Replication (keep a secondary instance for contingency or moving heavy queries away from the main database);"

MariaDB/MySQL's replication is also streamed. It's a continuous stream of replication data over an unbroken connection, for both statement-based and row-based methodologies. The only difference is that Postgres has long been able to break transactions up into smaller pieces and apply them piecemeal, which is something that didn't show up in the MariaDB/MySQL camp until a few years ago, when it was introduced in Galera.

ivolimmen2 years ago
As a software engineer who has worked with a lot of different databases over the last 20 years, I have come to believe that PostgreSQL is even better than Oracle (this will certainly spark some fires). I have had shit with a lot of databases but never with PostgreSQL. MySQL has some really nasty documented and undocumented "features", so I'd rather avoid that database. It has its place in the universe, but it's on a different plane of existence than mine.
tootieivolimmen2 years ago
When I started in the biz around 2000, any serious tech business was running Oracle, or maybe SQL Server if they were on the MS stack. Oracle was miles ahead of everyone else. In the last ten years I don't think I've seen a single org that still runs Oracle. It's legacy at this point. SQL Server has surprisingly hung on, but everyone else is using something open source or something cloud.
KronisLVtootie2 years ago
> In the last ten years I don't think I've seen a single org that still runs Oracle.

Over here in Latvia, I've seen orgs that run Oracle and are still starting new projects in it, because that's what they know best and presumably have the licensing all figured out. Admittedly, their automatic indexing and a few other features in the recent releases are useful.

However, in practice it's easier to work with PostgreSQL and MySQL/MariaDB, because of how easy it is to launch either, be it locally or on a server, thanks to the availability of container images. Even if you can launch Oracle XE or something like it in containers, provided that you get your hands on such containers, there will still be challenges in regards to startup time, resource usage and various other quirks.

But hey, for people that don't need to use that approach, I guess nobody ever got fired for picking IBM, Microsoft or even Oracle. Personally I'd just go for PostgreSQL/MySQL/MariaDB in most cases because of the ease of use.

juancnivolimmen2 years ago
I agree, since version 9-ish PostgreSQL is likely a better default choice of database than any other competitor (commercial or otherwise), unless you have some weird workload.
alentredivolimmen2 years ago
Some 5 years ago I worked with a DBA, Oracle DBA more precisely, who told me that PostgreSQL is generally better than Oracle. At the same time, he wouldn't switch to PostgreSQL because... drumroll... he wouldn't have a job as a PostgreSQL DBA, because it is just simpler. Oracle DB, according to him, was more complex and harder to manage, and Oracle built entire offering around it: training, certifications, etc., which sustains the ecosystem.
dhd4152 years ago
I've operated MySQL in production at different scales ranging up to _very_ large and as such, I place a lot of emphasis on how well that can be done. I posted some of this a couple years ago, but I would describe the production operation of MySQL as a minefield. I maintained a list of the mines I stepped on and here are the two I encountered with the biggest blast radius:

* If you have pre-5.6.4 date/time columns in your table and perform any kind of ALTER TABLE statement on that table (even one that doesn't involve the date/time columns), it will TAKE A TABLE LOCK AND REWRITE YOUR ENTIRE TABLE to upgrade to the new date/time types. In other words, your carefully-crafted online DDL statement will become fully offline and blocking for the entirety of the operation. To add insult to injury, the full table upgrade was UNAVOIDABLE until 5.6.24 when an option (still defaulted to off!) was added to decline the automatic upgrade of date/time columns. If you couldn't upgrade to 5.6.24, you had two choices with any table with pre-5.6.4 types: make no DDL changes of any kind to it or accept downtime while the full table rewrite was performed. To be as fair as possible, this is documented in the MySQL docs, but it is mind-blowing to me that any database team would release this behavior into production. In other words, in what world is the upgrade of date/time types to add a bit more fractional precision so important that all online DDL operations on that table will be silently, automatically, and unavoidably converted to offline operations in order to perform the upgrade? To me, this is indicative of the same mindset that released MySQL for so many years with the unsafe and silent downgrading of data as the default mode of operation.

* Dropping a table takes a global lock that prevents the execution of ANY QUERY until the underlying files for the table are removed from the filesystem. Under many circumstances, this would go unnoticed, but I experienced a 7-minute production outage when I dropped a 700GB table that was no longer used. Apparently, this delay is due to the time it takes the underlying Linux filesystems to delete the large table file. This was an RDS instance, so I had no visibility into the filesystem used and it was probably exacerbated by the EBS backing for the RDS instance, but still, what database takes out a GLOBAL LOCK TO DROP A TABLE? After the incident, I googled for and found this description of the problem (https://www.percona.com/blog/2009/06/16/slow-drop-table/) which isn't well-documented. It's almost as if you have to anticipate every possible way in which MySQL could screw you and then google for it if you want to avoid production downtime.

There are others, too, but to this day, those two still raise my blood pressure when I think about them. In addition to MySQL, I've pushed SQL Server and PostgreSQL pretty hard in production environments and never encountered gotchas like that. Unlike MySQL, those teams appear to understand the priorities of people who run production databases and they make it very clear when there are big and potentially disrupting changes and they don't make those changes obligatory, automatic, and silent as MySQL did. IMO, MySQL has its place for very specific workloads, but if you don't have deep enough knowledge of DB engines and your workload to know that yours is one of those specific workloads, you should default to PostgreSQL.

barrkel2 years ago
If you're going to scale up to 10s of millions of inserts a day and 100s of billions of rows, I recommend MySQL as a step on a journey towards Vitess. Otherwise, I recommend Postgres.

For anything other than a data-oriented startup, I'd pick Postgres all day long.

The troubles each database gives you do have different flavours. Postgres's query planning is mercurial and sometimes quixotic. It can be tedious forcing it to do the right thing, and it does sometimes switch strategy resulting in queries taking 100x or more longer than they used to, just because some stats changed. MySQL's query planner is more stupid, more predictable, and less likely to give you this kind of pain. OTOH you have to be more careful writing queries and you sometimes end up adding hints or encoding a query structure which locks in a query plan which might not be the best one long term. But it usually degrades gradually rather than suddenly. I'm not sure which is worse, though, because gradual degradation doesn't demand fixing.
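
As a concrete example of the kind of hint-pinning I mean, a sketch (table and index names are hypothetical; MySQL also has STRAIGHT_JOIN and, in newer versions, optimizer hints in comments):

    -- pin the access path so the planner can't wander off as stats drift
    SELECT o.id, o.total
    FROM orders o FORCE INDEX (orders_created_at_idx)
    WHERE o.created_at >= '2023-01-01';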

hellcow2 years ago
Having managed both in production at decent scale for years, I would not choose MySQL for any projects going forward. Postgres has been more dependable in its performance, more powerful in its features. There’s no “pro” to choosing MySQL that I’m aware of in 2023.
azurelakehellcow2 years ago
MySQL has DDL replication.
mekoka2 years ago
Started with MySQL in the mid 2000s. It was ok for what I needed. Then switched to Postgres in the early 2010s. Found it richer in tooling, extensions, and options. I don't know how well MySQL supports them today, but here are just a few features that I've found compelling in postgres while using it in various projects: JSON, search, schemas, PLs, roles. The list of what postgres can do seems endless. I was once hired to contribute to a Python/postgres project. After I accepted, it was revealed to me that the Python part was inside postgres. An architectural decision that hinted at something about the mind that made it, but technically, it worked flawlessly.
letmeinhere2 years ago
I've used both for similar amounts of time at this point. I know a _lot_ about the implementation details of MySQL, and a helluva lot less about PostgreSQL, and that's because I hit _way_ more landmines with MySQL. It's just so easy to use it the wrong way and get saddled with ugly problems for months or years.

To be fair, my PostgreSQL experience is more recent, and both products have matured a bunch, but I've also seen plenty of the bad old MySQL problems—like MyISAM tables and 3-byte unicode—in just the last couple of years.

pjdkochletmeinhere2 years ago
/me cries in range locks and mutexes
davidthewatson2 years ago
Echoing VLM:

No, not really. All else being equal:

I'd say that today, it's much muddier than it was maybe ten years ago, before the data science explosion and cloud explosion.

That is, turn-key cloud Postgres is not Y2K Postgres, and MySQL post-Maria is not Y2K MySQL. In the former, administration of Postgres has gotten much easier since Heroku got involved and Andy Pavlo started talking about self-driving databases. In the latter, MySQL can do text search well enough that I sat pair programming this with a young engineer around 2016-2017 when I'd have argued prior that you were wasting your time.

In the end, it probably comes down to how you rank your requirements:

usability, performance, suitability for a particular purpose.

Things like postgis and pgrouting may have little equal, so like others, those can be huge differentiators if you need them and see the advantage in db backend computation without invoking the N+1 query problem in aggregate.

I'm biased since I used MySQL before the inefficient backend was deprecated, and I used PostgreSQL to do all manner of experimental EAV implementations using jsonb, including various forms of tuples from Rich Hickey.

All hail Stonebraker.

tapoxi2 years ago
I think the case of "it depends" mattered more in the days of shared webhosts and basic PHP sites, when the relatively lightweight MySQL made more sense. These days, not only is PostgreSQL better designed and more featureful (our key use case was RLS), but its governance is better. You're not tied to the monster called Oracle, and Postgres has a very healthy development community that isn't overwhelmingly dominated by any one entity.

I recommend https://pgloader.io if you're currently on MySQL and want to switch your existing app.

taylodl2 years ago
Here's my experience with PostgreSQL:

Coming from Oracle, I've been able to use PostgreSQL pretty much as though it were Oracle, including Oracle's optimistic locking strategy. Some of the SQL functions are a little different, but it seems the underlying philosophy is the same. In other words, PostgreSQL seems like Oracle, just a little different.

MySQL has always felt completely foreign to me. Depending on your background and familiarity with other RDBMS's though, you may be right at home with it. Wouldn't be a bit surprised if someone had a similar anecdote to mine replacing PostgreSQL with MySQL and Oracle with SQL Server.

If you don't have a personal preference for either system then do some benchmarking on the platform you'll be running them on and the loads you're expecting and see if either particularly excels over the other for what you're intending to do.

sam_lowry_taylodl2 years ago
Hah, I always felt the same about Postgres and I still prefer the MySQL cli.

The concept of schema is also weird for MySQL users, although pervasive in some kind of Michael Stonebraker worldview.

autrasam_lowry_2 years ago
I can understand it being weird, but when you get used to it, it's really really handy, for instance to manage user rights. I've read somewhere that pg schemas actually compare more to MySQL databases, with PostgreSQL databases being an additional level wrapping them.
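
To illustrate that mapping, a minimal Postgres sketch (schema, table, and role names are invented): one database, several schemas, rights granted per schema.

    CREATE SCHEMA billing;
    CREATE TABLE billing.invoices (id bigint PRIMARY KEY, total numeric);

    -- grant a role access to just this schema, roughly like a per-"database" grant in MySQL
    GRANT USAGE ON SCHEMA billing TO reporting_role;
    GRANT SELECT ON ALL TABLES IN SCHEMA billing TO reporting_role;

    SET search_path TO billing, public;  -- unqualified names now resolve to billing first
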
sam_lowry_autra2 years ago
Exactly, every sysadmin wants to manage user rights in his own ingenious way through schemas.
ksec2 years ago
It takes an account from 2020 with little karma to dare ask the MySQL question on HN. So I am taking this opportunity again: does anyone know anything about the next major version of MySQL?

If anything Postgres has a much clearer development model. Lots of features in the pipeline.

exabrial2 years ago
We benchmarked both. There are some edge cases where postgres is faster, but MySQL used significantly less disk space in those scenarios. Postgres has some esoteric functions not available in MySql, but enh. You probably shouldn’t use those in your app to avoid sharp edges that could be deprecated in the future. Stick to the SQL standard.

All other things being equal they actually perform about the same for our use case.

Then let's talk replication… MySQL GTID/binlog replication is stupid easy to set up. And _it just works_ and recovers really easily. You can make hundreds of near-real-time replicas.
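
For the curious, a rough sketch of pointing a replica at a primary with GTID auto-positioning (host/user/password are placeholders, gtid_mode is assumed ON on both sides, and the replica is assumed to start from a consistent snapshot; newer MySQL spells these CHANGE REPLICATION SOURCE TO / START REPLICA):

    -- run on the replica
    CHANGE MASTER TO
      MASTER_HOST = 'primary.internal',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'replica-password',
      MASTER_AUTO_POSITION = 1;   -- let GTIDs figure out where to resume

    START SLAVE;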

Finally, the tooling. I find MySQL's tooling far easier to use than Postgres's. Nearly everything is SQL. You have to memorize configuration in both cases, so take this last point as an opinion.

Both are a good choice. I'd say that if you need absolute single-server performance, Postgres is probably the better option, but understand you're only getting marginal gains. If you want to make hundreds of read-only replicas or use their semi-sync HA plugin, MySQL is probably the way to go, but understand that multi-master writes are still not really a thing.

frodowtf2 years ago
The most important difference is probably clustering. We had to do a lot of PK changes as our MySQL database received more rows.

Our access patterns are mostly analytical. A lot of scanning and aggregating has to be done. Nobody cared about clustering before so we had to change a lot of table schemas to make our IO efficient.

I've only heard from colleagues that such issues are easier to handle in Postgres because it is more flexible. But for MySQL, you have to understand that the primary key and the clustering key are always the same thing.

(of course this depends on the storage engine, but it's the behavior of the standard engine innodb)

juancn2 years ago
If you need to ask, probably yes.

PostgreSQL is a great database, it's hard to go wrong with it.

I mean, unless there's something you know for sure that PostgreSQL won't do for you, it's a great default choice (even over commercial options such as Oracle or SQLServer).

KevinMS2 years ago
MySQL is simpler to use if you just need CRUD. I've used it for 20+ years in prod with zero issues, except having to turn on unicode at some point along the way.
gremlinsinc2 years ago
Why not just go with MongoDB. MongoDB is webscale.

(Meanwhile a senior dev in iowa is picking up cow shit, and not regretting his career change).

obarthelemy2 years ago
Any "is better than" statement is frought and pointless. You've got to specify better "at what". My 1st car was a Citroën 2CV, with a 9HP engine, zero noise isolation, barely able to keep moving forward on inclines over 10%, top speed 70 km/h, not quite rainproof... was it a better car than a Ferrari ? Yes ! because I was able to afford it and keep it maintained, and because it could go on barely-there muddy dirt roads that would have stopped and ruined a Ferrari.
4Ut9MYdT2 years ago
Here are two things to consider about people leaving college:

- These days people are educated with standards in mind. PostgreSQL is the tool of choice in colleges teaching those standards.

- People are joining the industry as developers; they are more concerned about the dev tools than they are about the backend. PostgreSQL can keep up with those screwy ORMs no matter how obscure the SQL statements turn out to be. All these people want is to create their project and then move on to the next job. They don't want to deal with the database.
pg_12342 years ago
Yes ... but,

if you are already using MySQL, Postgres isn't sufficiently better that it justifies the cost of migrating.

nobozo2 years ago
I was part of the Postgres Research Group at UC Berkeley from 1991 to ~1995, working directly for Mike Stonebraker. To be honest, I didn't do any of the research work behind it, although I did port Postgres to Windows NT during this time.

Postgres back then was completely different from what it is now. It was mainly something for PhD and Master's students to hack on for their research. It was a mess internally and was hardly usable for production. I don't think MySQL went through this style of development.

What eventually happened is that around 1995 SQL was added to Postgres and a bunch of non-Berkeley people started hacking on it. They did a fantastic job, and deserve all the credit for making it what it is today (Stonebraker has publicly said this).

aprdm2 years ago
Better... For what ?
profwalkstr2 years ago
MySQL has a lot of gotchas, WTF moments, things that don't make sense and seem overly and needlessly complicated compared to PostgreSQL.

- Take data types, for example. Seems something simple, right? You can grok all data types in PostgreSQL in an afternoon. Want to grok MySQL data types? You're going to waste an entire week to grok them. There are multiple data types that seem to do the same thing with very small differences and a lot of incompatibilities... and a lot of exceptional behavior (see point below). MySQL data types are LESS powerful, less flexible and 10x more complicated than PostgreSQL data types.

- Enums and sets need to have their possible values defined on a table-by-table basis. What a nightmare to keep them in sync. PostgreSQL allows you to do this once for the whole DB (see the sketch at the end of this comment).

- Open the MySQL manual and it's full of passages like: feature A does "blah" if X mode is enabled, but does "bar" if mode Y is enabled, and will corrupt data if a given storage engine is used with mode Z enabled.

- SQL strict mode can be disabled by the client. You cannot enforce that your server will work only with SQL strict mode on because a client can simply open a connection with SQL strict mode off. There's no way to enforce that clients will only use SQL strict mode. You have to trust your clients. (This other comment explains strict mode: https://www.reddit.com/r/PostgreSQL/comments/xblooo/comment/io3iy2c/?utm_source=reddit&utm_medium=web2x&context=3)

- According to the manual, if client A does stuff with strict mode on and client B does stuff with strict mode off, data corruption will occur.

- A lot of surprising and unexpected behavior, even with SQL Strict mode on

- String comparison is case insensitive. There are workarounds but they are still workarounds.

- A lot of behavior is dependent on server configuration. So switching between servers you can't expect them to behave the same.

- Behavior not standardized among versions and installs

- Changing the server SQL mode after creating and inserting data into partitioned tables can cause major changes in the behavior of such tables, and could lead to loss or corruption of data

- Useless error messages (see https://www.reddit.com/r/PostgreSQL/comments/xblooo/comment/io3pha2/?utm_source=reddit&utm_medium=web2x&context=3)

- MySQL, at the same time it is less powerful and flexible, has a lot more levers and knobs than PostgreSQL. PostgreSQL, while more powerful and flexible, conceptually is a lot simpler than MySQL.

- PostgreSQL stays out of your way; with MySQL you’re constantly fighting it

PostgreSQL -> simplicity

MySQL -> complexity
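
Here's the enum sketch referenced above (type/table names are made up):

    -- PostgreSQL: define the value list once, reuse it everywhere
    CREATE TYPE order_status AS ENUM ('pending', 'shipped', 'cancelled');
    CREATE TABLE orders  (id bigint PRIMARY KEY, status order_status);
    CREATE TABLE refunds (id bigint PRIMARY KEY, status order_status);

    -- MySQL: the list is repeated per table and has to be kept in sync by hand
    CREATE TABLE orders  (id BIGINT PRIMARY KEY, status ENUM('pending','shipped','cancelled'));
    CREATE TABLE refunds (id BIGINT PRIMARY KEY, status ENUM('pending','shipped','cancelled'));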

deverman2 years ago
I am managing a project where we chose Postgres in Azure to replace our on premises Oracle.