The ANSI SQL standard was finalized in 1986. Since then there have been several ISO standard updates, and AFAIK MySQL has lagged significantly.
I am not sure I can agree with this one either, but at least Postgres hasn't normally needed row-level locking for a long time now (10 years?) because of MVCC.
The implementation makes a large difference in the quality of the software and its use.. objectively you could compare a Ferrari and a Honda this way and they would look pretty similar.
PostgreSQL isn't as feature-complete as Oracle.. but comparing it to MySQL now is almost a moot point.
If it's not going to kill your company, or you're making something new from scratch, you're really doing yourself a disservice by not using PostgreSQL.
There are more than a few reasons for this, but a few that come to mind are:
* Sane connector libraries (especially in C++, where MySQL's is inconsistent, lock-prone, and buggy)
* SQL standards compliance, without many pgsql-isms (there are some, but they live in obvious namespaces)
* MySQL has a habit of being massively inconsistent or not behaving the way you expect (input validation is very lacking, and things like changing column widths can irrevocably alter data)[0]
I mean, personally I'd rather not have a relational database which eats my data.
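To make the data-eating point concrete, the classic example is column widths. A hypothetical table; the exact outcome depends on MySQL's sql_mode:

    -- In MySQL's traditional non-strict mode this silently truncates the value
    -- to 'abc' and only raises a warning; strict mode (the default in recent
    -- versions) rejects it, as Postgres always has ("value too long for type
    -- character varying(3)").
    CREATE TABLE t (s varchar(3));
    INSERT INTO t VALUES ('abcdef');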
If it were about schema changes, I don't think there's any way to talk the result into a tie; that's really one of the things MySQL is absolutely terrible at.
It is rare that you are linking the database code with your code. Therefore, the difference between BSD and GPL code will rarely, if ever, matter for a database user.
If you think you'll have to modify the code of the database, the license would matter.
The license issue is FUD.
The main motivation was that MySQL doesn't support a native UUID datatype (!!!) but, having switched, it seems that Postgres is much better thought out and designed. For example, it supports a wider range of types, and I find it less confusing to choose the right type compared to MySQL. It appears to support Unicode better. Its system of privileges and roles seems more logical. Lots of small things like this add up.
I'd definitely recommend checking it out.
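As a concrete example of the UUID point, uuid is a first-class type in Postgres. The table below is just illustrative; gen_random_uuid() comes from the pgcrypto extension (built in from Postgres 13 on), and older setups often use uuid_generate_v4() from uuid-ossp instead:

    -- Native uuid column with a server-side default.
    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    CREATE TABLE users (
        id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        name text NOT NULL
    );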
It supports unsigned ints, which Postgres doesn't, and it has BIT_COUNT built in. I am aware these are probably rarely used, but I wanted them for what I was trying to do (comparing large combinations of DNA sequences). I tried implementing it in Python/NumPy, Java, and PG, but MySQL was still the fastest. (I am fairly sure that if I spent more time I could improve any of the solutions, but for a similar level of effort, MySQL (plus Python) won.)
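For context, the MySQL side of that comparison looks roughly like this (the table and column names are made up; the point is that XOR and the built-in BIT_COUNT() do the heavy lifting natively):

    -- Hamming distance between two 64-bit fingerprints stored as BIGINT UNSIGNED:
    -- XOR the values, then count the set bits.
    SELECT BIT_COUNT(a.fingerprint ^ b.fingerprint) AS hamming_distance
    FROM sequences AS a
    JOIN sequences AS b ON a.id < b.id;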
Also, even with CHECK constraints being available, the question of unsigned ints comes up frequently enough that there's at least one extension implementing unsigned types: https://github.com/petere/pguint (no idea about the quality or production-readiness).
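For reference, the stock-Postgres workaround (without pguint) is a CHECK constraint or a domain; it enforces the unsigned range but doesn't save storage or give you unsigned arithmetic:

    -- A domain that validates like MySQL's INT UNSIGNED, stored as a signed
    -- bigint underneath (8 bytes instead of 4).
    CREATE DOMAIN uint4 AS bigint
        CHECK (VALUE >= 0 AND VALUE <= 4294967295);

    CREATE TABLE counters (
        id   serial PRIMARY KEY,
        hits uint4 NOT NULL DEFAULT 0
    );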
Bitcount / popcount / hamming distance as a CPU operation will have saved a lot of CPU cycles over the PG method I tried.
You sound like you know Postgres fairly well. How feasible would it be to write a CPU-based popcount extension?
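For what it's worth, a pure-SQL popcount is possible (and is probably close to whatever the "PG method" above was); a C extension would mostly just swap the function body for a call to the CPU's popcount instruction. A sketch with made-up function names:

    -- SQL-only fallback: cast the bigint to a 64-bit string, drop the zeros,
    -- count what's left. Far slower than a hardware POPCNT, but works anywhere.
    CREATE OR REPLACE FUNCTION popcount(x bigint) RETURNS integer AS $$
      SELECT length(replace(x::bit(64)::text, '0', ''));
    $$ LANGUAGE sql IMMUTABLE STRICT;

    -- Hamming distance between two 64-bit values: XOR (# in Postgres), then popcount.
    CREATE OR REPLACE FUNCTION hamming(a bigint, b bigint) RETURNS integer AS $$
      SELECT popcount(a # b);
    $$ LANGUAGE sql IMMUTABLE STRICT;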
I consider PostgreSQL the better database most days, but license FUD is counter-productive.
Is there anything of similar quality for Postgres nowadays? I used pgAdmin years ago, but it was quite crude in comparison even to Query Analyzer in SQL Server 2000.
Another important thing for me is table partitioning. Both databases support it, but I'd love a comparison. MySQL seemingly covered a lot of ground in this regard.
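On the Postgres side, a lot depends on version: declarative partitioning only arrived in Postgres 10, and before that it was table inheritance plus triggers. A minimal range-partitioning sketch with made-up table names:

    -- Parent table partitioned by range on the timestamp column.
    CREATE TABLE events (
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    -- One child partition per year; rows are routed automatically on INSERT.
    CREATE TABLE events_2015 PARTITION OF events
        FOR VALUES FROM ('2015-01-01') TO ('2016-01-01');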
http://peter.eisentraut.org/blog/2015/03/03/the-history-of-replication-in-postgresql/
I thankfully no longer have to support Postgres in production, but I can still vividly recall that terrible sense of dread I'd get in my stomach whenever I'd receive a replication alert for a Postgres database.
Or were you referring to some other issue with failed replication? Something that MySQL handles better/differently?