Community responses:
2016: https://news.ycombinator.com/item?id=12166585
2018: https://news.ycombinator.com/item?id=17280239
I was the author of all of the above, including the Postgres 14 work you mentioned (though Anastasia Lubennikova was the primary author of index deduplication). To me it feels like one very large project -- the effects are cumulative, and each major Postgres version had B-Tree work that built on the last release in one way or another.
Why Uber Engineering Switched from Postgres to MySQL (2016) - https://news.ycombinator.com/item?id=17280239 - June 2018 (47 comments)
Re: Why Uber Engineering Switched from Postgres to MySQL - https://news.ycombinator.com/item?id=12179222 - July 2016 (67 comments)
Why Uber Engineering Switched from Postgres to MySQL - https://news.ycombinator.com/item?id=12166585 - July 2016 (294 comments)
Thoughts on Uber’s List of Postgres Limitations - https://news.ycombinator.com/item?id=12216680 - Aug 2016 (103 comments)
A PostgreSQL response to Uber [pdf] - https://news.ycombinator.com/item?id=14222721 - April 2017 (82 comments)
Why we lost Uber as a user - https://news.ycombinator.com/item?id=12201353 - Aug 2016 (285 comments)
Uber's Move Away from PostgreSQL - https://news.ycombinator.com/item?id=12223216 - Aug 2016 (15 comments)
"Highly advanced" databases give both options, but AFAIK, MySQL/PGSQL will likely not offer this, at least for a very long time, since it requires radical changes.
On the other hand, mixing storage engines in a single db instance has operational downsides (especially re: crash-safe replication). And InnoDB is by far the dominant storage engine, and is probably unlikely to offer nonclustered indexing, so from that perspective I agree with your point.
Apparently MariaDB 10.3+ has this solution implemented, which is cool, never knew that before. I don't think there's anything equivalent in MySQL.
I made a bit of money freelancing on "my database for my LAMP stack app is corrupt!" issues by a) demonstrating that InnoDB wouldn't slow down their webapp in any measurable form and then b) trying to save and normalise as much data as possible.
Uber has not switched from Postgres used as an RDBMS to MySQL used as an RDBMS; they switched from Postgres used as an RDBMS to MySQL used as the key-value storage layer of a homegrown sharded non-relational database.
This has pretty much no bearing on anyone using Postgres or MySQL in a reasonable way.
Well-known MySQL uses such as Facebook TAO and this Uber Schemaless are typically abstractions built on top of MySQL, which means the schemas are pretty much static and they don't feel the schema migration pain.
For a typical RoR startup that relies on an RDBMS, please, stay away from MySQL.
[1] Yes, I know about the INSTANT ADD COLUMN patch from Tencent Games that landed in MySQL 8.0, and which has had major bug fixes in at least 8.0.14 and 8.0.20.
[2] A side effect is that MySQL now has a thriving ecosystem of schema migration tools (pt-osc, lhm, gh-ost), while Postgres has none, and there are situations where there is indeed no choice but to rewrite the table, e.g. changing a column type from int to bigint.
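For illustration, roughly how the two cases differ in MySQL 8.0 (table and column names here are made up):

    -- Adding a column can be instantaneous (metadata-only) since 8.0:
    ALTER TABLE events ADD COLUMN source VARCHAR(64), ALGORITHM=INSTANT;

    -- But changing a column type, e.g. int -> bigint, still requires a full
    -- table rebuild (ALGORITHM=INSTANT/INPLACE is refused), which is where
    -- pt-osc/gh-ost style tools come in:
    ALTER TABLE events MODIFY COLUMN id BIGINT NOT NULL, ALGORITHM=COPY;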
* Facebook had extremely frequent schema changes, and powerful declarative schema management automation to support this
* The TAO (or more correctly "UDB") use-case supported using many separate tables, not one giant generic key/value table as people seem to assume
* The non-UDB MySQL use-cases at Facebook, in combination, are still larger than the vast, vast majority of all other companies' databases. These non-UDB databases use a wide range of MySQL's functionality. The frequent claims that "Facebook used MySQL just as a dumb K/V store" are absolutely incorrect and have never been correct.
Linkbench is unmaintained and does not attempt to mirror the entirety of UDB, just its access patterns: point lookups by PK, and range scans over a secondary index. A fixed access pattern is not the same thing as having no schema changes.
Even putting column changes aside, the entirety of UDB was migrated from InnoDB to MyRocks in 2017, which is essentially a schema change across every single UDB table in every single UDB shard.
And besides, as I mentioned already, the non-UDB MySQL use-cases at Facebook are larger than the vast majority of companies' databases -- larger than the next-largest US social network, even. The non-UDB tiers had dozens of schema changes every single day.
As rwultsch correctly mentioned, Facebook's extreme agility with schema changes is directly what inspired me to create https://www.skeema.io, an open source project offering declarative schema change management. It's used by GitHub, Twilio, and a number of other well-known companies.
Please stop making incorrect statements based on things you have no direct experience with.
Until then, sorry, we will keep advising people to stay away from MySQL (and thus indirectly, skeema), because "long wait and potential incident from schema migration" is just not something that should come up during sprint planning.
> Even putting column changes aside, the entirety of UDB was migrated from InnoDB to MyRocks in 2017, which is essentially a schema change across every single UDB table in every single UDB shard.
I read about that, and it is definitely an impressive feat, but still, that doesn't answer the question, at all? That's just relying on MySQL native replication that works across different engines.
Yes!
> that would require rewriting the table via pt-osc
Facebook does not use pt-osc; they use fb-osc, originally written in PHP and released in 2010 [1] and later ported to Python in 2017 [2]. The concepts are similar to pt-osc re: core use of triggers, but the fine print has some important differences about how the new table structure is specified, when changes are applied, and how changes are made on replicas.
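Roughly, the trigger-based pattern that pt-osc and fb-osc share looks like this -- a heavily simplified sketch with a made-up table t(id, payload); the real tools handle chunking, throttling, locking and cut-over far more carefully:

    -- Build a shadow table with the desired new structure:
    CREATE TABLE _t_new LIKE t;
    ALTER TABLE _t_new ADD COLUMN extra INT NULL;

    -- Triggers mirror ongoing writes from t into the shadow table
    -- (similar triggers are created for UPDATE and DELETE):
    CREATE TRIGGER t_osc_ins AFTER INSERT ON t FOR EACH ROW
        REPLACE INTO _t_new (id, payload) VALUES (NEW.id, NEW.payload);

    -- Existing rows are copied over in small chunks:
    INSERT IGNORE INTO _t_new (id, payload)
        SELECT id, payload FROM t WHERE id BETWEEN 1 AND 10000;

    -- Finally the tables are swapped atomically:
    RENAME TABLE t TO _t_old, _t_new TO t;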
Anyway, fb-osc was used on UDB; the answer is emphatically yes.
btw I am using the past tense here because I haven't kept up with FB mysql stuff the past few years. I don't even know if UDB is still on mysql at all; it's irrelevant to the discussion because the key point here is that schema changes emphatically did occur on UDB for many years, and your original statement regarding TAO and schema changes was demonstrably false, full stop.
> If so, at what frequency?
As I already said, I do not recall! Why would I remember the exact frequency of a completely and seamlessly automated process at a company I left 6 years ago?
UDB didn't require fb-osc changes nearly as often as non-UDB, if that's what you're asking, by nature of it serializing most (not all!) object fields down to a single column. But there were definitely still cases where actual schema changes were necessary on UDB tables, as I'll say yet again, the schema was not completely uniform across all UDB tables.
What's with this obsession on the frequency, anyway? Why does this matter? Your original statement was "the schemas are pretty much static, and they don't feel the schema migration pain", and this statement was wrong. Stop moving the goalpost.
> Until then, sorry, we will keep advocating people to stay away from MySQL (and thus indirectly, skeema), because "long wait and potential incident from schema migration" is just not something that should come up during a sprint planning.
Until when? Who is "we"? You aren't making sense. First your argument was that Facebook supposedly doesn't make schema changes at all, and now you're seemingly pivoting to bashing MySQL for needing external OSC tools, even though your original comment directly acknowledged that PG has cases where lack of these tools is a major problem?
> but still, that doesn't answer the question, at all? That's just relying on MySQL native replication that works across different engines.
What's relying on native replication? Changing a table's storage engine inherently requires rewriting the entire table.
[1] https://www.facebook.com/notes/mysql-at-facebook/online-schema-change-for-mysql/430801045932
[2] https://engineering.fb.com/2017/05/05/production-engineering/onlineschemachange-rebuilt-in-python/
Thank you for answering. I stand corrected then. I also wrongly assumed that it was pt-osc, because that's what gets mentioned on your website - "This feature works most easily for pt-online-schema-change"
> UDB didn't require fb-osc changes nearly as often as non-UDB, if that's what you're asking, by nature of it serializing most (not all!) object fields down to a single column.
And thanks here as well for being willing to at least slightly concede your position.
> What's with this obsession on the frequency, anyway? Why does this matter? Your original statement was "the schemas are pretty much static, and they don't feel the schema migration pain", and this statement was wrong. Stop moving the goalpost.
It is a technical discussion, not a competition. Goalposts do get moved. It matters because after reading all the blog posts and bug reports, I have very high respect for Domas, Yoshinori, Mark, and Harrison. And I, for one, could not imagine that they would design a critical piece of Facebook infrastructure that would require frequent babysitting.
I believe you would now like to object to the word "babysitting" by claiming that the process is completely and seamlessly automated. The thing is, when a table rewrite is going on due to a schema migration, there's always a risk that the additional write operations would trigger a production incident due to replication lag, which is typically the first bottleneck being hit. The migration to MyRocks likely has made this more seamless by providing more headroom. The experimental write-set replication in MySQL 8.0 might also have improved this, although I don't think Facebook is using 8.0.
> Until when? Who is "we"? You aren't making sense. First your argument was that Facebook supposedly doesn't make schema changes at all, and now you're seemingly pivoting to bashing MySQL for needing external OSC tools, even though your original comment directly acknowledged that PG has cases where lack of these tools is a major problem?
Er, we, as in everyone else except you in this HN thread? You do realize that you are the only one defending MySQL here right? Anyway, PG has exactly one scenario where it needs a migration tool, i.e. changing the column type. This can be easily avoided as long as people are aware of the limitation, e.g. just create the table with bigint as the primary key. So, a migration tool for PG would have been nice, but I don't exactly need one. Make sense?
> What's relying on native replication? Changing a table's storage engine inherently requires rewriting the entire table.
Did you even read about how the migration was done? Firstly, some MyRocks replicas were provisioned, then they started to serve some traffic, and then eventually they got promoted to be the master. With a ton of bug fixes and performance tuning in between, to ensure that there was no regression from InnoDB. I can't take you seriously if you think that the DB engineers would carry out such a risky move as rewriting all tables by changing the storage engine with ALTER TABLE.
I said Skeema was inspired by Facebook's approach to schema change agility. It was not implemented by Facebook. It is not a Facebook project, it does not use any Facebook tech. Facebook does not use Skeema.
> there's always a risk that the additional write operations would trigger a production incident due to replication lag
fb-osc bypasses replication entirely. Read the links I provided previously. The 2010 post was written by Mark.
As I said already, fb-osc was used dozens of times per day across Facebook's mysql fleet. Its design was influenced by some of the very people you're name-dropping. It ran seamlessly as part of a self-service declarative schema change automation system.
I was a former member of the team that was directly on-call for all MySQL incidents at Facebook. I am discussing my direct personal experience here. There were certainly some particular repeat-causes of oncall misery, and plenty of oncall shifts that were 12+ hours of hell. Yet I can't recall a single major incident that was caused by online schema change during my time at Facebook.
> The experimental write-set replication in MySQL 8.0
Nothing "experimental" about that feature. As a consultant I've directly used it to speed up parallel replication at major companies that you've very likely heard of.
> You do realize that you are the only one defending MySQL here right?
That statement is demonstrably false. There are several other commenters defending mysql in this overall thread.
Anyway, I'm in good company: the corporations using MySQL make up several trillion dollars of combined market cap. If you have any s&p500 index funds, you are heavily invested in MySQL's successful use, whether you like it or not.
> have very high respect for Domas, Yoshinori, Mark, and Harrison
Yes, these four are superstars, among others. I don't understand how you say you have very high respect for them, yet you're fine with crapping all over the database technology they all spent a large chunk of their lives working on. All four previously worked for MySQL AB, Sun, and/or Oracle.
> I can't take you seriously if you think that the DB engineers would carry out such a risky move of rewriting all tables by changing the storage engine with ALTER TABLE.
Where did I say anything about doing this migration using ALTER TABLE? You keep responding to things I did not say or even imply!
I said the MyRocks migration is an example of schema change across all of UDB, in response to your claims that UDB was somehow static and did not need any schema changes.
Storage engine is part of table schema, both logically and physically. Changing storage engine is a schema change, regardless of how you accomplish it: ALTER TABLE, or trigger-based OSC tool, or RBR-based OSC tool, or old-fashioned replica swaps, or dump-and-reload as done in this case. You gloss over this by saying "some MyRocks replicas were provisioned" -- this is the schema change step, via dump-and-reload!
> It is a technical discussion, not a competition
Is it? Your approach to "technical discussion" apparently involves arguing against people's direct lived experiences; arguing about technology that you have no hands-on experience with; and arguing against strawmen points that were never made in the first place.
You keep name-dropping my former coworkers who you claim to respect, yet you post with a throwaway pseudonym.
I do not believe you are engaging in a good-faith technical discussion, so I will not be responding further.
https://www.postgresql.org/docs/8.3/hstore.html
I love Postgres just as much as anyone, but Uber's use case still seemed to be a better fit for MySQL. I was hopeful this would kickstart a renewed focus on features / architecture within the Postgres community, and I'm not certain anything resulted from this. Hope to be wrong obviously.
In short: before, the WAL was sent every minute and was always 10MB even if there were no changes. Now it's more adaptive, actually doing nothing when there are no changes and picking up quicker when changes begin.
I am surprised they don't mention this point because the replication was really unusable in PostgreSQL.
There are still spikes (write amplification) and other drawbacks from this design, but at least it doesn't shit itself under no activity.
In more recent versions there's "logical replication", which sort of "sends the queries", in that the secondary node has its own database state that does not have to be exactly identical with the primary, allowing for replication across major versions.
In my opinion though, unless you really need logical replication for some reason, stick with streaming replication. It's much easier to understand and there are fewer failure modes.
What it sends is not the queries, but a logical description of the changes to each row that were made by running the query. So an UPDATE that changes N rows would generate N changes to be applied to the corresponding rows (usually identified by primary key) on the logical replica, not a single update that had to be "re-executed".
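For anyone unfamiliar, a minimal sketch of built-in logical replication (Postgres 10+), assuming a table `users` with a primary key already exists on both sides; all names here are made up:

    -- On the publisher (primary):
    CREATE PUBLICATION app_pub FOR TABLE users;

    -- On the subscriber, which has its own independent database state:
    CREATE SUBSCRIPTION app_sub
        CONNECTION 'host=primary.example.com dbname=app user=repl'
        PUBLICATION app_pub;

    -- Row-level changes (not SQL text) now stream to the subscriber and are
    -- applied there, matched by primary key.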
Neither log shipping (copying WAL files one by one) nor streaming replication (sending a stream of WAL) works by sending queries. WAL segments are 16MB by default, and the default archive_timeout is 0, not 1 minute (and the archive timeout is not applicable to streaming replication anyway). There is also nothing "adaptive" about the replication—when there is no traffic, there will be ~no changes, and when there are changes, they will be sent to the replica.
I don't understand what the comment is suggesting used to happen in periods of no activity that made replication unusable, but it is also probably incorrect, and has nothing to do with the write amplification problem.
What's important is that the WAL was generated on a periodic basis and of a constant size. Say 16MB every minute. It's pretty much a plain file that could be stored on S3/FTP.
This had a lot of drawbacks:
- Replicas were measurably behind the current state, simply because of the built-in delay in "replication".
- It was incredibly inefficient on bandwidth and storage. Consider the time it takes to transfer large files (especially for off-site replicas) and storage costs. That further contributed to poor performance and delay.
- There could be many WAL files generated at once when there were changes happening. They would take FOREVER to be processed. It was commonplace for replicas to fall 5-10 minutes behind under what I consider to be minimal activity.
Long story short, the replication was reworked in a later version of PostgreSQL (3 or 4 years ago), the part about fixed size and fixed delay is not true anymore.
Started using Postgres a couple of years ago, and I now can't believe I ever lived without window functions, native arrays, custom types, etc.
I learned a lot from a book from one of the core contributors to Postgres - https://theartofpostgresql.com/. It has actual real world examples with realistic datasets to experiment with.
I always seem to learn about the things that would have made my life easier after I've already done things the hard way.
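As a small taste of window functions, here's the kind of query that's painful without them, using a made-up rides table: pick each rider's most recent ride.

    SELECT rider_id, started_at, fare
    FROM (
        SELECT rider_id, started_at, fare,
               row_number() OVER (PARTITION BY rider_id ORDER BY started_at DESC) AS rn
        FROM rides
    ) latest
    WHERE rn = 1;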
Don't forget that both can be indexed in Postgres! And the indexes are more powerful than what you can do with the equivalent relational layout, as they support efficient subset queries.
We did have a use case that wanted to find Bs for a given A, and using a GIN index on the array column along with the PG array contains operator served that up remarkably fast also.
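Something along these lines, with made-up names (the @> "contains" operator is what the GIN index accelerates):

    CREATE TABLE a_side (id bigint PRIMARY KEY, b_ids int[] NOT NULL DEFAULT '{}');
    CREATE INDEX a_side_b_ids_gin ON a_side USING gin (b_ids);

    -- Which As reference B 42? Served from the GIN index:
    SELECT id FROM a_side WHERE b_ids @> ARRAY[42];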
Tried MySQL a couple years later, and every day I used it I found a new reason to never use it again.
https://sql-info.de/mysql/gotchas.html
https://fromdual.com/mysql-limitations
and deciding a database that treats 1/0 as NULL and allows inserting February 31st as a valid date wasn't worth bothering with.
Now I know better than to assume everything you do in a database is transactional. :P
Because to this day I still have to deal with locking problems unless my changes are within a narrow window: last column, no on-update or on-delete, not changing type, etc.
MySQL 8 makes DDL changes atomic, but does not support a transactional mode where you could roll them back.
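For contrast, a tiny sketch of what transactional DDL buys you in Postgres (made-up table name):

    BEGIN;
    ALTER TABLE orders ADD COLUMN note text;
    -- ...a later step of the migration fails...
    ROLLBACK;  -- the ADD COLUMN is undone along with everything else

    -- In MySQL 8.0 the ALTER itself is atomic, but it implicitly commits and
    -- cannot be rolled back as part of a larger transaction.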
https://www.postgresql.org/docs/devel/btree-implementation.html#BTREE-DELETION
Bottom-up deletion is specifically designed to ameliorate what the blog post refers to as "write amplification". Testing has shown that it's very effective with many workloads.
They were trying to hire (and poach) just about anyone they could around this time. Therefore, these articles are... very shiny, compared to the actual tech applied internally (note that even though Uber is referred to in the third person here, this is on uber.com and written by an Uber employee).
I worked at Uber for a year. Schemaless was... meh. Nobody really liked using it, nobody really understood it, and you weren't really allowed to host your own instance - you had to have another internal team do it for you, which didn't help the "understanding" problem.
It smelled distinctly of "not invented here" syndrome. A number of things inside Uber worked that way - the culture was so competitive and brutal, performance reviews were always a massacre, so everyone was trying to outshine their peers (or outright climb on their backs, etc).
This resulted in a LOT of "tech" being "invented" that 1:1 did something already prominent in open source or already available as an enterprise solution (probably cheaper than paying engineers to do it). But since actually achieving it and having your name on it meant you would look better for a promotion or a bonus or whatever over a colleague, it was worth it to the individual to reinvent the wheel. Rinse and repeat over and over again.
I'm not an enemy of reinventing the wheel, mind you. But only if the new wheel works significantly better than the old one. This was rarely the case at Uber.
Postgres was still used somewhat commonly at Uber when I was there, but they were really pushing for Schemaless internally. It felt very overkill for just about everything outside the platform teams and was always, without fail, a massive pain to deal with.
Don't be fooled by these Uber engineering articles. This was PR to bolster up their OSS image to outsiders to help with hiring and poaching at the time. Things internally looked very different.
I myself was very ashamed of a blog post from a company I worked for (also SF based)... even the author of the post was a very well known open source maintainer of many libraries for a very popular programming language. Reading the posts on the blog was like... I cannot believe we lie this big... internally things were just crap, and what the blog post made look like the norm was just a side project of this person.
So, never trust companies blog posts by default.
I remember interviewing a candidate once who said he was excited about the role because of that A/B testing blog post, and I just thought - geez what a completely soulless bait and switch ploy.
Facebook, Amazon, Netflix, Google, Microsoft come to the top of my head, but someone is free to burst my bubble.
Questions get answered quickly and with a level of detail appropriate to the person asking.
Really impressive given the usual "Enterprise(TM)" level of support from most service suppliers.
EDIT: To be clear, there is no "the DB" at Uber. It was the main database flavor that the larger teams used, but they used everything at Uber, from MySQL to postgres, to mongo, etc. Sometimes with things on top, sometimes directly. For more analytical/financial things, they used HBase/Hadoop/Cassandra, or even older things like old IBM database tech from the 80's. Really weird mix of stuff, and it really depended on which high-profile engineers they hired in which part of the company.
Even this one
> The bug we ran into only affected certain releases of Postgres 9.2 and has been fixed for a long time now. However, we still find it worrisome that this class of bug can happen at all.
(rare/specific) Data Corruption bugs around master-promotion and handoff occur in every major DB. MySQL is no different, and I've personally had to track down issues in a few popular products. If you run thousands of copies of a piece of software with different workloads and hardware configurations... you're going to find bugs.
After all - how many DBs passed Jepsen on the first shot!
Permissions are also much more complex.
What the hell are schemas?
For a company the size of Uber, I don't think spending five minutes reading the documentation for createuser is a significant burden to deployment. PostgreSQL is very easy to deploy.
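For reference, once you're connected as a superuser, the whole thing boils down to something like this (names are made up; `createuser`/`createdb` are just wrappers around these statements):

    CREATE ROLE app_user LOGIN PASSWORD 'change-me';
    CREATE DATABASE app_db OWNER app_user;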
I remember this being a limitation in 2008.
That forced user creation always pushed me to MySQL; I hate having separate users for each service because you still have to manage and account for these extra accounts.
If you can install postgres, connect to it directly with some sort of root identity, then immediately create users and databases (as is the case with pretty much every mysql walk-through I've ever seen), it's not a default.
https://wiki.postgresql.org/wiki/First_steps
"The default authentication mode is set to 'ident' which means a given Linux user xxx can only connect as the postgres user xxx."
This alone is a complicated/confusing thing, because it's mixing system accounts with the db server accounts/access - and none of that is obvious, and doesn't quite map to how other databases handle things. I've never had to have matching system account names for user access in MSSQL, for example.
If MySQL actually allows administrative access out-of-the-box without any kind of special authorization, then that's a terribly insecure default.
With PostgreSQL, you have to switch to the superuser to configure things further because that's the only sane default you can have on an unconfigured system. If you can run commands as the user PostgreSQL is running as, you are "safe" to trust, and PostgreSQL will let you in.
UNIX ident authentication is also extremely convenient for local applications, since you don't even have to have a password for the account, or make the PostgreSQL server network-accessible in any way.
Oracle can do the same thing, and so can MySQL, apparently (with IDENTIFIED VIA unix_socket).
MySQL user management has its own complexity in that you have to manage "user@address" identities, and the same user at different addresses or auth methods can have different permissions. How's that "simple"? With PostgreSQL, your users will at least map to the same user regardless of how they authenticate themselves.
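For completeness, a sketch of the socket-auth equivalents on the MySQL side (made-up user/database names; MariaDB ships the unix_socket plugin, MySQL the auth_socket plugin):

    -- MariaDB:
    CREATE USER 'app'@'localhost' IDENTIFIED VIA unix_socket;
    -- MySQL:
    CREATE USER 'app'@'localhost' IDENTIFIED WITH auth_socket;

    GRANT ALL PRIVILEGES ON app_db.* TO 'app'@'localhost';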
You connect with a root account from any account, and when installed, the root account password is part of the setup process.
"and the same user at different addresses or auth methods can have different permissions"....
joe@localhost and joe@remotehost don't have to be 'the same user', in that they're not tied to a system account in any way.
Granting different privileges to joe@local and joe@remote based on where they're coming from isn't necessarily "simple", but no one claimed it was. My own response was validating that PostgreSQL user setup was somewhat confusing.
EDIT: Bringing up "mysql sucks" points when I was explaining how PostgreSQL 'create user' stuff can be confusing just reeks of whataboutism.
In fact, the process seems to be exactly the same as with MySQL: I just tried installing the MariaDB server (dnf install mariadb-server), and it didn't prompt me for an admin user; instead, I can directly connect to the database as root using sudo, so in this case it appears to be doing the exact same thing that PostgreSQL does.
It just happens to be that by default the "postgres" superuser has a corresponding "postgres" system user that can log in via OS authentication, so you need to switch to the postgres user instead of root.
EDIT: Maybe some of the confusion stems from the fact that the documentation you linked seems to assume that the database is created according to convention to run as the "postgres" user (as it usually is). If your user didn't have the required permission to switch to the postgres user, they wouldn't be able to install the database as said user in the first place.
If you install PostgreSQL as your own user (which is not a good idea if you have any other option), you will not need to switch users as you will obviously have access to the database files and can do whatever you want, anyway.
The entire point was a reply to someone saying "it's confusing". I'm pointing out how it's confusing, and you come back with either 1) MySQL is confusing too or 2) you don't think it's confusing. Then you point to documentation which you admit might be a point of confusion.
I've had people say "I installed postgres - here's the password". Then... I can't log in. Because I can't switch to the postgres user. Or they created some login that I can't use. Or something else... because it's somewhat confusing, unless you do this (postgres administration) as part of your regular/periodic work.
re: "I just installed Maria"... If someone uses common default package managers to set up mysql/Maria, and also for postgres, you'll be able to connect to mysql/Maria from any account. You'll only be able to connect to postgres if you switch to the postgres user.
Again - point of the comment was agreeing with an earlier comment that "this is confusing". You seem to acknowledge that it can be confusing.
You don't need to switch to the postgres user if you have another database user and password.
Are you talking about a situation where someone has installed a PostgreSQL server but hasn't configured PostgreSQL to allow password authentication? The server admin needs to allow that explicitly, because some distributions don't allow password authentication even on localhost by default, but honestly, it's all very well documented.
> If someone uses common default package managers to set up mysql/Maria, and also for postgres, you'll be able to connect to mysql/Maria from any account.
This is not the case on Fedora at least, since fresh out of the package the MySQL root user has no password; the only way to connect is via local system authentication as the root user.
But... there's absolutely nothing prohibiting you from running initdb as a regular user and then running the main daemon with your credentials. You are then the database owner and superuser. This type of thing is really useful for integration testing. But it's potentially useful when you don't care about the multiuser aspect and just want to have it run.
https://www.postgresql.org/docs/current/auth-pg-hba-conf.html
MySQL doesn’t have multiple SQL databases, you’ve been using multiple schemas.
MySQL: "2020-02-31? Whatever man, I'll just enter something..."
Another example of a database doing improper things would be Oracle mixing up the empty string with NULL. In Oracle, both are the same...
MySQL has a few more of those gotchas, e.g. regarding broken charsets (UTF-8 isn't 'utf8', it is 'utf8mb4', 'utf8' is an alias for 'utf8mb3' which is a broken subset). I wouldn't use MySQL for any data that was important to get back consistently. However, since Uber seems to be using some schemaless "we don't care"-layer anyways, that point is moot for the original article.
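To make the gotchas concrete, assuming a made-up table t with a DATE column d:

    -- Old permissive defaults (MySQL 5.6 and earlier): warning only,
    -- value stored as '0000-00-00':
    INSERT INTO t (d) VALUES ('2020-02-31');

    -- Modern strict defaults reject it outright:
    SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE';
    INSERT INTO t (d) VALUES ('2020-02-31');  -- ERROR 1292: Incorrect date value

    -- And for full 4-byte UTF-8 you have to say utf8mb4 explicitly:
    CREATE TABLE t2 (s VARCHAR(100)) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;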
Much of your MySQL complaint is not a MySQL issue, but a config issue.
And yes, powerful config options are good, not bad.
https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_rollback_on_timeout
"Hey, let's just not act transactional on a timeout by default"
This setting applies to lock-wait timeouts. The default value allows applications to decide on the correct course of action: either re-try just the statement that timed out (without having to re-do the previous parts of the transaction), or rollback the transaction.
The application still receives an error on the timeout regardless of this setting. The database doesn't automatically commit the previous statements in the transaction regardless of this setting.
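Concretely, with the default innodb_rollback_on_timeout=OFF and a made-up accounts table:

    START TRANSACTION;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- succeeds
    UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- blocked by another session
    -- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
    -- Only the second UPDATE is rolled back; the transaction stays open, so the
    -- application can either retry that statement or issue ROLLBACK itself.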
utf8mb4 uses more bytes than necessary for most. You get 255-character varchar index limits with utf8mb3; I think you only get 191 characters with utf8mb4.
utf8mb3 is definitely a broken subset, it's deprecated at the very least
MySQL 5.0 came out over 15 years ago.
It's like asking you for an int, you entering 2.358, and saying that's just simple.
See https://news.ycombinator.com/item?id=26272084
How long are people going to keep repeating this complaint? Literally every version of MySQL and MariaDB that allows invalid dates by default (MySQL 5.6 and older, MariaDB 10.1 and older) has reached end-of-life for upstream support from the vendor!