Details
Type: Task
Status: Open
Priority: Minor
Resolution: Unresolved
Fix Version/s: None
Component/s: None
Labels: None
Description
Take the following example:
DROP TABLE IF EXISTS t1;
CREATE TABLE t1(t TIMESTAMP);
INSERT INTO t1 VALUES('2014-05-15 17:30:00.200');
SELECT * FROM t1 WHERE t < '2014-05-15 17:30:00.100';
With MySQL 5.5 it returns an empty set, as the sub-second part is ignored / truncated on both sides. With MariaDB (and with MySQL 5.6, too), which have sub-second support, it returns a result row instead, because with MySQL 5.5 the comparison comes down to
.... 17:30:00 < .... 17:30:00
while with sub-second support it is evaluated as
... 17:30:00.000 < ... 17:30:00.100
because only the first timestamp is truncated when it is assigned to the TIMESTAMP column (which defaults to TIMESTAMP(0) and therefore drops the sub-second part), while the literal timestamp constant is taken "as is", without truncation.
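A quick check of the stored value illustrates this (a minimal sketch; the expected output assumes, as described above, that the column defaulted to TIMESTAMP(0)):
SELECT t FROM t1;
-- expected: 2014-05-15 17:30:00   (the .200 from the INSERT has been dropped)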
Feature request: wouldn't it make more sense to truncate the more precise timestamp to the precision of the less precise one before doing the comparison?
One problem with the current behaviour arises when replicating from MySQL 5.5 to MariaDB 5.5, if an application generates queries with sub-second timestamps and relies on the sub-second part simply being ignored in all cases ...
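One possible workaround, sketched here and not part of the original report (the table name t2 is made up for illustration): declare the column with explicit sub-second precision, so that both sides of the comparison keep their fractional part, which also yields the empty result for this particular query:
DROP TABLE IF EXISTS t2;
CREATE TABLE t2(t TIMESTAMP(3));
INSERT INTO t2 VALUES('2014-05-15 17:30:00.200');
-- both the stored value and the literal now keep their fractional seconds,
-- so 17:30:00.200 < 17:30:00.100 is false and no row is returned
SELECT * FROM t2 WHERE t < '2014-05-15 17:30:00.100';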
Activity
Thinking about it again, the current behaviour is in line with how comparing an INT to a FLOAT works: in that case the FLOAT is not implicitly cast down to an INT either, so when comparing
3 < 3.14
the result is true, while it would be false if the floating-point value 3.14 were cast down to the integer value 3 first ...
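For illustration (a small sketch, not from the original comment), the analogy can be reproduced directly; CAST(3.14 AS SIGNED) yields the integer 3 here:
SELECT 3 < 3.14;                   -- 1 (true): the fractional part takes part in the comparison
SELECT 3 < CAST(3.14 AS SIGNED);   -- 0 (false): the value is cast down to 3 before comparing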
Maybe a configuration setting that disables microsecond support altogether would be a better way to solve the replication compatibility problem?