Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.0.7
    • Fix Version/s: 10.0.8
    • Component/s: None
    • Labels:
    • Environment:
      Debian GNU/Linux 7: 3.2.38-vs2.3.2.16-beng #1 SMP Tue Feb 12 10:48:52 GMT 2013 x86_64 GNU/Linux

      Default config with innodb_buffer_pool_size increased to 20GB and innodb_file_per_table turned on.

Description

      Each test CREATEs and DROPs 1024 tables. I've checked that CREATE time stays constant, while DROP time increases with every run.

      while true; do time perl -e 'print "CREATE TABLE foo (id int) Engine=InnoDB; DROP TABLE foo;\n" for 1..1024' | mysql test; done
      
      real	0m2.351s
      user	0m0.020s
      sys	0m0.040s
      
      real	0m2.340s
      user	0m0.010s
      sys	0m0.060s
      
      real	0m2.458s
      user	0m0.030s
      sys	0m0.050s
      
      real	0m2.657s
      user	0m0.030s
      sys	0m0.040s
      
      real	0m2.914s
      user	0m0.020s
      sys	0m0.040s
      
      real	0m3.262s
      user	0m0.020s
      sys	0m0.050s
      
      real	0m3.702s
      user	0m0.020s
      sys	0m0.040s
      
      real	0m4.187s
      user	0m0.030s
      sys	0m0.040s
      
      real	0m4.552s
      user	0m0.040s
      sys	0m0.030s
      
      real	0m4.966s
      user	0m0.020s
      sys	0m0.050s
      
      real	0m5.327s
      user	0m0.050s
      sys	0m0.030s
      
      real	0m5.669s
      user	0m0.000s
      sys	0m0.080s
      
      real	0m6.017s
      user	0m0.010s
      sys	0m0.070s
      
      real	0m6.322s
      user	0m0.030s
      sys	0m0.050s
      
      real	0m6.659s
      user	0m0.020s
      sys	0m0.070s
      
      real	0m6.872s
      user	0m0.020s
      sys	0m0.050s
      

      DROP time becomes stable once the database instance is using the maximum allowed amount of RAM. In my case the test time can rise from 2.3 seconds to over 1 minute.
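
      The plateau can be correlated with how full the buffer pool is; one way to watch it between runs is a sketch like the following, assuming the information_schema.INNODB_BUFFER_POOL_STATS table shipped with 10.0 (sizes are counted in 16KB pages):

      # Watch the buffer pool fill up between test runs.
      watch -n 5 'mysql -N -e "SELECT POOL_SIZE, FREE_BUFFERS, DATABASE_PAGES
                               FROM information_schema.INNODB_BUFFER_POOL_STATS"'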

      I've checked MariaDB 5.5.34 and PerconaDB 5.6 and they are not affected by this issue: DROP time is constant regardless of the amount of RAM used.
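
      To confirm that the growth is in DROP rather than CREATE, the same loop can be split so that the two statements are timed as separate batches; a minimal sketch, assuming the same local test database:

      # Time 1024 CREATEs and 1024 DROPs as separate batches.
      perl -e 'print "CREATE TABLE foo_$_ (id int) Engine=InnoDB;\n" for 1..1024' > create.sql
      perl -e 'print "DROP TABLE foo_$_;\n" for 1..1024' > drop.sql
      while true; do
          time mysql test < create.sql   # stays roughly constant
          time mysql test < drop.sql     # grows as the buffer pool fills
      done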

Activity

            serg Sergei Golubchik added a comment -

            Elena Stepanova, could you please check whether it's because of the new InnoDB or maybe because of the table cache changes? For example, try 10.0 with InnoDB and with XtraDB, and try an older 10.0 from before the table cache optimizations.

            elenst Elena Stepanova added a comment -

            Hi Pawel,

            Do I interpret your output correctly that it took about 2 seconds to create and drop 1024 tables at first, and then the value grew gradually? Or did you change the number of tables created per test in the command line, compared to the one used to collect the results?
            Could you please attach your cnf file?

            I'm asking because it generally takes me many times longer to create InnoDB tables (even on SSD), so I'm having difficulty reproducing this; the dynamics are very different.

            bbkr Pawel Pabian added a comment -

            > Do I interpret your output correctly, that it took between 2 seconds to create and drop 1024 tables at first, and then the value was growing gradually?

            Yes. A single test CREATEs and DROPs the same table 1024 times. The number of operations is the same for every test, and each test starts and ends with an empty database. Each subsequent test takes longer.

            > Could you please attach your cnf file?

            We don't have it anymore, but we reproduced this issue with the default config, with only innodb_buffer_pool_size increased to 20GB and innodb_file_per_table turned on. Generally speaking, the bigger the InnoDB buffer pool, the worse the DROP time you will eventually reach.

            > I'm asking because it generally takes me many times longer to create InnoDB tables (even on SSD).

            The test server has several SSDs in RAID; a typical desktop won't get anywhere near that I/O performance. That's why each test packs 1024 CREATE-DROP actions together, to make the time increase visible. Try setting the InnoDB buffer pool close to the maximum amount of RAM you have and decreasing the number of actions in each test. You should observe a significant DROP time increase between a cold database and the moment it occupies all available RAM.
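
            For example, something along these lines; the 64 iterations and the 20GB pool size are placeholders to adapt to your hardware:

            # my.cnf, with the pool sized close to physical RAM:
            #   innodb_buffer_pool_size = 20G
            #   innodb_file_per_table   = 1

            # Same loop as in the description, with fewer actions per test:
            while true; do
                time perl -e 'print "CREATE TABLE foo (id int) Engine=InnoDB; DROP TABLE foo;\n" for 1..64' | mysql test
            done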

            elenst Elena Stepanova added a comment -

            So far all my attempts to reproduce the problem have failed.

            • installed the release 10.0.7 packages from the deb repository (which I assume you are using) on Wheezy x86_64, to get the exact same default config;
            • set the buffer pool to 8 GB, which was the highest I could do;
            • made sure file-per-table is ON (it actually already is in the config);
            • reduced the number of tables to make the test more dynamic on my non-SSD-RAID machine;
            • ran the test for hours.

            I still didn't get any visible time growth between the runs, so I suppose there is something else to it, apart from the code, build and config.

            The next thing to check is whether 20 GB of buffer pool makes the critical difference. I'm going to acquire one of the benchmarking machines with a decent amount of memory to see if I can reproduce it there.

            elenst Elena Stepanova added a comment -

            I ran the test with a 40 GB buffer pool on a single disk and could not get the described trend, but when I switched to a RAID, I finally observed it. It's not as monotonic as in Pawel's description, but it is still undoubtedly growing:

            MariaDB [(none)]> select @@innodb_buffer_pool_size, @@innodb_file_per_table, @@version;
            +---------------------------+-------------------------+--------------------+
            | @@innodb_buffer_pool_size | @@innodb_file_per_table | @@version          |
            +---------------------------+-------------------------+--------------------+
            |               42949672960 |                       1 | 10.0.7-MariaDB-log |
            +---------------------------+-------------------------+--------------------+
            1 row in set (0.00 sec)
            

            Single disk (I had to reduce the number of tables per step to 64 to make the duration acceptable):

            real	0m16.266s
            user	0m0.008s
            sys	0m0.004s
            
            real	0m16.358s
            user	0m0.008s
            sys	0m0.008s
            
            real	0m21.776s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m20.991s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m21.057s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m20.110s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m20.984s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m21.123s
            user	0m0.008s
            sys	0m0.004s
            
            real	0m20.044s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m21.609s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m20.733s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m19.460s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m19.989s
            user	0m0.012s
            sys	0m0.000s
            
            real	0m20.632s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m20.594s
            user	0m0.000s
            sys	0m0.016s
            
            real	0m20.929s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m21.351s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m22.042s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m19.857s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m20.842s
            user	0m0.000s
            sys	0m0.016s
            
            real	0m21.328s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m21.250s
            user	0m0.012s
            sys	0m0.000s
            
            real	0m21.735s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m21.219s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m20.284s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m20.410s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m22.342s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m18.935s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m19.715s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m20.947s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m22.217s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m20.248s
            user	0m0.012s
            sys	0m0.000s
            
            real	0m20.498s
            user	0m0.008s
            sys	0m0.004s
            
            real	0m21.124s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m20.855s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m21.280s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m21.064s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m21.580s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m21.045s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m22.437s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m20.818s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m21.363s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m19.788s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m21.478s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m21.405s
            user	0m0.008s
            sys	0m0.000s
            
            real	0m22.389s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m21.087s
            user	0m0.008s
            sys	0m0.004s
            
            real	0m20.925s
            user	0m0.004s
            sys	0m0.012s
            
            real	0m21.346s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m21.131s
            user	0m0.008s
            sys	0m0.004s
            
            real	0m22.259s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m20.104s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m20.139s
            user	0m0.004s
            sys	0m0.008s
            
            real	0m19.996s
            user	0m0.012s
            sys	0m0.004s
            
            real	0m19.845s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m22.450s
            user	0m0.012s
            sys	0m0.000s
            
            real	0m20.469s
            user	0m0.000s
            sys	0m0.008s
            
            real	0m21.695s
            user	0m0.004s
            sys	0m0.004s
            
            real	0m20.344s
            user	0m0.004s
            sys	0m0.012s
            
            real	0m21.389s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m21.628s
            user	0m0.000s
            sys	0m0.012s
            
            real	0m21.183s
            user	0m0.000s
            sys	0m0.008s
            

            With RAID (back to the original 1024 tables per step):

            real	0m2.801s
            user	0m0.000s
            sys	0m0.048s
            
            real	0m3.806s
            user	0m0.128s
            sys	0m0.024s
            
            real	0m2.880s
            user	0m0.000s
            sys	0m0.060s
            
            real	0m4.040s
            user	0m0.052s
            sys	0m0.108s
            
            real	0m3.173s
            user	0m0.024s
            sys	0m0.040s
            
            real	0m4.466s
            user	0m0.056s
            sys	0m0.096s
            
            real	0m3.734s
            user	0m0.028s
            sys	0m0.064s
            
            real	0m6.014s
            user	0m0.104s
            sys	0m0.068s
            
            real	0m4.866s
            user	0m0.044s
            sys	0m0.100s
            
            real	0m8.620s
            user	0m0.108s
            sys	0m0.072s
            
            real	0m6.300s
            user	0m0.080s
            sys	0m0.112s
            
            real	0m11.630s
            user	0m0.080s
            sys	0m0.100s
            
            real	0m7.813s
            user	0m0.108s
            sys	0m0.100s
            
            real	0m14.523s
            user	0m0.060s
            sys	0m0.128s
            
            real	0m9.092s
            user	0m0.088s
            sys	0m0.108s
            
            real	0m17.030s
            user	0m0.076s
            sys	0m0.124s
            
            real	0m9.830s
            user	0m0.104s
            sys	0m0.104s
            
            real	0m19.210s
            user	0m0.140s
            sys	0m0.068s
            
            real	0m18.820s
            user	0m0.076s
            sys	0m0.120s
            
            real	0m20.335s
            user	0m0.096s
            sys	0m0.104s
            
            real	0m21.628s
            user	0m0.060s
            sys	0m0.120s
            
            real	0m22.007s
            user	0m0.100s
            sys	0m0.104s
            
            real	0m23.429s
            user	0m0.076s
            sys	0m0.116s
            
            real	0m23.655s
            user	0m0.096s
            sys	0m0.112s
            
            real	0m25.043s
            user	0m0.052s
            sys	0m0.136s
            
            real	0m25.193s
            user	0m0.104s
            sys	0m0.120s
            
            real	0m26.684s
            user	0m0.096s
            sys	0m0.100s
            

            Now that it has become reproducible, I should be able to find the source, as Sergei Golubchik requested.

            elenst Elena Stepanova added a comment -

            The bug existed in the InnoDB of 5.6.10 and is reproducible on MySQL 5.6.10. It was fixed in the 10.0.8 release by the merge of InnoDB 5.6.14, in the following revision:

                revno: 3965.1.13
                revision-id: sergii@pisem.net-20140201083326-wlzdm7dash8h58m8
                parent: sergii@pisem.net-20140201083259-p6b7jmxguoxucnyu
                fixes bug: https://mariadb.atlassian.net/browse/MDEV-5574
                committer: Sergei Golubchik <sergii@pisem.net>
                branch nick: 10.0
                timestamp: Sat 2014-02-01 09:33:26 +0100
                message:
                  MDEV-5574 Set AUTO_INCREMENT below max value of column.
                  Update InnoDB to 5.6.14
                  Apply MySQL-5.6 hack for MySQL Bug#16434374
                  Move Aria-only HA_RTREE_INDEX from my_base.h to maria_def.h (breaks an assert in InnoDB)
                  Fix InnoDB memory leak
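
            A quick way to check whether a given build already carries the merged fix is to look at the bundled InnoDB version; a sketch, assuming the @@innodb_version server variable:

            # A fixed 10.0.8+ server should report InnoDB 5.6.14 or later:
            mysql -e "SELECT @@version, @@innodb_version"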
            

People

    • Assignee: Unassigned
    • Reporter: bbkr Pawel Pabian
    • Votes: 1
    • Watchers: 4
