MariaDB Server / MDEV-3848

MariaDB Galera Cluster Memory Usage growing

    Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 5.5.25-galera
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment: CentOS 6.1 64-bit

      Description

      I'm evaluating MariaDB Galera Cluster and I see massive growth in memory usage compared to a standalone MariaDB installation. The cluster tests were run in a 3-node environment on CentOS 6.1 with 1 GB RAM per node. Replication works fine between the nodes, but memory usage grows continuously without ever being released, which makes the cluster practically unusable.

      3-node cluster setup running the following packages:
      MariaDB-server-5.5.25-1.x86_64
      MariaDB-compat-5.5.25-1.x86_64
      MariaDB-common-5.5.25-1.x86_64
      MariaDB-client-5.5.25-1.x86_64

      Node 1 my.cnf:

      [mysqld]
      server_id=1
      datadir=/var/lib/mysql
      user=mysql
      log-bin=mysql-bin
      log_slave_updates = 1
      binlog_format=ROW
      default_storage_engine=InnoDB
      max_allowed_packet = 100M

      wsrep_provider=/usr/lib64/galera/libgalera_smm.so
      wsrep_cluster_address=gcomm://
      wsrep_slave_threads=2
      wsrep_cluster_name=mycluster
      wsrep_sst_method=rsync
      wsrep_node_name=node1
      wsrep_replicate_myisam=1

      innodb_locks_unsafe_for_binlog=1
      innodb_autoinc_lock_mode=2
      innodb_buffer_pool_size = 8M
      innodb_additional_mem_pool_size = 4M
      innodb_log_file_size = 64M
      innodb_log_buffer_size = 8M
      innodb_flush_method = O_DIRECT
      innodb_flush_log_at_trx_commit = 1

      Nodes 2 and 3 have identical my.cnf files except for the wsrep_node_name and wsrep_cluster_address parameters (see the sketch below).
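
      For reference, a minimal sketch of how those two lines would differ on node 2, assuming node 1 is reachable at a hypothetical address (no addresses are given in the report):

      # node 2 my.cnf, wsrep lines that differ from node 1
      wsrep_cluster_address=gcomm://192.168.1.101   # hypothetical IP of node 1
      wsrep_node_name=node2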

      create database ptest;
      use ptest;
      create table ti2(c1 int auto_increment primary key, c2 char(255)) engine=InnoDB;
      insert into ti2(c2) values('abc');
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
      insert into ti2(c2) select c2 from ti2;
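
      Each INSERT ... SELECT doubles the row count, so after the 21 doubling statements the table holds 2^21 = 2,097,152 rows, and the last statement copies roughly one million rows in a single transaction. A quick sanity check (not part of the original report) would be:

      select count(*) from ti2;   -- expected: 2097152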

      node1, before the inserts:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      7133 mysql 20 0 1174m 104m 7548 S 0.0 10.5 0:00.12 mysqld

      node1, after the inserts:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      7133 mysql 20 0 1943m 676m 18m S 0.7 67.8 0:59.21 mysqld

      Nodes 2 and 3 show almost the same memory usage, so eventually all nodes will run out of memory if the inserts continue.

      Executing the same test against a standalone MariaDB server with the same InnoDB settings keeps memory usage as low as 116 MB.

      STANDALONE MARIADB

      MariaDB-common-5.5.28-1.x86_64
      MariaDB-client-5.5.28-1.x86_64
      MariaDB-server-5.5.28-1.x86_64
      MariaDB-compat-5.5.28-1.x86_64

      Before the inserts:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      2333 mysql 20 0 786m 72m 6028 S 0.0 7.3 0:00.24 mysqld

      After the inserts:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      2333 mysql 20 0 786m 116m 6960 S 0.0 11.7 0:33.72 mysqld

      What could be causing this? The MariaDB Galera Cluster guide states that memory usage should increase only slightly compared to a standalone installation.

            Activity

            elenst Elena Stepanova added a comment -

            I won't double-check it since it's consistent with my earlier observations, see MDEV-466. Assigning to Seppo for further analysis and decisions.

            seppo Seppo Jaakola added a comment -

            This test runs larger and larger transactions, and by the end the size of a single transaction is ~1M rows. Galera does not currently support arbitrarily large transactions. There are configuration variables 'wsrep_max_ws_rows' and 'wsrep_max_ws_size' to abort huge transactions before an OOM would happen.

            Huge transaction support is in the design phase and will be part of a future release.
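
            For illustration, a minimal configuration sketch of the limits mentioned above; the values are arbitrary examples, not recommendations from this issue:

            [mysqld]
            # Reject transactions whose replication writeset exceeds these limits,
            # instead of letting memory grow until OOM (example values only)
            wsrep_max_ws_rows = 131072        # max rows per writeset
            wsrep_max_ws_size = 134217728     # max writeset size in bytes (128 MB)

            With limits like these in place, the later INSERT ... SELECT statements in the test case above would be expected to fail with an error rather than producing ever-larger writesets.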

            Show
            seppo Seppo Jaakola added a comment - This test runs larger and larger transactions and in the end the size of the transaction is ~1M rows. Galera does not currently support arbitarily large transactions. There are configuration variables 'wsrep_max_ws_rows' and 'wsrep_max_ws_size' to abort huge transaction before OOM would happen. Huge transaction support is in design phase and will be part of some future release.

              People

              • Assignee: seppo Seppo Jaakola
              • Reporter: karileh Kari Lehtinen
              • Votes: 0
              • Watchers: 4
