Our MySQL cluster configuration is finished; now it's time to test it. On the cluster management server (loadb1.example.com), run the cluster management client ndb_mgm to check whether the cluster nodes are connected:
loadb1.example.com:
ndb_mgm
You should see this:
-- NDB Cluster -- Management Client --
ndb_mgm>
Now type show; at the command prompt:
show;
The output should be like this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.101  (Version: 5.0.19, Nodegroup: 0, Master)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>
If all your nodes are listed as connected, everything is working correctly.
Type
quit;
to leave the ndb_mgm client console.
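If you want to script this check rather than use the interactive console, ndb_mgm can also run a single command non-interactively with its -e option. Here is a minimal sketch of how you could count disconnected data nodes; the here-document below is sample output inlined only so the snippet is self-contained, whereas on the management server you would pipe the real ndb_mgm output into grep instead:

```shell
# Hedged sketch: count data nodes that ndb_mgm reports as "not connected".
# The here-document stands in for real output; on loadb1.example.com you
# would run something like:  ndb_mgm -e show | grep -c 'not connected'
grep -c 'not connected' <<'EOF'
[ndbd(NDB)]     2 node(s)
id=2    (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)
EOF
```

A result of 0 means all data nodes are connected; anything greater than 0 means at least one node is down.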
Now we create a test database with a test table and some data on sql1.example.com:
sql1.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable VALUES (1);
SELECT * FROM testtable;
quit;
(Have a look at the CREATE statement: we must use ENGINE=NDBCLUSTER for all database tables that we want to be clustered! If you use another engine, clustering will not work!)
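If you are ever unsure whether an existing table is actually clustered, you can ask MySQL for its storage engine via information_schema (available from MySQL 5.0 on). This is a hedged sketch, to be run against a live MySQL server such as sql1.example.com:

```shell
# Hedged sketch: check which storage engine testtable uses.
# A clustered table should report ENGINE = ndbcluster.
mysql -u root -p -e \
  "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES \
   WHERE TABLE_SCHEMA = 'mysqlclustertest' AND TABLE_NAME = 'testtable';"
```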
The result of the SELECT statement should be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
Now we create the same database on sql2.example.com (yes, we still have to create it, but afterwards testtable and its data should be replicated to sql2.example.com because testtable uses ENGINE=NDBCLUSTER):
sql2.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
The SELECT statement should return the same result as before on sql1.example.com:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)
So the data was replicated from sql1.example.com to sql2.example.com. Now we insert another row into testtable:
sql2.example.com:
INSERT INTO testtable VALUES (2);
quit;
Now let's go back to sql1.example.com and check if we see the new row there:
sql1.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
You should see something like this:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)
So both MySQL Cluster nodes always have the same data!
Now let's see what happens if we stop node 1 (sql1.example.com). Run
sql1.example.com:
killall ndbd
and check with
ps aux | grep ndbd | grep -iv grep
that all ndbd processes have terminated. If you still see ndbd processes, run another
killall ndbd
until all ndbd processes are gone.
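The kill-and-recheck cycle above can also be sketched as a small loop (pgrep -x matches the exact process name; if no ndbd process is running, the loop body never executes and the script finishes immediately):

```shell
# Hedged sketch: repeat killall until no ndbd process is left.
# pgrep -x ndbd succeeds only while an ndbd process exists.
while pgrep -x ndbd > /dev/null; do
    killall ndbd
    sleep 1
done
echo "no ndbd processes left"
```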
Now let's check the cluster status on our management server (loadb1.example.com):
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, issue
show;
and you should see this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>
As you can see, sql1.example.com is not connected anymore.
Type
quit;
to leave the ndb_mgm console.
Let's check sql2.example.com:
sql2.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
The result of the SELECT query should still be
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)
Ok, all tests went fine, so let's start our sql1.example.com node again:
sql1.example.com:
ndbd