
2015-12-16

Pulling a Single DB from mysqldump's --all-databases Backup Files

When backing up a MySQL DB, it's often quick and easy to do a simple:
mysqldump --all-databases --single-transaction | bzip2 > <filename>.sql.bz2

But then someone comes along and asks you, "Hey, you know that one blog out of 50 that you host? Well, I messed it up and need a restore." You don't want to restore the full dump, but you know you have the data. Here's a quick way to rip exactly what you need out of that file (after, of course, you extract it from whatever compression you use):

#!/bin/bash
# $1 = name of the DB you want
# $2 = input dump filename (already decompressed)
head -n40 "$2" | sed -n '/^-- MySQL/,/^-- Current Database:/p' | grep SET > "$1-dump.sql"
sed -n "/^-- Current Database: \`$1\`/,/^-- Current Database: \`/p" "$2" >> "$1-dump.sql"
That script will leave you with a file that has the SET lines from the top of the dump, followed by the contents of just the DB you're looking for. Then just run 'mysql [database_name] < [file_name]' and you'll have the data back the way they wanted it!
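For example, the whole round trip looks something like this (the backup filename and the database name "blog07" are made up, and the script above is assumed to be saved as pulldb.sh):
$ bunzip2 -k 20151216-alldbs.sql.bz2
$ ./pulldb.sh blog07 20151216-alldbs.sql
$ mysql -p -u root blog07 < blog07-dump.sql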

2013-11-22

Writing a C/C++ App Run from Apache

A few weeks back I started wondering about ways to improve the performance of certain areas of my site (as I do periodically), and while I came across examples of PHP compilers in their many forms, I started wondering if I should just go whole hog and write some parts of the application right in C/C++. Sure, everyone is jumping on nodejs, but I've worked in both C and C++ before, and while it has been some time, I thought it might be an interesting exercise to try writing some of the back-end processes in either one. After all, I'm running MySQL as my DB, and there are MySQL C headers and an API, so it should be fairly straightforward. I quickly found that some of this documentation was lacking: sure, I could create a C/C++ executable that would give me info from my DB on demand, but getting it back to the requester was a pain. Digging around and experimenting led me to a few things I'd like to share here.
#include <iostream>
using namespace std;

void MIMEHeader() {
     // The blank line after Content-type ends the headers; everything after it is the page.
     cout << "Content-type: text/html" << endl << endl << endl;
     cout << "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">" << endl;
     cout << "<HTML xmlns='http://www.w3.org/1999/xhtml' xml:lang='en' lang='en'>" << endl;
}

int main() {
     MIMEHeader();
     cout << "<head></head><body>" << endl;
     cout << "Hello World!" << endl;
     cout << "</body></HTML>" << endl;
     return 0;
}
First off, yes, you really do need those three "endl"s on the first line: the first finishes the Content-type line, and the blank line that follows tells Apache you really mean this is HTML and it should serve it. The next two cout lines set up the doctype and the opening HTML tag, getting things ready for you to output your info. Calling this function at the top of main gets everything ready for output; in the example here, we tack on the proverbial "Hello World".
The next step is to get Apache to hand the output of our program to the end user's web browser. You'll need a block like this in the affected sections of your Apache config:
<Directory "/path/to/your/website/x">
     AllowOverride None
     Options ExecCGI
     Order allow,deny
     Allow from all
</Directory>
ScriptAlias /x /path/to/your/website/x
In the example, I used /x as the subdirectory for compiled code, so a program there resolves to "http://example.com/x/programName".
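As a quick sanity check, building the hello-world example above and dropping the binary into that directory is all it takes (the program name "hello" and the paths are just placeholders):
$ g++ -o hello hello.cpp
$ cp hello /path/to/your/website/x/
$ curl http://example.com/x/hello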

At this point you're good to go, and boy is it FAST. In my non-exhaustive testing, a simple app involving a MySQL query takes less than 1ms more than the actual MySQL query itself. Obviously, your results may vary. Here's the example C++ I wrote as a starting point before I added fancier API-type features to it. I compiled it on CentOS 5.4 with the stock CentOS-distributed MySQL 5.1:

// Compile this using:
// g++ -o mysqltest mysqltest.cpp -L/usr/lib64/mysql -lmysqlclient -I/usr/include/mysql && ./mysqltest
//

#include <iostream>
#include <mysql/mysql.h>
using namespace std;

MYSQL *connection, mysql;
MYSQL_RES *result;
MYSQL_ROW row;
int query_state;

void MIMEHeader() {
     cout << "Content-type: text/html" << endl << endl << endl << "<HTML><BODY><PRE>";
}

void Footer() {
     cout << "</PRE></BODY></HTML>";
}

int main() {
     MIMEHeader();
     mysql_init(&mysql);
     connection = mysql_real_connect(&mysql, "localhost", "myUs3r", "myPa55word", "myBookmarks", 0, 0, 0);
     if (connection == NULL) {
          cout << mysql_error(&mysql) << endl;
          return 1;
     }
     query_state = mysql_query(connection, "select * from links where id like '%blogger.com';");
     if (query_state != 0) {
          cout << mysql_error(&mysql) << endl;
          return 1;
     }
     result = mysql_store_result(connection);
     cout << "<table>" << endl;
     while ((row = mysql_fetch_row(result)) != NULL) {
          cout << "<tr><td>" << row[0] << "</td><td>" << row[1] << "</td><td>" << row[2] << "</td></tr>" << endl;
     }
     cout << "</table>" << endl;
     Footer();
     mysql_free_result(result);
     mysql_close(connection);
     return 0;
}

2013-09-23

Setting Up MySQL Replication in a Loop of 3+ Cluster Nodes

I've been running a multinode MySQL replication loop for a while and thought it'd be useful to write up how replication is synced between the peers. I'm writing this up as if there are three nodes, Adam, Ben, and Charlie, in a loop, but you can do this with any number of nodes. In our setup Adam and Ben are in our primary location, with Charlie sitting in our DR site. This means Charlie is always on but doesn't get many queries on a regular basis. Adam and Ben have heartbeat set up as a failover pair so that when Adam takes a vacation (downtime), everything fails over to Ben and continues to run. We do this simply with a floating IP.

1. Decide which node to start with

Because of this setup, prior to doing a resync, it's important to connect to the node that currently has the active IP and start things from there. To find out which node that is, connect to the nodes and run:
$ /sbin/ip addr
You'll see something like:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:82:29:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.56/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.59/24 brd 192.168.1.255 scope global secondary eth0
Note how this one has both the primary eth0 address and the "secondary eth0" address on it. Also, we know our floating IP is 192.168.1.59, so we know it's here. Because of that, this node has the latest info and will thus be the beginning of our resync process.
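If you just want a quick yes/no on each node, a grep for the floating IP works too (assuming 192.168.1.59 is your floating IP):
$ /sbin/ip addr | grep 192.168.1.59
    inet 192.168.1.59/24 brd 192.168.1.255 scope global secondary eth0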

2. Stop the Slaves

To stop the slaves, connect to mysql as follows:
$ mysql -p -u root

mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)
I recommend going ahead and stopping the slave on the other two nodes as well at this point.
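From the shell, that's a quick one-liner on each of them:
$ mysql -p -u root -e 'stop slave;'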

3. Get a fresh mysqldump

To start the resync process, we need to take a dump of all databases with --master-data in a single transaction:
/bin/mysqldump -p -u root --all-databases --single-transaction --master-data | bzip2 > 20120912-midsync-`hostname`.sql.bz2
This will make sure that we get all the info we need in the backup. Without the master data, we'd have to know exactly where the binary log was when we took the backup in order not to miss any updates on restore, which is almost impossible when running against an active loop. When this is done, transfer the backup from Adam to Ben. Then, if you haven't already, stop the slaves around the circle.
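If you're curious what --master-data actually buys you, peek at the dump; it embeds an active CHANGE MASTER statement near the top (the file name and position below are only illustrative):
$ bzgrep 'CHANGE MASTER' 20120912-midsync-adam.sql.bz2
CHANGE MASTER TO MASTER_LOG_FILE='adam-bin.000017', MASTER_LOG_POS=106;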

4. Transfer dump to the next node in the circle

You can use scp or whatever you'd like; just get the backup over there.
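For example (the host and destination path are whatever fits your environment):
$ scp 20120912-midsync-adam.sql.bz2 ben:/root/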

5. Import the dump

Then, on the first target (Ben in this case), bunzip2 the file and import it:
$ mysql -p -u root < 20120912-midsync-adam.sql
Because we included the master data, this will automatically set the master position correctly to resume the slave... but don't start it yet.

6. Rinse, repeat

We're going to continue around the circle, making a backup of each node, passing it on to the next, and then restoring. In our case, we need a fresh backup of Ben, the same way we did the original Adam backup. Transfer that one to Charlie, bunzip2 it, and import it with the same mysql command. If you have more nodes, continue around until all of them have fresh restores.
If you have multiple primary or online nodes, you'll want to 'start slave;' right after you import the backup. This will make sure they don't have stale data or data that will conflict with another node's data.
For us, we're going to take advantage of the fact that Charlie is in our DR site and thus doesn't have much written to it, but still we'll need to do this quickly.
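Spelled out, each hop around the loop looks something like this (Ben to Charlie shown; the filenames are just examples):
ben$ /bin/mysqldump -p -u root --all-databases --single-transaction --master-data | bzip2 > 20120912-midsync-ben.sql.bz2
ben$ scp 20120912-midsync-ben.sql.bz2 charlie:/root/
charlie$ bunzip2 20120912-midsync-ben.sql.bz2
charlie$ mysql -p -u root < 20120912-midsync-ben.sql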

7. The Last Hop

Okay, you've got all of them up to date, but we haven't made that last hop where the first node pulls from the last (Adam pulling from Charlie, in our case). Instead of the normal restore process, we're going to switch it up a bit:
$ ls
20120912-BenBackup.sql.bz2
$ bunzip2 20120912-BenBackup.sql.bz2
$ echo "reset master;" >> 20120912-BenBackup.sql
$ mysql -u root -p < 20120912-BenBackup.sql
Appending 'reset master' will reset the master log files and position counter. Now, bring up this node's terminal and Adam's terminal next to each other. Log in to Adam's MySQL prompt and type "CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=###;", but don't hit enter. On Charlie, at the MySQL prompt, type 'show master status;' and hit enter. Copy the file name in the first data column over to your command on Adam. Then move your cursor to the ###s, run 'show master status' again, copy the position number over, and hit enter as quickly as you can do it accurately.
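Side by side, the two prompts look something like this (the file name and position are illustrative):
On Charlie:
mysql> show master status;
+--------------------+----------+--------------+------------------+
| File               | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| charlie-bin.000001 |      107 |              |                  |
+--------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

On Adam:
mysql> CHANGE MASTER TO MASTER_LOG_FILE='charlie-bin.000001', MASTER_LOG_POS=107;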

8. Gentlemen, start your slaves!

Once you have it set, run "start slave" on Adam, then on Charlie, and finally on Ben. It's important that the log position you entered on Adam matches what Charlie reported.

9. Check your status

When the slave is started, it pulls any changes since the backup's restore (or master reset) from the node behind it (Charlie pulls from Ben, for instance). By working backward, we have a better chance of the replication circle staying in sync, and once they're all up, you're almost done. Give it a minute and run "show slave status\G" on each. You should see something similar to this:
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.57
                  Master_User: repluser
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: ben-bin.000022
          Read_Master_Log_Pos: 888045912
               Relay_Log_File: charlie-relay-bin.000002
                Relay_Log_Pos: 81957
        Relay_Master_Log_File: ben-bin.000022
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 888045912
              Relay_Log_Space: 82121
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 2
1 row in set (0.00 sec)