2015-12-16

Pulling Single DB from Mysqldump's --all-databases backup files

When backing up a MySQL DB, it's oftentimes quick and easy to do a simple:
mysqldump --all-databases --single-transaction | bzip2 > <filename>.sql.bz2

But then someone comes along and asks you "hey, you know that one blog out of 50 that you host? Well, I messed it up and need a restore." You don't want to restore the full dump, but you know you have the data. Here's a quick way to rip exactly what you need out of that file (after, of course, you extract it from whatever compression you use):

#!/bin/bash
# $1 = requested DB name, $2 = input dump filename
head -n40 "$2" | sed -n '/^-- MySQL/,/^-- Current Database:/p' | grep SET > "$1-dump.sql"
sed -n "/^-- Current Database: \`$1\`/,/^-- Current Database: \`/p" "$2" >> "$1-dump.sql"
That script will leave you with a file that has the SET lines from before and after the dump, as well as the contents of just the DB you're looking for. Then just run 'mysql [database_name] < [file_name]' and you'll have the data back the way they wanted!
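
For example, with hypothetical filenames (pulldb.sh being the script above, myblog the database you need back):
bunzip2 -k all-databases.sql.bz2        # -k keeps the compressed original around
./pulldb.sh myblog all-databases.sql    # writes myblog-dump.sql
mysql myblog < myblog-dump.sql          # restore just that one database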

2015-12-10

How to add custom file extension support in Visual Studio Code for Linux

Microsoft has released Visual Studio Code for Linux (and OSX and Windows) for free (https://code.visualstudio.com/). It's not a bad editor overall, but one thing irked me. I do a lot of PHP work here, and sometimes the include files have the extension .inc instead of .php ... Well, VSCode doesn't seem to think those are PHP and therefore doesn't highlight them. You'd think it'd read that from the <?php at the top, or infer it from the format, or have some sort of menu to tell it 'treat this as PHP', but it doesn't. However, we can fix that, and NOT just for PHP!

Open up a terminal and go to your VSCode install directory. Doing an 'ls' should look something like this:
$ ls
Code            content_shell.pak  icudtl.dat        libgcrypt.so.11  libnode.so  
libnotify.so.4  locales            natives_blob.bin  resources        snapshot_blob.bin

Now, 'cd' into your resources/app/extensions/ directory and you'll see all sorts of extensions. For me, it looks a little something like this:
$ ls
bat            ini            perl               theme-monokai
clojure        jade           php                theme-monokai-dimmed
coffeescript   java           powershell         theme-quietlight
cpp            javascript     python             theme-red
csharp         less           r                  theme-solarized-dark
csharp-o       lib.core.d.ts  ruby               theme-solarized-light
css            lua            rust               theme-tomorrow-night-blue
declares.d.ts  make           shaderlab          tsconfig.json
docker         markdown       shellscript        typescript
fsharp         mono-debug     sql                vb
go             node-debug     swift              vscode-api-tests
groovy         node.d.ts      theme-abyss        xml
html           objective-c    theme-kimbie-dark  yaml

Now, most of those are directories, and they have files under them... the one we care about is each language's package.json ... Open that up in your favorite text editor (like vim) and follow the JSON object, looking for the "extensions" line under the "contributes"->"languages" node... For instance, Python says:
"extensions" : [ ".py", ".rpy", ".pyw", ".cpy", ".gyp", ".gypi" ], 
php says:
"extensions": [ ".php", ".phtml", ".ctp" ],

etc... Just add your desired extension to the array (for me, I added the text ',".inc"') and save it. Restart VSCode and you've got nicely syntax-highlighted text!
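
For instance, the PHP line from above ends up as just the original list plus the new entry:
"extensions": [ ".php", ".phtml", ".ctp", ".inc" ],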

Update!

As of VSCode 1.1.0+, you can actually add these associations to your settings.json. To get there, go to "Preferences->User Settings". It'll open up the big (or small) JSON object it reads in for your settings. You'll want to add these lines:

{
  ...
  "files.associations": {
    "*.inc": "php"
  }
}

Save it and you're good to go! (you may need to restart VSCode)

2015-10-05

Rescanning for drive changes in CentOS and Ubuntu

As we now move to a world where most things are virtualized, I've found it repeatedly useful to change the size of a Virtual Hard Drive in VMWare and have it show up in the virtualized OS as the updated size. I've also found that Ubuntu does this slightly differently than CentOS... but in either case it works:

Ubuntu

$ echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan

CentOS

$ echo "1" > /sys/bus/scsi/devices/2\:0\:0\:0/rescan 

Either way, they'll both give you some great data in the dmesg log:
$ dmesg|tail
sd 2:0:0:0: [sda] 157286400 512-byte logical blocks: (80.5 GB/75.0 GiB)
sd 2:0:0:0: [sda] Cache data unavailable
sd 2:0:0:0: [sda] Assuming drive cache: write through
sda: detected capacity change from 26843545600 to 80530636800
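
By the way, if the VM has several disks and you're not sure which SCSI address changed, you can simply rescan all of them (a quick sketch using the same sysfs paths as above; run as root):
for dev in /sys/class/scsi_device/*/device/rescan; do
    echo 1 > "$dev"    # ask the kernel to re-read each device's size
done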

You'll want to make sure you have partprobe installed. It seems to come with the default Ubuntu install on v14.04+... in CentOS you'll have to "yum install parted" to get it. Simply make your fdisk changes and then run partprobe on the device, in the above case /dev/sda.
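
So the tail end of a resize looks something like this (using /dev/sda from the dmesg output above):
$ fdisk /dev/sda       # make your partition changes
$ partprobe /dev/sda   # re-read the partition table without a reboot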

Happy Re-Scanning!

2015-09-24

Convert Unix Timestamps to useful Date Stamps from a pipe

I've found numerous times that I run into files in a format like:
[1443094172] [29058] SERVICE ALERT: imgserver03;Root File System;UNKNOWN;SOFT;1;ERROR: hrStorageDescr Table : No response from remote host '192.168.90.64'.
[1443094462] [29058] SERVICE ALERT: imgserver03;Root File System;OK;SOFT;2;Disk OK - / TOTAL: 15.748 Go USED: 19% : 3.019 Go

While I'm not that bad with reading timestamps... it takes some temporal awareness and when it's 6am and I haven't had my coffee yet, that's just too much cognitive power for me to conjure up. So I made the following script that takes info in from a pipe, looks for timestamps, and converts them over.
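
A minimal version goes something like this (a sketch of the idea; the exact script is in the Gist linked at the end of this post):

#!/bin/bash
# t2d: read log lines on stdin and convert a leading [epoch] timestamp
# into a human-readable date stamp (GNU date assumed, for -d @epoch)
while IFS= read -r line; do
  if [[ $line =~ ^\[([0-9]+)\](.*)$ ]]; then
    printf '%s%s\n' "$(date -d "@${BASH_REMATCH[1]}" '+%Y-%m-%d %H:%M:%S')" "${BASH_REMATCH[2]}"
  else
    printf '%s\n' "$line"
  fi
done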


Save that in a file, chmod the file +x, and do something like:
cat mylogfile.log|./t2d

You can stick greps and such in there too, just as long as the timestamp doesn't get corrupted. From the above log lines, I get:
$ cat mylogfile.log|./t2d
2015-09-24 07:29:32 [29058] SERVICE ALERT: imgserver03;Root File System;UNKNOWN;SOFT;1;ERROR: hrStorageDescr Table : No response from remote host '192.168.90.64'.
2015-09-24 07:34:22 [29058] SERVICE ALERT: imgserver03;Root File System;OK;SOFT;2;Disk OK - / TOTAL: 15.748 Go USED: 19% : 3.019 Go

I've also added this script in a public Gist on github.com: https://gist.github.com/thejml/7af9f4b56ef9b11e612f

2015-09-16

Replace OSX Yosemite "Photos" App with Something Better

I'm the proud owner of a fairly ancient Canon EOS Digital Rebel XT... I've had it for nearly 9 years now, and it's still working fairly well... but after moving to OS X I've found that the handling of the camera is less than optimal. Now, keep in mind that the XT (and many of its sisters) use CF cards. There's no CF slot on a modern Mac (or any other computer, for that matter), so short of getting and carrying around a CF-to-USB adapter, I use the USB Mini cable that came with the camera. It hasn't failed me yet, and it continues to transfer well...

Until you plug it in to OS X Yosemite, when you get quickly acquainted with "Photos". Now, I've had my Mac a bit, and that included using "iPhoto" prior to "Photos", but while I like iPhoto better in some ways and Photos in others, I've consistently thought "Why can't I just browse this camera in the Finder?" or even "Why can't I just tell it to dump the photos on this camera to a folder so I can look through them... do they have to be in an Album?".

You see, the problem I've found with both Photos and iPhoto is that they want to manage and hoard ALL my photos. While this is a worthy goal, one that many other photo managers have attempted, it paints me into a corner and severely limits me for a few reasons:
  • I have about 300GB of photos so far, and a late 2013 MBP Retina with a 256GB SSD
  • I'm only going to get more photos and those photos are only going to get bigger with higher megapixel cameras
  • I'd like to not be tied to a specific platform of organization, lest I decide to migrate later
  • I have a NAS on which I store my photos, and have configured offsite backup for multiple directories, both to another NAS at a relative's house, and to AWS S3 Glacier.
  • iPhoto and Photos don't really like operating over a network FS mount
  • If I import 8GB of photos, delete 6GB of the ones I didn't like (yeah, that happens sometimes), and check my free space, it still shows 8GB used. In fact, neither of these programs really likes to free up space. Obviously it's so you can undelete, and so you can do the cool editing in there (they've got great Adobe Aperture-fighting abilities), and they keep copies of the original and every edit so you can revert. All this uses lots of disk space, and well, there's that darn 256GB SSD.
  • They put the photos in hard-to-find, hard-to-browse areas on disk
  • The "Show in Finder" and "Show Original in Finder" options, for some reason, don't always work?!

Anyway, I could probably keep going. I'm not saying they're bad, but they're not really good either. Maybe there's something good in the App Store, though a quick glance didn't turn anything up... but really I just want my photos as flat files in the Finder like any other normal OS. Well, after much googling I found this, and decided to write it down in an easy-to-look-up blog post.

  1. Connect camera
  2. Get out of that darn Photos/iPhoto app
  3. Hit ⌘+SpaceBar to bring up Spotlight, type "Image Capture" and Hit Enter when it finds it
  4. You should now see your camera listed under Devices, your photos on the right, and the ability to select them individually, or leave them all unselected and hit "Import All" at the bottom after changing the drop down to where you like.

  5. Most importantly, however, you can click the little icon in the bottom left that looks like a triangle in a window and find a hidden gem in the corner letting you define what happens when you connect this camera in the future. "Image Capture" means it will open this app.


And there you go, you have now made this simple program, or another program of your choice, come up. I've stuck with "Image Capture", as I've found that "Import to Folder" is basically what I wanted. I can then go through and work with those photos. In fact, if I'm not going to delete them from the camera, I can even exclude that folder from "Time Machine" so there's no trail if I decide they all suck.

2015-06-08

Writing Upstart Scripts

Upstart scripts are a great way to deal with starting and stopping system daemons in Ubuntu and CentOS 6, as well as many other flavors of Linux. While later replaced by systemd, Upstart allows quite a bit of customization when creating init scripts, without as much of the hassle and bash-programming knowledge required by init.d System V scripts. In this tutorial, I'll show you how to create one for the Kibana 4.x executable distributed by Elastic.co, the makers of ElasticSearch and LogStash. If you haven't messed with Kibana, I highly recommend it alongside LogStash and fluentd, but that's another tutorial. Any executable could be substituted, because Kibana itself is simply a binary distribution.

There are many extra potential pieces in upstart scripts, but we're going to start with what we need for a simple start/stop script, which will run upon system boot and shut down properly on the flip side. All Upstart init scripts live in the /etc/init/ directory and end with '.conf'. This one is simply "kibana.conf". Consider the following config:
# cat /etc/init/kibana.conf
description 'Kibana Service startup'
author 'Joe Legeckis'
env NAME='kibana'
env LOG_FILE=/var/log/kibana.log
env USER=nginx
env BASEDIR=/var/www/kibana/
env BIN=/var/www/kibana/bin/kibana
env PARAMS=''
start on started network
stop on stopping network
# Respawn in case of a crash, with default parameters
respawn
script
 # Make sure logfile exists and can be written by the user we drop privileges to
 touch $LOG_FILE
 chown $USER:$USER $LOG_FILE
 date >> $LOG_FILE
 cd $BASEDIR
 exec su -s /bin/sh -c 'exec "$0" "$@"' $USER -- $BIN $PARAMS >> $LOG_FILE 2>&1
end script

post-start script
 echo "app $NAME post-start event" >> $LOG_FILE
end script

It starts off with simple human-readable information about the job we're defining: the description and author. After that, you'll see a number of environment variables defined as "env NAME=value". These are sometimes used by the script/binary itself, and other times, as seen here, used by the upstart script later on to execute said script/binary. In this case, we're defining the name, the log file, the user to run as, the base dir where Kibana lives, the binary executed to run Kibana, and any additional parameters.

Next, we describe when it should start and stop. In this case, and in most cases, those two 'start on'/'stop on' lines will be accurate, but more detail can be provided if you need to chain things together, or make sure that something stops or starts according to another script's timing. Here we're simply starting up after the network starts successfully, and stopping prior to the network stopping.

We then define what should happen after a crash by simply specifying that the script should respawn. Importantly, there is a default limit (which varies from system to system) determining how many times, and with what delay, a failed or prematurely exited script is respawned. 'respawn' does not check the exit value of the script: if you didn't ask the script to stop, it will run the respawn process and start it back up! It may be prudent to add a line after this:
respawn limit COUNT INTERVAL | unlimited

e.g.
respawn limit 5 10

The example line will restart the process up to 5 times, with 10s in between a failure and the next execution. This will help prevent rapid respawn loops. Specifying 0 (as in zero) for the count will restart it an unlimited number of times.

And now we get into the meat of the script. Between "script" and "end script" is where you put exactly what you want to run every time the daemon starts up. In this case, we make sure our log file exists, has the right permissions, and gets a fresh datestamp (handy when it dies and you're not sure when it started back up). Then we change directory and run our executable, using the previously defined variables to fill out the command. It executes as the correct user, directing all output to the log file, including standard error (the 2>&1), just in case. After that we close it up with "end script" and move on!

"post-start" is simply what happens after starting and before exiting upstart. You can use it for all sorts of things like tests or initial setup checks if need be, here we're just adding a line to the log file.

Now, as long as you've put this script in your /etc/init/ directory (NOT /etc/init.d/) as 'name.conf', or in our case 'kibana.conf', you can start it by running "initctl start name". Stop should work, as well as restart!
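
Once it's in place, a session looks something like this (all standard initctl sub-commands):
initctl start kibana     # fire it up
initctl status kibana    # check that it's running, and grab the PID
initctl restart kibana   # bounce it
initctl stop kibana      # shut it down cleanly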

2015-06-05

Resizing Online Disks in Linux with LVM and No Reboots

When we set this up, we only had a 16GB primary disk... plenty of space, until you start to write lots of logs and data... and then it fills up quick. So let's talk about how to resize an LVM-based partition on a live server without reboots... Reboots are for Windows! This system is a CentOS 6.x machine running in VMWare 5.x that currently has a 16GiB VMDK-based drive. Let's see what we've got to work with:
$ df -h
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/mapper/myfulldisk--vg-root                     12G   11G     0 100% /
none                                               4.0K     0  4.0K   0% /sys/fs/cgroup
udev                                               7.4G  4.0K  7.4G   1% /dev
tmpfs                                              1.5G  572K  1.5G   1% /run
none                                               5.0M     0  5.0M   0% /run/lock
none                                               7.4G     0  7.4G   0% /run/shm
none                                               100M     0  100M   0% /run/user
/dev/sda1                                          236M   37M  187M  17% /boot
/dev/sdc1                                          246G   44G  190G  19% /data

Hmm... time to get on this, then. Now, luckily we're running in VMWare. A quick edit to our VM to enlarge the VMDK (not covered in this how-to) will fix this... First, which device are we talking about?

$ dmesg|grep sd
[    1.562363] sd 2:0:0:0: Attached scsi generic sg1 type 0
[    1.562384] sd 2:0:0:0: [sda] 33554432 512-byte logical blocks: (17.1 GB/16.0 GiB)
[    1.562425] sd 2:0:0:0: [sda] Write Protect is off
[    1.562426] sd 2:0:0:0: [sda] Mode Sense: 61 00 00 00
[    1.562460] sd 2:0:0:0: [sda] Cache data unavailable
[    1.562461] sd 2:0:0:0: [sda] Assuming drive cache: write through
[    1.563331] sd 2:0:0:0: [sda] Cache data unavailable
[    1.563451] sd 2:0:1:0: Attached scsi generic sg2 type 0
[    1.563452] sd 2:0:1:0: [sdb] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
[    1.563479] sd 2:0:1:0: [sdb] Write Protect is off
[    1.563481] sd 2:0:1:0: [sdb] Mode Sense: 61 00 00 00
[    1.563507] sd 2:0:1:0: [sdb] Cache data unavailable
[    1.563508] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[    1.563755] sd 2:0:2:0: Attached scsi generic sg3 type 0
[    1.563881] sd 2:0:2:0: [sdc] 524288000 512-byte logical blocks: (268 GB/250 GiB)
[    1.563942] sd 2:0:2:0: [sdc] Write Protect is off
[    1.563944] sd 2:0:2:0: [sdc] Mode Sense: 61 00 00 00
[    1.564008] sd 2:0:2:0: [sdc] Cache data unavailable
[    1.564010] sd 2:0:2:0: [sdc] Assuming drive cache: write through
[    1.564282] sd 2:0:2:0: [sdc] Cache data unavailable
[    1.564283] sd 2:0:2:0: [sdc] Assuming drive cache: write through
[    1.564360] sd 2:0:1:0: [sdb] Cache data unavailable
[    1.564362] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[    1.564989] sd 2:0:0:0: [sda] Assuming drive cache: write through
[    1.571010]  sdb: sdb1
[    1.571426] sd 2:0:1:0: [sdb] Cache data unavailable
[    1.571514] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[    1.571626] sd 2:0:1:0: [sdb] Attached SCSI disk
[    1.574181]  sda: sda1 sda2 < sda5 >
[    1.574797] sd 2:0:0:0: [sda] Cache data unavailable
[    1.574888] sd 2:0:0:0: [sda] Assuming drive cache: write through
[    1.575003] sd 2:0:0:0: [sda] Attached SCSI disk
[    1.579250]  sdc: sdc1
[    1.579805] sd 2:0:2:0: [sdc] Cache data unavailable
[    1.579944] sd 2:0:2:0: [sdc] Assuming drive cache: write through
[    1.580141] sd 2:0:2:0: [sdc] Attached SCSI disk
[    6.922330] Adding 4193276k swap on /dev/sdb1.  Priority:-1 extents:1 across:4193276k FS
[    7.137134] EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem
[    7.142419] EXT4-fs (sda1): mounted filesystem without journal. Opts: (null)
[    7.218150] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: (null)
[    7.384566] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).

The first one is the 16GB drive in question. Take the SCSI address on that line (2:0:0:0) and use it in the next step:

$ echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
$ dmesg |tail
[1918441.322362] sd 2:0:0:0: [sda] 209715200 512-byte logical blocks: (107 GB/100 GiB)
[1918441.322596] sd 2:0:0:0: [sda] Cache data unavailable
[1918441.330685] sd 2:0:0:0: [sda] Assuming drive cache: write through
[1918441.489622] sda: detected capacity change from 17179869184 to 107374182400

So, that's good, it sees our increased size. Now, let's enlarge that Volume Group. First, we get info about the volume group.

$ vgdisplay
  --- Volume group ---
  VG Name               myfulldisk-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               15.76 GiB
  PE Size               4.00 MiB
  Total PE              4034
  Alloc PE / Size       4028 / 15.73 GiB
  Free  PE / Size       6 / 24.00 MiB
  VG UUID               dv3URd-EVvz-oTwY-WiDW-RPt1-4rbD-FnPxxM

That '6' is only 24MiB; it doesn't see our new space yet. In order to get it to, we need to make a new partition of the right type, then add it to the volume group. We'll then end up with more free PEs. Here we go:
$ fdisk /dev/sda
Command (m for help): p

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ade37

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758    33552383    16525313    5  Extended
/dev/sda5          501760    33552383    16525312   8e  Linux LVM

Command (m for help): n
Partition type:
   p   primary (1 primary, 1 extended, 2 free)
   l   logical (numbered from 5)
Select (default p): p
Partition number (1-4, default 3): 3
First sector (499712-209715199, default 499712): 33552384
Last sector, +sectors or +size{K,M,G} (33552384-209715199, default 209715199): 
Using default value 209715199

Command (m for help): t
Partition number (1-5): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

At this point you could reboot... but we're not going to. Even though this is our root drive, which makes this a little trickier, it's nothing we can't fix:
$ partprobe /dev/sda

Hopefully, partprobe has found your new partition for you and enlightened the kernel with its wisdom (or at least a fresh load of zeros). Now we need to make it an available volume to use for expanding the disk. This consists of making it a 'physical volume', and then adding that physical volume to the volume group containing the disk we want to expand.

$ pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created
$ pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               myfulldisk-vg
  PV Size               15.76 GiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              4034
  Free PE               6
  Allocated PE          4028
  PV UUID               a3mhvZ-ogyk-ao4y-2JSM-KVfL-i9no-q0LAUk
   
  "/dev/sda3" is a new physical volume of "84.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda3
  VG Name               
  PV Size               84.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               IpyjOU-1GDy-bTLL-U9kE-iSGP-BYg1-a25LIm
   
$ vgextend /dev/myfulldisk-vg /dev/sda3
  Volume group "myfulldisk-vg" successfully extended
$ vgdisplay
  --- Volume group ---
  VG Name               myfulldisk-vg
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               99.76 GiB
  PE Size               4.00 MiB
  Total PE              25538
  Alloc PE / Size       4028 / 15.73 GiB
  Free  PE / Size       21510 / 84.02 GiB
  VG UUID               dv3URd-EVvz-oTwY-WiDW-RPt1-4rbD-FnPxxM
Awesome, we now have 21510 free PEs that we can use... that's, apparently, 84.02GB in this case. Next up, we need to know which portion of the VG to extend. Looking back up at the 'df' output, and knowing our system, it says "root" in there. Doing a quick ls of /dev/myfulldisk-vg/ shows that there's really only a choice between "root" and "swap". So, knowing it's root, we move on with:
$ lvextend -L95G /dev/myfulldisk-vg/root
  Extending logical volume root to 95.00 GiB
  Logical volume root successfully resized
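
As an aside, if you'd rather not pick a size by hand, lvextend can also take extents as a percentage; this variant hands every remaining free PE in the VG to the LV:
$ lvextend -l +100%FREE /dev/myfulldisk-vg/root
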
$ df -h
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/mapper/myfulldisk--vg-root                     12G   11G  114M  99% /
none                                               4.0K     0  4.0K   0% /sys/fs/cgroup
udev                                               7.4G  4.0K  7.4G   1% /dev
tmpfs                                              1.5G  576K  1.5G   1% /run
none                                               5.0M     0  5.0M   0% /run/lock
none                                               7.4G     0  7.4G   0% /run/shm
none                                               100M     0  100M   0% /run/user
/dev/sda1                                          236M   37M  187M  17% /boot
/dev/sdc1                                          246G   44G  190G  19% /data
Okay, the LV might be bigger, but no one else knows that, because the filesystem ON the LV is still the same size. Luckily there's a command for that too!
$ resize2fs /dev/myfulldisk-vg/root
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/myfulldisk-vg/root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 6
The filesystem on /dev/myfulldisk-vg/root is now 24903680 blocks long.

$ df -h
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/mapper/myfulldisk--vg-root                     94G   11G   79G  12% /
none                                               4.0K     0  4.0K   0% /sys/fs/cgroup
udev                                               7.4G  4.0K  7.4G   1% /dev
tmpfs                                              1.5G  576K  1.5G   1% /run
none                                               5.0M     0  5.0M   0% /run/lock
none                                               7.4G     0  7.4G   0% /run/shm
none                                               100M     0  100M   0% /run/user
/dev/sda1                                          236M   37M  187M  17% /boot
/dev/sdc1                                          246G   44G  190G  19% /data


Ha, there we go! 79GB available, enjoy!
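
For future reference, here's the whole dance condensed into one block (device, partition, and VG names from the walkthrough above):
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan   # pick up the new disk size
fdisk /dev/sda                                             # add /dev/sda3, type 8e (Linux LVM)
partprobe /dev/sda                                         # re-read the table, no reboot
pvcreate /dev/sda3                                         # turn the new partition into a PV
vgextend /dev/myfulldisk-vg /dev/sda3                      # grow the volume group
lvextend -L95G /dev/myfulldisk-vg/root                     # grow the logical volume
resize2fs /dev/myfulldisk-vg/root                          # grow the filesystem, online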

2015-02-05

Using Docker Behind a Network Proxy

I was working on spinning up some Docker containers (if you haven't heard of Docker, I highly recommend it) and had some difficulty getting it to pull images from behind a proxy. You can manually export the http_proxy and https_proxy settings prior to execution, or fire up the Docker service from a script that exports those first, but both of those feel hack-like to me, so I found a better way.

On CentOS (in my case 6.5), simply edit the file docker-network in /etc/sysconfig and add the lines:

export HTTP_PROXY=http://proxy.mynetwork.net:80/
export HTTPS_PROXY=http://proxy.mynetwork.net:80/

Then restart the docker service with service docker restart and you should be able to pull down images!
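
For a quick sanity check (the image name here is just an example):
$ service docker restart
$ docker pull centos   # any image will do; if it downloads, the proxy settings took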

To go along with that, you should be able to include a username/password for authenticated proxies as you normally would: "http://<user>:<password>@<host>:<port>".
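
e.g., with hypothetical credentials:
export HTTP_PROXY=http://alice:s3cret@proxy.mynetwork.net:80/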

2015-01-02

Log Rotation to NFS Mount with Logrotate.d

Logrotate is one of the most flexible log rotation programs for Linux servers. I was recently required to rotate a lot of large files daily from the server to a central NFS mount, and wanted to share just how easy this is to accomplish.

As background, this app is a Node.js application that uses Bunyan as its logging output plugin. Luckily, it accepts a USR2 signal to tell it "hey, I just rotated your log file, make a new one and keep on truckin'". See, the first issue is that moving a file that a process is writing to simply renames the directory entry pointing at the inode, so your process will continue writing to the same file under the new filename as if nothing happened. Telling it to make a new file is key to rotation.

The following code block is the entirety of my logrotate.d script. Placed in the /etc/logrotate.d/ folder, it will be read in and processed every night. I've used the date command to create sub dirs per month to keep things organized in the long term, and the hostname to put them each in their own directories per hostname.

/var/log/myApp/*log {
    daily
    dateext
    rotate 7
    missingok
    compress
    sharedscripts
    postrotate
        [ ! -f /var/run/myApp.pid ] || kill -USR2 `cat /var/run/myApp.pid`
    endscript
    lastaction
        mkdir -p /logsMount/myApp/`hostname`/`date +%Y\/%m`/
        mv /var/log/myApp/*gz /logsMount/myApp/`hostname`/`date +%Y\/%m`/
    endscript
}

As this reads, top down: for all files ending in "log" in the /var/log/myApp directory, on a daily basis, throw a date extension on them (-20150102, for instance; leaving out 'dateext' will cause it to simply number them .1, .2, etc.) and keep the last 7 rotations (useful if the NFS mount goes down). If the files are missing, don't throw an error. Compress the files (with gzip by default). As the post-rotate command, send a signal to the process ID noted in the /var/run/myApp.pid file to tell that process to reopen its logs. Then, when that's done, make sure there's a directory on the target NFS mount and move all .gz files to it.

It's important to note that the 'postrotate' commands run AFTER file rotation but PRIOR to compression. The 'lastaction' section runs as the very last thing in this script, i.e. after everything else is done, including compression.
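
If you don't want to wait for the nightly cron run to find out whether you got the config right, you can exercise it by hand with logrotate's standard flags (assuming you saved the file as /etc/logrotate.d/myApp):
$ logrotate -d /etc/logrotate.d/myApp   # dry run: show what would happen without touching files
$ logrotate -f /etc/logrotate.d/myApp   # force a rotation now, even though it isn't due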

This will move things like: /var/log/myApp/myLogFile.log -> /logsMount/myApp/myHost/2015/01/myLogFile.log-20150102.gz

Hope this helps someone out!