Showing posts with label Debian.

2016-12-21

Keeping a process running with Flock and Cron

We've got a few processes here that aren't system services, but need to be running in the background and should be restarted if they die. The same method also works for a cron job that often runs past its normal re-execution time (say, an every-5-minutes cron that sometimes takes 7 minutes): it prevents multiple executions from running simultaneously.

First off, in your crontab, you can add a line like this:

* * * * * flock -x -n /tmp/awesomenessRunning.lock -c "/usr/local/bin/myAwesomeScript.sh" >/dev/null 2>&1

What happens here is fairly straight forward:
  • Every minute, cron invokes flock, which executes your script, in this case "/usr/local/bin/myAwesomeScript.sh".
  • flock takes an exclusive lock on the lock file, here named "/tmp/awesomenessRunning.lock". When the script finishes, it releases the lock.
  • The next time the cron fires, flock will again attempt to get an exclusive lock on that file... but it can't while the script is still running, so (thanks to the -n flag) it gives up immediately and tries again on the next run.
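The non-blocking behavior is easy to see for yourself. Here's a minimal sketch (the lock file path and sleep times are arbitrary):

```shell
# Create a scratch lock file and hold the lock in the background for 2 seconds.
LOCK=$(mktemp)
flock -x "$LOCK" sleep 2 &
sleep 0.5   # give the background flock time to acquire the lock

# A second, non-blocking (-n) attempt fails immediately instead of queueing up,
# which is exactly what keeps overlapping cron runs from piling up.
if flock -x -n "$LOCK" true; then
  result="got lock"
else
  result="lock busy"
fi
echo "$result"

wait          # let the background flock finish
rm -f "$LOCK"
```

Drop the -n and the second flock would instead block and wait its turn, which is sometimes what you want, but not for a once-a-minute cron.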

Now, generally, if I'm doing this as a system-level item, I'll put the line in a file named for the job (or what the job does) and drop it in /etc/cron.d/. All the files there get merged into the system cron, which helps other admins (or your later self) find and disable it later. If you do that, remember to stick the user to execute the cron as between the *'s and the flock!
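For example, a drop-in file might look like this (the filename and the root user here are illustrative; note the extra user field that a personal crontab doesn't have):

```
# /etc/cron.d/keep-awesome-running  (hypothetical filename)
# m h dom mon dow user command
* * * * * root flock -x -n /tmp/awesomenessRunning.lock -c "/usr/local/bin/myAwesomeScript.sh" >/dev/null 2>&1
```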

2016-05-03

Add Proxy to apt-get on Ubuntu

Many times for security and network topology reasons, I've had to deal with hosts being behind proxies. This is generally fairly easy to work with. Before you make a call out, you can run:

$ export http_proxy="http://username:password@proxy:port/"
$ export https_proxy="http://username:password@proxy:port/"

Then run your normal commands and MOST things will honor the environment variables. Heck, you can even put them in your .bashrc or whatever else you automatically load into your environment on login.
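The reason the export trick works is that exported variables are inherited by child processes, so anything you launch from that shell sees the proxy settings. A quick sketch with a dummy proxy value:

```shell
# Export a (fake) proxy URL in the parent shell...
export http_proxy="http://username:password@proxy.example.com:3128/"

# ...and any child process inherits it, no extra configuration needed.
seen_by_child=$(sh -c 'echo "$http_proxy"')
echo "$seen_by_child"
```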

But that doesn't work for other people... or automated processes. So every cron you have has to go through that process. Many tools have their own settings files to reflect the proxy, and apt is no different. Apt is, however, easy to configure to use a proxy:

$ sudo vi /etc/apt/apt.conf

It will prompt you for your sudo password (if you're not already root). After that, you'll be editing the apt.conf file, and if yours is anything like mine, it's empty. If it isn't empty, make sure you're not duplicating the info we're putting in there. Then enter the following config lines, substituting your own info. If your proxy doesn't have a username/password, you can skip the 'username:password@' section.

Acquire::http::proxy "http://username:password@proxy.example.com:port/";
Acquire::https::proxy "https://username:password@proxy.example.com:port/";

Save it, and make sure to run an 'apt-get update' to get the latest package lists and such. You should notice that it rolls right through them now that your system is able to talk out to the internet.

2014-01-03

WordPress asks for FTP Credentials

Being a modern Systems Administrator, I'm sometimes asked to manage things that throw me for a loop. WordPress is one of those things. It's both really simple and really complex, and sometimes not direct with its response to problems. I've noticed with a fresh WordPress install that when my users wanted to upload a new theme, they were presented with a normal 'upload' form: click the button, browse to your file, hit okay, then hit upload. All well and good. But then it prompted them for FTP or SFTP credentials. No error message, no reason why FTP would be needed in light of the previous, seemingly successful upload. We don't run FTP here in relation to WordPress, nor would I want to add that complexity to the setup or allow another potential access point for an attacker.

After digging a bit, the cause came down to my being a bit too secure and clamping down my web server user's permissions on the WordPress files too far. I found the following issues during troubleshooting:

  • The default install from a tarball doesn't create the wp-content/uploads directory. You've got to make it yourself.
  • The uploads dir must be writable (for obvious reasons) by apache or www-data or whoever your web user is.
  • The target of your upload has to go somewhere... theme uploads need the web user to be able to write to wp-content/themes/, upgrades to wp-content/upgrades/, etc.
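Putting those together, the fix is just a matter of creating the directories and making them writable by the web user. Here's a sketch using a scratch directory; on a real server WP_ROOT would be your actual docroot (e.g. /var/www/wordpress) and you'd chown to your web user (www-data on Debian/Ubuntu, apache on Red Hat):

```shell
WP_ROOT=$(mktemp -d)   # stand-in for the real WordPress docroot

# The tarball doesn't ship wp-content/uploads, so create it, plus the other
# directories the web user needs to write into.
mkdir -p "$WP_ROOT/wp-content/uploads" "$WP_ROOT/wp-content/themes"

# Make them writable. On a real host, run as root and give them to the web user:
#   chown -R www-data:www-data "$WP_ROOT/wp-content"
chmod -R u+rwX,g+rwX "$WP_ROOT/wp-content"
```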

Fixing these issues made the FTP prompt stop showing up, and we were good to go.

2013-12-10

Too many authentication failures for

Lately I've been getting this lovely error when trying to ssh to certain hosts (not all, of course):

# ssh ssh.example.com
Received disconnect from 192.168.1.205: 2: Too many authentication failures for 

My first thought is "But you didn't even ASK me for a password!" My second thought is "And you're supposed to be using ssh keys anyway!"

So, I decide I need to specify a specific key to use on the command line with the -i option.

# ssh ssh.example.com -i myAwesomeKey
Received disconnect from 192.168.1.205: 2: Too many authentication failures for 

Well, that didn't help. Adding a -v shows that it tried a lot of keys... including the one I asked it to. That, apparently, is the crux of the issue. ssh reads the config file (mine is fairly extensive, as I deal with a few hundred hosts, most of which share a subset of keys, but not all of them), and it doesn't necessarily try the key I specified FIRST. It offers every key it knows about, even ones defined under other Host entries, and the server disconnects after too many failed attempts (sshd's MaxAuthTries, which defaults to 6). So if you have more than five or so keys defined, the one you want may never get tried. Yes, even if you have them defined per host. For instance, my config file goes something like this:

Host src.example.com
 User frank.user
 Compression yes
 CompressionLevel 9
 IdentityFile /home/username/.ssh/internal

Host puppet.example.com
 User john.doe
 Compression yes
 CompressionLevel 9
 IdentityFile /home/username/.ssh/jdoe


Apparently, this means ssh will try both of these keys for any host that isn't those two. If the one you want is the third one you define, "Host ssh.example.com" in our case, ssh will offer it THIRD, even though the Host line matches. The fix is simple: tack "IdentitiesOnly yes" in there. It tells ssh to offer ONLY the IdentityFile entries defined for that host.

Host src.example.com
 User frank.user
 Compression yes
 CompressionLevel 9
 IdentitiesOnly yes
 IdentityFile /home/username/.ssh/internal

The side effect of leaving IdentitiesOnly off is that you don't have to define an IdentityFile line for EVERY HOST: ssh will offer all the keys it knows about to all of the Host entries in the config, and indeed to every host you ssh to, listed or not. This is also why it didn't always fail; there was a good chance the first one or two keys in the list worked. It was only when the first 5 it tried didn't work that it failed.
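If you want that behavior everywhere rather than per host, a catch-all block at the bottom of ~/.ssh/config does the trick (ssh uses the first value it finds for each option, so the more specific Host entries above it still win):

```
# At the end of ~/.ssh/config: applies to any host not already matched above
Host *
 IdentitiesOnly yes
```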