qmail: 7 day mail queues? too long.
i’ve been taking a proactive stance on checking the mail queue in my office, since if it gets cluttered with newsletters or other unnecessary stuff (including the occasional phishing run sent through a vulnerable contact form), it ends up slowing down legitimate email significantly.
by default, the qmail queue is 7 days long (604800 seconds). to check that, you can run the following:
# qmail-showctl | grep queue
queuelifetime: (Default.) Message lifetime in the queue is 604800 seconds.
(side point: there’s a lot of cool stuff you can see there related to the qmail setup if you don’t only grep for the queue.)
in my opinion, 7 days is just way too long. sometimes i check the queue and find an email that was sent to a wrong address, and it just sits there while the mailserver repeatedly attempts delivery to the nonexistent recipient. (for example, if you meant to email someguy@aol.com but accidentally addressed it to the domain aol.org, you’ll be waiting a long time for a bounceback, which is frustrating because you thought you sent it to the right guy to begin with.)
everything on linux can be tweaked, and it’s relatively easy to do at times. in this particular case, what is needed is a newly created file, /var/qmail/control/queuelifetime, which contains a single line: the number of seconds that you want the queue to last. in my case, i made it 172800 seconds (2 full days; a single day is 86400), so these emails get returned to sender informing them that they should get the right address or try later.
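creating the file is a one-liner. a minimal sketch, assuming you’re root; qmail-send only reads its control files when it starts, so restart qmail afterwards with whatever init script or qmailctl wrapper your setup uses (the path below is just the common one):
# echo 172800 > /var/qmail/control/queuelifetime
# /etc/init.d/qmail restart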
once this file is in place (and qmail has been restarted so it rereads its control files), you can verify that the new queue lifetime is in effect by running the following:
# /var/qmail/bin/qmail-showctl | grep queue
queuelifetime: Message lifetime in the queue is 172800 seconds.
note how it doesn’t say “Default” anymore like the previous execution of the same command did.
to force those old emails to be sent? just run qmHandle -a and you’ll notice that the queue (qmHandle -l) has gotten a lot shorter.
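while i’m at it, a couple of other qmHandle switches i find handy (from memory, so run qmHandle with no arguments for the authoritative list on your version): -s prints queue statistics, and -d followed by a message number deletes that message from the queue, e.g. (12345 being whatever id qmHandle -l showed you):
# qmHandle -s
# qmHandle -d12345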
if you don’t have qmHandle, you can grab it from its sourceforge project page; it’s not part of the regular qmail distribution. more information on qmHandle can be found in an earlier blog entry.
robots.txt and spidering.
when you have content that is not meant for public consumption, it’s better to be safe than sorry: prevent the search engines from crawling (or spidering) the pages and learning your link structure. for example, in a development environment, you hardly want a half-finished site showing up in search results as if it were the live, public one.
enter robots.txt. this file is extremely important; search engines look for it and use it to decide whether the site (or parts of it) may be entered into their index or should be kept private.
the basic robots.txt file works like this: you stick the file in the root of your website (e.g. the public_html or httpdocs folder); it won’t work if it’s located anywhere else or in a subdirectory of the site.
the crux of robots.txt is the User-agent and Disallow directives. if you don’t want any search engine bots to spider any files on your site, the basic file looks like this:
User-agent: *
Disallow: /
however, if you don’t want the search engines to crawl a specific folder, e.g. www.yoursite.com/private, you would create the file like so:
User-agent: *
Disallow: /private/
if you don’t want google to spider a specific folder called /newsletters/, then you would use the following:
User-agent: googlebot
Disallow: /newsletters/
there are hundreds of bots that you’d need to consider, but the main ones are probably google (googlebot), yahoo (yahoo-slurp), and msn (msnbot).
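for instance, the three bots above can all share one set of rules; the robots.txt standard allows a single record to open with several User-agent lines (this is just a sketch, not taken from a real site):
User-agent: googlebot
User-agent: yahoo-slurp
User-agent: msnbot
Disallow: /newsletters/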
you can also give different user-agents different rules in the same robots.txt file, like this:
User-agent: *
Disallow: /
User-agent: googlebot
Disallow: /cgi-bin/
Disallow: /private/
there’s a great reference on user agents on wikipedia. another great resource is an online robots.txt file generator (there are several free ones).
where keeping pages out of the search engines is concerned, a robots.txt file makes a huge difference. just remember that it’s only a request: well-behaved bots honor it, but it doesn’t actually block anyone from fetching the pages.
showing and understanding mysql processes in detail.
i’ve learned a little trick for seeing exactly what your mysql server is doing and for pinpointing problems in the event of a heavy load. this is useful in deciding how you might want to proceed in terms of mysql optimization.
# mysql -u [adminuser] -p
mysql> show processlist;
granted, on a server with heavy volume, you might see hundreds of rows and it will scroll off the screen. here are the key elements to the processlist table: Id, User, Host, db, Command, Time, State, Info, where:
Id is the connection identifier
User is the mysql user who issued the statement
Host is the hostname of the client issuing the statement. this will be localhost in almost all cases unless you are executing commands on a remote server.
db is the database being used for the particular mysql statement or query.
Command can be one of many different commands issued in the particular query. the most common occurrence on a webserver is “Sleep,” which means that the particular database connection is waiting for new directions or a new statement.
Time is the number of seconds the thread has been in its current state (for a running query, roughly how long it has been executing)
State is an action, event, or state of the specific mysql command and can be one of hundreds of different values.
Info will show the actual statement being run in that instance
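if the flat listing is too much to eyeball, newer mysql versions (5.1 and up) expose these same columns through information_schema, so you can filter them like any other table; a hedged sketch that shows only statements running longer than 30 seconds:
mysql> select id, user, db, time, info from information_schema.processlist where time > 30;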
another useful command is:
mysql> show full processlist;
which is roughly equivalent to running the following from the shell (add --verbose if you want the full, untruncated Info column there as well):
mysqladmin -u [adminuser] -p processlist
this shows my specific query as:
| 4342233 | adminusername | localhost | NULL | Query | 0 | NULL | show full processlist |
or you can display each field on its own line (vertical format), like so, simply by ending the query with \G instead of a semicolon:
mysql> show full processlist\G
this format is preferable when the output scrolls off the screen and you want to see at a glance which field each value belongs to.
******** 55. row ********
Id: 4342233
User: adminusername
Host: localhost
db: NULL
Command: Query
Time: 0
State: NULL
Info: show full processlist
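and if one of those rows turns out to be a runaway query, you can terminate it by its Id right from the same prompt (note that this kills the whole connection, not just the statement; the Id below is simply the example from above):
mysql> kill 4342233;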
you can also check how many mysql connections (threads) each user has open by running the following command:
mysqladmin -u [adminuser] -p pr | awk -F'|' '{print $3}' | sort -nk1 | awk -F_ '{print $1}' | uniq -c | sort
to see which database has the most active connections, run the following (same idea, but pulling out the db column, field 5, instead of the user):
mysqladmin -u [adminuser] -p pr | awk -F'|' '{print $5}' | sort | uniq -c | sort -n
oh, and since it’s useful… here’s a recommended /etc/my.cnf:
[mysqld]
datadir=/var/lib/mysql
skip-locking
skip-innodb
safe-show-database
query_cache_limit=1M
query_cache_size=64M
query_cache_type=1
max_user_connections=100
max_connections=350
interactive_timeout=10
wait_timeout=20
connect_timeout=20
thread_cache_size=128
key_buffer=64M
join_buffer=1M
max_connect_errors=20
max_allowed_packet=16M
table_cache=1024
record_buffer=1M
sort_buffer_size=2M
read_buffer_size=2M
read_rnd_buffer_size=2M
thread_concurrency=4
myisam_sort_buffer_size=64M
server-id=1
long_query_time=2
slave_net_timeout = 50
delayed_insert_timeout = 50
[mysql.server]
user=mysql
basedir=/var/lib
[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/lib/mysql/mysql.pid
open_files_limit=8192
[mysqldump]
quick
max_allowed_packet=16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer=64M
sort_buffer=64M
read_buffer=16M
write_buffer=16M
[myisamchk]
key_buffer=64M
sort_buffer=64M
read_buffer=16M
write_buffer=16M
[mysqlhotcopy]
interactive-timeout
another round of fine tuning would include the following, and it works well on machines with plesk:
skip-locking
key_buffer = 128M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 64M
thread_concurrency = 8
the above will help you optimize your mysql database as well, but the configuration isn’t for everyone.
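whichever variant you go with, restart mysql after editing /etc/my.cnf and confirm that the new values actually took effect; a quick sketch (the service name may be mysqld or mysql depending on your distro):
# service mysqld restart
mysql> show variables like 'query_cache%';
mysql> show variables like 'max_connections';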
resolving canonical issues with plesk.
in the world of SEO (search engine optimization), there is an unwritten rule (well, it will be written sooner or later) that you can’t have duplicate content from the same site in google’s index. this means that http://www.domain.com and http://domain.com cannot both be indexed by the search engines; you must choose one or the other, or you may face a penalty.
there’s an easy solution for this using vhosts in plesk. the only not-so-user-friendly part is that you have to do it for every domain you are worried about, and with 100+ domains, you’ll be creating 100+ vhost files (or 200+ if you have SSL support as well), though a little shell loop can take the sting out of that (see the sketch at the end of this entry).
in any event, this is how it’s done.
navigate on your plesk server to your domain’s conf directory. on some machines, it’s
# cd /var/www/vhosts/domain.com/conf
i prefer going through this shortcut:
# cd /home/httpd/vhosts/domain.com/conf
regardless, both paths should lead to the same place; on most setups one is simply a symbolic link to the other.
create the file vhost.conf
# vi vhost.conf
add the following to the vhost.conf file
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [QSA,R=301,L]
for domains with SSL support, you will need to create a file called vhost_ssl.conf as well.
# vi vhost_ssl.conf
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}$1 [QSA,R=301,L]
that’s it! now, run this plesk command to process your update.
# /usr/local/psa/admin/bin/websrvmng -av
load your page in your preferred web browser as http://domain.com. it will automatically redirect to http://www.domain.com and will be reflected in search engines with the www prefix only.
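one last note: if you really are staring down 100+ domains, a small shell loop can drop the same vhost.conf into every conf directory for you. this is only a hypothetical sketch; /root/vhost.conf.tpl is a template file you would prepare yourself with the rewrite rules above, and the vhosts path may differ on your server:
# for conf in /var/www/vhosts/*/conf; do cp /root/vhost.conf.tpl "$conf/vhost.conf"; done
# /usr/local/psa/admin/bin/websrvmng -av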
finding files with specific permissions.
in may, we did a plesk migration from an older server to a newer one. it was relatively smooth, but for some reason, a lot of image permissions were not kept intact (they came through as read+write for the owner only, with no access for the group or anyone else).
i am beginning to learn how useful the find command is. the snippet below enabled me to find all of the jpg images on my server that were chmodded 600, and the -exec version a couple of lines down makes them appropriately readable again.
# find /home/httpd/vhosts -name '*.jpg' -perm 600
but wait, you can execute commands on these results too!
# find /home/httpd/vhosts -name '*.jpg' -perm 600 -exec chmod 644 {} \;
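the same trick extends to other permission mishaps; for example, a hedged sketch that fixes directories which lost their execute bits during the migration (adjust the path and modes to your situation):
# find /home/httpd/vhosts -type d -perm 700 -exec chmod 755 {} \;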
good stuff!
iptables port redirection (smtp servers).
this is a quick tutorial on iptables port redirection.
we have a few clients whose outgoing port 25 is blocked by their ISP due to vulnerabilities and exploits that cause this port to send out considerable amounts of spam.
this is never an obstacle for someone who hosts on a linux server with iptables, because you can simply redirect traffic arriving on another port (e.g. 26, as in the example below) to the local port 25:
# iptables -t nat -I PREROUTING -p tcp --dport 26 -j REDIRECT --to-ports 25
easy, isn’t it?
of course, if 26 is blocked, you can substitute another open port — but 25 must remain the same, as 25 is the standard smtp (outgoing email) port.
for someone who needs to utilize these settings, s/he would be required to go to outlook or whatever email application is being used and change the smtp port from standard 25 to 26 (in outlook, this can be done by going to tools > email accounts > view or change existing email accounts > select the questionable email account > change > more settings > advanced tab > change 25 to 26).
then, try to send out the email. it should no longer be sitting in the user’s outbox.
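if you want a quick sanity check before touching any mail clients, you can connect to the new port yourself (the hostname here is just a placeholder for your own mail server):
$ telnet mail.yourdomain.com 26
if the redirect is working, you’ll be greeted by the same smtp banner (the line beginning with 220) that the server normally shows on port 25.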
to verify that the change was made on the linux side, simply run:
# iptables -t nat --list
the output should show something like this:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT   tcp  --  anywhere             anywhere            tcp dpt:26 redir ports 25
confirmed — it is working!
make sure to save your changes so that the rules survive a reboot. note that iptables-save on its own just prints the current rules to stdout, so on red hat-style systems redirect it into the rules file that gets loaded at boot:
# /sbin/iptables-save > /etc/sysconfig/iptables
(or, equivalently, run service iptables save.)
bcc using outlook 2003.
this is a little off-topic here, given that the target audience of my blog is intended to be the linux-interested crowd, but this is something i really didn’t know, so i figured this would be the perfect place to note it.
outlook generally only displays the To and CC fields. what if you need to send emails using BCC (blind carbon copy)? well, then you can simply press Ctrl+Shift+B to get the screen that displays the BCC field. then, type your addresses there.
that was easy.
backing up and restoring databases using phpMyAdmin.
i’m very much a command-line-interface kind of girl, but not everyone has shell access. hopefully, they do have phpMyAdmin, one of the most essential tools for managing your database.
the tutorial below gives you an idea of how to perform a database backup and restore using phpMyAdmin.
database backup.
1. open your phpMyAdmin webpage. hopefully, you have a link to it somewhere in your website’s control panel.
2. click on the database name on the left-hand navigational bar (generally in black text, with a number in parentheses indicating the number of tables in the database)
3. click on Export on the top of the screen.
4. check the option next to Save as File
5. hit the Go button.
6. download the *.sql file.
you can open this file in Notepad or any text editor and see the database schema with all of its contents intact.
database restore.
1. open your phpMyAdmin page.
2. click the database name on the left-hand navigational bar.
3. assuming you’re replacing the current database, you first need to drop all of the tables, so make sure to check all of the tables (the checkbox is immediately to the left of the Table column)
4. in the With selected: drop down box, choose Drop
5. now that you’ve emptied out your database, click on SQL on the top bar.
6. click the Browse button and locate the *.sql file that you created in your backup.
7. click Go.
you’ve now restored your database.
if you wanted to create an entirely new database with the same schema, simply open the *.sql file you created in your backup, and on the 7th line or so, you will see Database: `db_name`. just replace this with the new database name you wish to create.
mysql dump and restore.
i’ve seen a tremendous number of different syntaxes for even the simplest database dumps and restores. why worry about the details? this is how you perform a database backup:
$ mysqldump -u username -p databasename > filename.sql
type your password at the prompt, wait a few seconds, and you now have a perfectly good .sql file that represents your database schema and all the data within.
to restore your database, use the following:
$ mysql -u username -p databasename < filename.sql
enter your password once again at the prompt. this assumes that you have a file named filename.sql in the directory from which you are running the restore; you can reference /path/to/filename.sql as well.
that’s all there is to it. if you need to change any data before the restore, simply edit the filename.sql with your favorite editor (but for your own sake, make sure it’s not that large!)
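and if the dump is too big to comfortably open in an editor anyway, you can keep it compressed on both ends; a small sketch of the same backup and restore piped through gzip:
$ mysqldump -u username -p databasename | gzip > filename.sql.gz
$ gunzip < filename.sql.gz | mysql -u username -p databasename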
simple backup scripts.
i think there’s a golden rule in system administration: never underestimate the importance of backups. i have a few small scripts, usable with rsync or just locally, that may be useful, so i’m going to share them here. maybe later on, when i master rsync, i’ll post a tutorial about it here as well.
a mysql backup may work for you as follows:
mysqldump -A --password=xxxxxx | gzip > /root/mysql_backup/`date +%m%d%y_%T`.sql.gz
on the other hand, if you want to back up all of the qmail mail for a particular domain, you can run a job like this:
tar zcvf /root/domain/mailbackup-`date +%m%d%y_%T`.tar.gz /var/qmail/mailnames/domain.com/
i have a cron job that runs nightly to execute these tasks, and i find them rather useful, especially since the timestamp in each filename gives me an idea of what happened and when.
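for reference, the crontab entries for those two jobs look something like this (the times are arbitrary, and note that percent signs have to be escaped as \% inside a crontab, since cron otherwise treats % as a newline):
0 2 * * * mysqldump -A --password=xxxxxx | gzip > /root/mysql_backup/`date +\%m\%d\%y_\%T`.sql.gz
15 2 * * * tar zcf /root/domain/mailbackup-`date +\%m\%d\%y_\%T`.tar.gz /var/qmail/mailnames/domain.com/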