Archive for April, 2011

git log – good summary

Tuesday, April 19th, 2011

I have recently started using git and have struggled to find a good short summary of recent changes. This command does the trick:

git log --name-status HEAD^..HEAD
commit 1x3ga6g08f1c4324141f966d9766i86c6a790921
Author: Josh Miller
Date:   Tue Apr 19 07:19:15 2011 -0700

    removing this file to test post-receive delete operation, take 1

D       var/www/application/css/test07.css

Using --name-status together with the range HEAD^..HEAD (the parent of HEAD up to HEAD) gives all of the changes made in the most recent commit received.
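If you want to see it in action without touching a real repository, here is a sketch in a throwaway repo (the file name, identity, and commit messages are made up for illustration; assumes git is installed):

```shell
#!/bin/sh
# Demo of the command above in a disposable repository
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
echo "body { }" > test07.css
git add test07.css
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "adding a test stylesheet"

# Shows only the files touched by the most recent commit
git log --name-status HEAD^..HEAD
```

Note that HEAD^ has nothing to resolve to on a repository's very first commit, so the range fails there; `git log --name-status -1` gives the same summary and works in that case too.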

PHP Fatal error: Uncaught exception ‘RequestCore_Exception’ with message ‘The stream size for the streaming upload cannot be determined.’

Thursday, April 14th, 2011

While attempting to upload some large files (>2GB) to S3 yesterday, I ran across this error (on both sdk 1.2.3 and 1.3.2):

php s3-upload.php
Uploading file:  myfile.bak
PHP Fatal error:  Uncaught exception 'RequestCore_Exception' with message 'The stream size for the streaming upload cannot be determined.' in /home/josh/aws/sdk-1.2.3/lib/requestcore/requestcore.class.php:771
Stack trace:
#0 /home/josh/aws/sdk-1.2.3/services/s3.class.php(722): RequestCore->prep_request()
#1 /home/josh/aws/sdk-1.2.3/services/s3.class.php(1342): AmazonS3->authenticate('db-backup-resto...', Array)
#2 /home/josh/aws/bin/s3-upload.php(73): AmazonS3->create_object('db-backup-resto...', 'myfile.bak', Array)
#3 {main}
  thrown in /home/josh/aws/sdk-1.2.3/lib/requestcore/requestcore.class.php on line 771

I started to troubleshoot the error, but since I had already successfully uploaded a smaller file, I suspected the problem was the size of the file itself, so I searched the AWS forums first. It turns out there is a known problem uploading files greater than 2GB from a 32-bit machine, because fstat() and filesize() return the file size as a 32-bit signed integer.

The fix is to perform the upload from a 64-bit machine.
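The 2GB boundary falls exactly where the integer width predicts; a quick bit of shell arithmetic shows the numbers:

```shell
# Largest value a 32-bit signed integer can hold
echo $(( (1 << 31) - 1 ))            # 2147483647

# A 2GB file is one byte past that cap, so fstat()/filesize()
# on a 32-bit build cannot represent its size
echo $(( 2 * 1024 * 1024 * 1024 ))   # 2147483648
```

Running `php -r 'echo PHP_INT_MAX;'` on the machine is a quick way to confirm whether the PHP build is 32-bit (2147483647) or 64-bit (9223372036854775807).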

Configuring Apache to Proxy Requests to Tomcat

Wednesday, April 13th, 2011

A very common task when administering apache and/or tomcat is to set up apache to proxy requests to tomcat.  The primary driver for this configuration is to have apache handle all of the front-end requests and some caching, with tomcat serving up the dynamic content (proxied through apache). This also lets apache handle much of the security: it gets far more exposure to the internet at large than tomcat does, and has a great track record in that regard.  You can also stand up multiple tomcat instances behind two or more apache instances, allowing you to scale more effectively where it is needed.

Before we get started with configuration, first install apache and tomcat. This is typically done using the package manager of your distribution. Using yum, it would be as follows:

yum install httpd tomcat6

Next, set both daemons to persist (start on boot) using chkconfig, or your distribution's method of choice, and start each daemon:

chkconfig tomcat6 on
chkconfig httpd on
/etc/init.d/tomcat6 start
/etc/init.d/httpd start

Next, configure apache to load the appropriate modules needed to proxy requests to tomcat by modifying /etc/httpd/conf/httpd.conf (or appropriate configuration file for your distribution):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

Next, configure the virtual host that you’ll be using to proxy requests to tomcat (be sure to replace the port and IP with entries suitable to your environment):

<Proxy balancer://localhost>
  BalancerMember ajp://localhost:8009 min=10 max=100 loadfactor=1
</Proxy>
ProxyPass / balancer://localhost/

Once that configuration is complete, restart or gracefully reload apache for the change to take effect:

apachectl graceful

Note that this configuration relies upon tomcat and apache being on the same server. You can easily configure apache to proxy requests to tomcat on another server or VIP by replacing the localhost occurrences above with the VIP, IP, or hostname of the tomcat instance(s).
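For example, assuming a hypothetical Tomcat host named tomcat01.example.com listening on the default AJP port 8009, the balancer member line would become:

```apache
BalancerMember ajp://tomcat01.example.com:8009 min=10 max=100 loadfactor=1
```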

HTTP Caching in yum…

Friday, April 1st, 2011

yum is great to work with when it works and a pain in the ass when it does not. I recently had a problem where I would get the dreaded 'Metadata file does not match checksum' error while trying to update a CentOS 5.3 system I was working on.

filelists.xml.gz: [Errno -1] Metadata file does not match checksum

The problem here is that either repomd.xml (which lists each metadata file along with the SHA1 checksum yum should calculate for it) or the metadata file itself is not being updated properly, due to some level of HTTP caching between the yum client and the repository server.
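To make the comparison concrete, here is an illustrative sketch of the same kind of check (the expected value is just the well-known SHA1 of empty input, standing in for what repomd.xml would actually record):

```shell
#!/bin/sh
# Recompute a SHA1 and compare it against the recorded value,
# as yum does for each metadata file listed in repomd.xml
expected="da39a3ee5e6b4b0d3255bfef95601890afd80709"   # SHA1 of empty input
actual=$(printf '' | sha1sum | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "Metadata file does not match checksum"
fi
```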

Usually you can resolve this error through a metadata clean:

yum clean metadata

…or at the very least, clean all:

yum clean all

In this particular case, nothing seemed to work. I even removed the cache manually:

rm -rf /var/cache/yum/$OFFENDINGREPOSITORY

I also tried to copy the cache from another host which did not have the same problem:

rsync -av otherhost:/var/cache/yum/$OFFENDINGREPOSITORY/ /var/cache/yum/$OFFENDINGREPOSITORY/

Something else was causing the problem — some caching server between me and the repository.

To resolve this issue, I relied on a tip from another site: disable HTTP caching at the yum.conf level:

http_caching=none

This immediately resolved the problem and yum worked again.

I do not recommend leaving this flag set in the config, as caching is highly useful and makes things work faster. At minimum, it would be best to keep caching packages with the following option:

http_caching=packages

After going through this experience, I found another option that might have worked better, ‘yum clean expire-cache’. Next time this happens, I’d like to try this option out to see if it solves the problem.