linux: setting up traffic to go through proxy server

1. for browser: should be able to edit them via the preferences section.

2. command line:

export http_proxy="http://username:password@proxy:port/"
export https_proxy="http://username:password@proxy:port/"
export ftp_proxy="ftp://username:password@proxy:port/"

we should be able to automate this by adding these exports to ~/.bashrc
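For example (placeholder credentials, host and port; written to a demo file here instead of the real ~/.bashrc):

```shell
# demo target file; on a real machine this would be ~/.bashrc
RC=./bashrc.demo
cat >> "$RC" <<'EOF'
export http_proxy="http://username:password@proxy:port/"
export https_proxy="http://username:password@proxy:port/"
export ftp_proxy="ftp://username:password@proxy:port/"
EOF
grep -c '^export' "$RC"    # 3 lines appended
```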

3. apt:
In /etc/apt/apt.conf.d/80oproxy, add

Acquire::http::proxy "http://username:password@proxy:port/";
Acquire::ftp::proxy "ftp://username:password@proxy:port/";
Acquire::https::proxy "https://username:password@proxy:port/";

Removing sensitive data from git

suppose one accidentally committed a file with sensitive data ages ago. Others could still retrieve the password easily, since git keeps a history of all changes. So the idea is to remove the file, together with its change history, from git altogether, then recommit the affected file (without the sensitive data this time).

do a git pull and git fetch --tags on the original repo, then copy the repo to a tmp repo and apply the commands to the tmp repo like so:

git pull
git fetch --tags
git filter-branch --index-filter 'git rm --cached --ignore-unmatch wp-config.*' --tag-name-filter 'cat' HEAD --all
(if you have uncommitted changes, you will get a "Cannot rewrite branches with a dirty working directory." error. Do a git commit first to fix the error.)

After that, copy the affected files (wp-config.* in this case) from the old repo back to the tmp repo and force push from the tmp repo.

git push origin master --force

do the same for any other branches affected.
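To see the whole flow end to end, here is a self-contained sketch in a throwaway repo (the repo name, filenames and commit messages are made up for the demo; FILTER_BRANCH_SQUELCH_WARNING just silences the warning newer git versions print):

```shell
# build a scratch repo with a "leaked" wp-config.php in its history
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name demo
echo "define('DB_PASSWORD', 'secret');" > wp-config.php
git add . && git commit -qm "oops, credentials committed"
echo "normal file" > index.php
git add . && git commit -qm "normal work"

# rewrite every commit, dropping wp-config.* from the index
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch \
  --index-filter 'git rm --cached --ignore-unmatch wp-config.*' \
  --tag-name-filter cat HEAD

# purge the old, pre-rewrite objects locally
rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git log --all --raw | grep wp-config || echo "wp-config gone from history"
```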

If using github, the only way is to delete the repo and create a new one. Then, in the tmp repo:

git push origin master
(or push any other branches if need be)
git push --tags


compare file changed in different directories

it might be useful to compare different directories for changed files. the unix diff command rocks.

for example, this command compares 2 directories, i.e. blogs and blogs.bak, recursively, outputting only the files that changed and ignoring anything named .git:

diff -qr -x .git blogs blogs.bak
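A quick way to try it with throwaway directories:

```shell
# two copies of a "blogs" tree that differ in one file, plus noise under .git
mkdir -p demo/blogs/.git demo/blogs.bak/.git
echo "v1" > demo/blogs/post.txt
echo "v2" > demo/blogs.bak/post.txt
echo "noise" > demo/blogs/.git/config   # ignored thanks to -x .git
diff -qr -x .git demo/blogs demo/blogs.bak || true
```

note that diff exits non-zero when differences are found, hence the `|| true` guard; the output mentions only post.txt, never the .git contents.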

Updating ruby to version 1.9.1 in ubuntu 8.04

unfortunately, ubuntu 8.04 hardy comes with ruby version 1.8.6 by default. The latest rubygems doesn't work well with this version, so we need to get a newer version of ruby.

you might want to check that you have these libraries first:

sudo apt-get install libncurses5-dev
sudo apt-get install libreadline5-dev

Download the latest ruby source and compile it:

tar -xzvf ruby-1.9.1-p243.tar.gz
cd ruby-1.9.1-p243
./configure
make
sudo make install

if you get readline-related errors, build and install the readline extension manually:

cd ruby-1.9.1-p243/ext/readline
ruby extconf.rb
make
sudo make install

If you have an old rubygems installed, you can update it with

gem update --system

else you might want to get the latest rubygems file... then extract and install it just like before.

Testing emails in your local vm

testing emails can be tricky because you don't really want to send emails to real users. So we need to turn the email service on and off as and when needed.

To stop postfix:
- remove postfix from the rc.x runlevels so that it doesn't start up at boot (update-rc.d -f postfix remove on debian/ubuntu, for example)
- /etc/init.d/postfix stop (to stop postfix if it is running now)

when you need postfix again, delete the queue first.

check the mail queue with "mailq", then delete all the queued mail:
# postsuper -d ALL
# postsuper -d ALL deferred

now start postfix with "/etc/init.d/postfix start" and do all your testing. when done, remember to turn postfix off again with "/etc/init.d/postfix stop".
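The flush/start/stop steps above can be wrapped in a tiny helper, sketched below (the init.d path assumes a sysvinit-style postfix install; the script is only written out and syntax-checked here, since running it needs root and a real postfix):

```shell
# write the helper script to disk
cat > mailtoggle.sh <<'EOF'
#!/bin/sh
case "$1" in
  on)
    # drain any queued test mail, then start postfix
    postsuper -d ALL
    postsuper -d ALL deferred
    /etc/init.d/postfix start
    ;;
  off)
    /etc/init.d/postfix stop
    ;;
  *)
    echo "usage: $0 on|off" >&2
    exit 1
    ;;
esac
EOF
chmod +x mailtoggle.sh
sh -n mailtoggle.sh && echo "syntax ok"
```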

How to increase the size of virtual machines in Virtualbox

If you created a 20G vm in the first place and want to double the disk size, how do you do it? There are a lot of posts in different forums, but there is no easy way of doing it. However, the theory behind increasing the disk size is not difficult and should work for all distros.

step 1. Create a new empty disk (set it to the new size you want) and attach it to the SATA controller, i.e. you will now boot up with 2 disks instead of 1. In this instance, say my old 20G ubuntu drive is /dev/sda2 and my new 40G ubuntu drive is /dev/sda1.

step 2. Get a linux rescue disk - some installation isos come with one. Boot up with the iso. The idea is to boot up without using the hard disk so that we can perform some magic on the disks. I am using the ubuntu 8.10 installation iso and it works fine for me. Upon booting up, I simply select "repair installation" and follow the prompts. At the end of it all, you should be at a command prompt.

step 3. copy /dev/sda2 over to /dev/sda1

dd if=/dev/sda2 of=/dev/sda1 conv=notrunc
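The same dd idea can be rehearsed safely with plain files standing in for the block devices:

```shell
# 1M "old disk" and 2M "new disk" made from files
dd if=/dev/zero of=small.img bs=1024 count=1024 2>/dev/null
dd if=/dev/zero of=big.img bs=1024 count=2048 2>/dev/null
# conv=notrunc overwrites the start of big.img without truncating it
dd if=small.img of=big.img conv=notrunc 2>/dev/null
wc -c < big.img    # still 2097152 bytes (2M), not shrunk to 1M
```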

step 4. Shut down the vm. Now in virtualbox, detach the old 20G drive so that the ubuntu vm boots up with the new 40G drive. Now boot up the vm.

step 5. Depending on how your disk is partitioned, you can expand whichever partition you wish. My partition table is simple:

root@ubuntu:~/projects/blogs# fdisk -l /dev/sda

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b73d4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        4973    39945591   83  Linux
/dev/sda2            4974        5221     1992060   82  Linux swap / Solaris

/dev/sda1 is mapped to my root folder and I want to expand it, so I run

resize2fs /dev/sda1

**** ALL DONE!!

Making every row uniq in multiple files

There are times when we want unique rows across different files. One good example is when we have emails that span multiple csv files and we don't want them to be duplicated. I wrote this script and it seems to do the job well... Depending on the size of the files, it might take a long time to process - you have been warned!

I called this program "uniqrow"


#!/bin/bash

# make the first file uniq first
awk '!x[$0]++' $1 > $$.tmp
rm $1
mv $$.tmp $1

# now loop compare each file
for (( i=1; i<$#; i++ )); do
  # get curr pointer
  CURR=`eval echo \\$$i`
  echo "processing $CURR ..."

  for (( j=i+1; j<=$#; j++ )); do
    # get next pointer
    NEXT=`eval echo \\$$j`
    echo "removing duplicates in $NEXT from $CURR ..."

    # delete any row in $CURR that also appears in $NEXT
    cat $CURR | while read a; do cat $NEXT | while read b; do if [ "$a" = "$b" ]; then sed -i "/$b/d" $CURR; break; fi; done; done
  done
done

echo "All your files now have unique rows!!"
exit 0
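The nested read/sed loops get slow on big files. A much faster sketch of the same idea (with hypothetical filenames) is a single awk pass that keeps only the first occurrence of every row across all files, writing the results to *.uniq copies:

```shell
# sample data: f2.csv repeats two rows from f1.csv
printf 'a@x.com\nb@x.com\nc@x.com\n' > f1.csv
printf 'b@x.com\nc@x.com\nd@x.com\n' > f2.csv
# a row is printed only the first time it is seen in any file
awk '!seen[$0]++ { print > (FILENAME ".uniq") }' f1.csv f2.csv
cat f2.csv.uniq    # only d@x.com survives
```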

Advanced SVN post-commit hook script for System Administrators

Everyone knows how cool svn's post-commit feature is. Instead of using the basic post-commit script provided by svn, we can do a lot more with a bit of server-side scripting. Here, I would like to share a simple script that I wrote to automate the process of updating system files on different servers using svn post-commit. It works for me... it might work for you as well. There are of course a lot of areas that need improvement.

The first thing to do is to set up the svn repo with the server name as the top dir and all sub directories mirroring the system dirs exactly. I have 2 servers here; taking ares.stag for example:

|-- ares.stag
|   `-- home
|       `-- data
|               `-- vhost.ares.conf
|   |-- mutt
|   |   `-- muttrc
|   |-- vim
|   |   `-- vimrc
|   `-- xen
|   |-- var
|       `-- named
|           `-- chroot
|               |-- etc
|               |   `-- named.conf
|               `-- var
|                   `-- named
|                       |--
|                       |--
|                       |--
|                       `-- linux.stag.db

Every time I want to edit my stag dns for example, I don't need to do it manually on the server. I just edit linux.stag.db on my own desktop, then "svn commit". The post-commit script is then responsible for putting the files that I committed in the right place for me.


# - This is a more advanced svn post-commit script.
# - It is currently used to roll over server config.
# - The script attempts to copy files to different servers based on the
#   current svn dir hierarchy.
# - ssh keys from user@{current_server} to root@{external_server} must exist.
# - The file on the server must exist, else this script will fail.
# - It rolls over modified files; it doesn't delete files.
# author: Bernard Peh
# date: 16 April 2010
# version 1.0
# script created.


# define your admin for email alert

# define log. Leave it as default if you want

# output svn details into log
svn log -v --xml -r$REV file://$REPOS > $LOG

# full path to command

# only copy committed files that are new or modified
$CAT $LOG | $GREP ' action=' | while read x; do SERVER=`echo $x | $AWK -F/ '{print $2}'`; RES=`echo $x | $SED 's/action=\"\(.*\)\">\/'$SERVER'\(.*\)<\/path>/\1 \2/'`; PATH=`echo $RES | $GREP '^\(A\|M\)'`; PATH=`if [ ! -z "$PATH" ]; then echo $PATH | $AWK '{print $2}'; fi;`; $SVN cat file://${REPOS}/${SERVER}${PATH} 2>/dev/null | $SSH root@$SERVER "cat - > $PATH" 2>> $USERLOG; if [ $? -eq 0 ]; then echo "Successfully updated ${SERVER}:${PATH}" >> $USERLOG; fi; done

# mail to admin
$CAT "$USERLOG" | mail -s "$REPOS updated to rev $REV" $EMAIL

# clean up
rm -rf $TMP
rm -rf $LOG
rm -rf $USERLOG

# all good now exit
exit 0;



I want to be able to browse the svn dir from a website and, at the same time, be able to checkout from the repository.


First of all, install mod_dav_svn

Then in the apache config (I am using a vhost, for example):

<VirtualHost *:80>
  DocumentRoot /home/data/
  # this is for svn
  <Location /svn.php>
    RewriteEngine on
    RewriteRule svn.php/([^/\.]+) /svn/$1 [L]
  </Location>
  <Location /svn>
    DAV svn
    # SVNPath /home/data/svn/
    SVNParentPath /home/data/svn
    # Limit write permission to list of valid users.
    # Require SSL connection for password protection.
    # SSLRequireSSL
    AuthType Basic
    AuthName "SVN repository"
    AuthUserFile /etc/httpd/conf.d/subversion.passwd
    Require valid-user
  </Location>
</VirtualHost>
In my document root, /home/data/, I have an svn.php file soft-linked to /home/data/svn (the place where all the svn repositories are). As you can see from above, I also use a basic authentication mechanism for anyone who wishes to checkout from the repo.
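The AuthUserFile itself can be generated without apache's htpasswd tool; for instance with openssl (example user and a local demo path, not the real /etc/httpd/conf.d location):

```shell
# apr1 is the htpasswd-compatible MD5 scheme apache understands
printf 'alice:%s\n' "$(openssl passwd -apr1 secret)" > ./subversion.passwd
grep -c '^alice:' ./subversion.passwd    # 1
```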

Implementing SSL Certificates in Apache

Creating a Private Key

To create a private key without triple des encryption, use the following command:

openssl genrsa -out ssl.key 1024

Creating a Certificate Signing Request

To obtain a certificate signed by a certificate authority, you will need to create a Certificate Signing Request (CSR). The purpose is to send the certificate authority enough information to create the certificate without sending the entire private key or compromising any sensitive information. The CSR also contains the information that will be included in the certificate, such as, domain name, locality information, etc.

Locate the private key that you would like to create a CSR from. Enter the following command:

openssl req -new -key filename.key -out filename.csr

You will be prompted for Locality information, common name (domain name), organizational information, etc. Check with the CA that you are applying to for information on required fields and invalid entries. Send the CSR to the CA per their instructions.

Wait for your new certificate and/or create a self-signed certificate. A self-signed certificate can be used until you receive your certificate from the certificate authority.

It is not necessary to create a self-signed certificate if you are obtaining a CA-signed certificate. However, creating a self-signed certificate is very simple. All you need is a private key and the name of the server (fully qualified domain name) that you want to secure. You will be prompted for information such as locality information, common name (domain name), organizational information, etc. The only required field for the certificate to function correctly is the common name (domain name) field. If this is not present or incorrect, you will receive a Certificate Name Check warning from your browser.

To create a self-signed certificate

openssl req -new -key filename.key -x509 -out filename.crt
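Putting the key and self-signed certificate steps together non-interactively (the -subj common name is a placeholder domain and -days 365 an arbitrary validity period):

```shell
# generate a key, self-sign a cert for it, then inspect the subject
openssl genrsa -out ssl.key 2048 2>/dev/null
openssl req -new -key ssl.key -x509 -days 365 \
  -subj "/CN=www.example.com" -out ssl.crt
openssl x509 -in ssl.crt -noout -subject
```

The exact subject formatting varies between openssl versions, but the common name you supplied should be echoed back.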

Configuring your Apache Server

An example of a secure virtual host:

   <VirtualHost 123.456.789.42:443>
   DocumentRoot /etc/httpd/htdocs
   ErrorLog /etc/httpd/logs/error_log
   TransferLog /etc/httpd/logs/access_log
   SSLEngine on
   SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
   SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key
   SSLCACertificateFile /etc/httpd/conf/ssl.crt/ca-bundle.crt
   <Files ~ "\.(cgi|shtml)$">
         SSLOptions +StdEnvVars
   </Files>
   <Directory "/etc/httpd/cgi-bin">
         SSLOptions +StdEnvVars
   </Directory>
   SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
   CustomLog /etc/httpd/logs/ssl_request_log \
             "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
   </VirtualHost>

The directives that are the most important for SSL are the SSLEngine on, SSLCertificateFile, SSLCertificateKeyFile, and in many cases SSLCACertificateFile directives.