Archive for August, 2010

Bash Tip! for loop on directory listing

Tuesday, August 24th, 2010

One very common task when scripting with bash is to use a for loop to iterate over the contents of a directory or directory tree. There are two primary methods of accomplishing this task: using ls and using find. We'll not consider the manual method, as that would be completely unworthy of our attention.

I find it easy to start with ls when I don't need to recurse into a directory tree, as it's a command I use constantly. That usually turns into a loop like this:

for dir in $(ls)
do
  echo ${dir}
done

Now the above method typically does not work for me. I have an alias set up to print out pretty colors when I issue the ls command, and the color escape codes end up embedded in each value of ${dir}, so every command that operates on the variable fails with a “No such file or directory” error. I always have to remember this and rewrite the command with the flag that disables color output:

for dir in $(ls --color=never)
do
  echo ${dir}
done

The above script works every time for me, though keep in mind that any $(…)-style loop still splits on whitespace, so file names containing spaces will trip it up.
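If you'd rather not remember the flag at all, two other options are to bypass the alias with a leading backslash (\ls) or to let the shell expand a glob itself. A minimal sketch of the glob version (the trailing slash makes the pattern match directories only):

for dir in */
do
  echo "${dir%/}"   # strip the trailing slash left by the glob
done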

The next option is using find. find is awesome and all powerful. Learn and use find. The most common issue when using find is that you may have to filter out the current and/or parent directories when processing the results. Take this example:

for dir in $(find . -maxdepth 1 -type d)
do
  echo ${dir}
done

This loop will print the current directory itself, as well as every directory inside it. If you are running some sort of processing within this loop, you may end up re-processing everything unless you discard the current working directory (denoted by the dot).

This example will not process the current working directory:

for dir in $(find . -maxdepth 1 -type d)
do
  if [ "${dir}" = "." ]
  then
    continue
  fi
  echo "${dir}"
  pushd "${dir}" > /dev/null
  # per-directory processing happens here
  popd > /dev/null
done
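If your version of find supports -mindepth (GNU find does), you can skip the dot check entirely and never see the current directory in the results. A quick sketch of that variant:

for dir in $(find . -mindepth 1 -maxdepth 1 -type d)
do
  echo "${dir}"
done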

Bash for loops are incredibly useful and easy to work with. Use the above tips and make bash work for you.

CIFS over SSH – Extending the network

Friday, August 6th, 2010

I recently had an issue where a file copy from a Celerra NAS to a server outside the network was failing and I couldn't figure out why. The file copy was a pull from the outside server, which needed access inside the network. The BGP route had somehow changed to go over Integra's network rather than Verizon's, and I couldn't get anyone to fess up to blocking ports 445 and 139. To solve this issue, I turned to SSH tunnelling.

To set up a tunnel from inside a protected network that exposes a resource to an external client, you can use the following format:

$ sudo ssh -N -R 445:cifsNAS:445 outsideserver.com

I then created a hosts file entry on the outside server to map cifsNAS to 127.0.0.1.

# /etc/hosts
127.0.0.1  cifsNAS

What this does is SSH to outsideserver.com and open up port 445 on that host (bound to its loopback interface by default), which then tunnels all traffic sent to outsideserver.com:445 back through the SSH connection to cifsNAS:445. This solved my temporary issue and I was able to copy the needed files over.
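Once the tunnel and the hosts entry are in place, the outside server can reach the NAS by name as though it were local. A rough sketch of the copy step using mount.cifs (the share name, mount point, and username below are made up for illustration):

# on outsideserver.com, with the tunnel running
$ sudo mkdir -p /mnt/nas
$ sudo mount -t cifs //cifsNAS/share /mnt/nas -o username=copyuser
$ cp -r /mnt/nas/needed-files /local/destination
$ sudo umount /mnt/nas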