We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.



Counting the number of commas in CSV format

 $ perl -ne 'print tr/,//, "\n"' < file.csv | sort -u

— by Janos on Dec. 1, 2013, 1:03 p.m.


Sometimes I need to know if a CSV file has the right number of columns, and how many columns there are.

The tr/// operator in perl is normally used to convert a set of characters to another set of characters, but when used in a scalar context like in this example, it returns the number of matches of the specified characters, in this case a comma.

The perl command above prints the number of commas in every line of the input file. sort -u sorts this and outputs only the unique lines. If all lines in the CSV file have the same number of commas, there should be one line of output. The number of columns in the file is this number + 1.


This one-liner does not handle the more general case when the columns may have embedded commas within quotes. For that you would need a more sophisticated method. This simple version can still be very useful in many common cases.
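
If you do need to handle quoted fields, a sketch using the Text::CSV perl module (assuming it is installed) prints the number of columns per line directly, instead of the number of commas:

perl -MText::CSV -le '$c = Text::CSV->new({ binary => 1 }); while ($r = $c->getline(\*STDIN)) { print scalar @$r }' < file.csv | sort -u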


Count the lines of each file extension in a list of files

 $ git ls-files | xargs wc -l | awk -F ' +|\\.|/' '{ sumlines[$NF] += $2 } END { for (ext in sumlines) print ext, sumlines[ext] }'

— by Janos on Nov. 9, 2013, 11:49 a.m.


The pipeline:

  • git ls-files -- produces the list of files in a Git repository. It could be anything else that produces a list of filenames, for example: find . -type f
  • xargs wc -l -- run wc -l to count the lines in the filenames coming from standard input. The output is the line count and the filename
  • The final awk command does the main work: extract the extension name and sum the line counts:
  • -F ' +|\\.|/' -- use as field separator multiples of spaces, or a dot, or a slash (see the example after this list)
  • { sumlines[$NF] += $2 } -- $NF contains the value of the last field, which is the filename extension, thanks to the dot in the field separator, and $2 contains the value of the second field in the input, which is the line count. As a result, we are building the sumlines associative array, summing up the line counts of files with the same extension
  • END { for (ext in sumlines) print ext, sumlines[ext] } -- after all lines have been processed, print each extension and its total line count.
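
To see how that field separator carves up a typical line of wc -l output (the filename here is made up for illustration):

$ echo '  120 src/lib/helpers.py' | awk -F ' +|\\.|/' '{ print $NF, $2 }'
py 120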


Add all unknown files in a Subversion checkout

 $ svn add . --force

— by Janos on Sept. 24, 2013, 7:59 a.m.


Adding all unknown files in a working tree is usually very simple in other version control systems, for example:

git add .
bzr add

Not so simple in Subversion:

$ svn add .
svn: warning: '.' is already under version control

But if you add the --force flag, that will do!

Keep in mind that this is not the same as:

svn add * --force

That would add not only unknown files, but ignored files too, which is probably not your intention. Make sure to specify directories explicitly and avoid using * with this command.


Find files that are not executable

 $ find /some/path -type f ! -perm -111 -ls

— by Janos on Sept. 18, 2013, 9:14 p.m.


The key is writing the parameter of -perm correctly. The value -111 means that all execution bits must be set: user and group and other too. By negating this pattern with ! we get files that are missing any of the execution bits.

If you want to be more specific, for example find files that are not executable specifically by the owner, you could do it like this:

find /some/path -type f ! -perm -100 -ls

The -ls option is to print the found files using a long listing format similar to the ls command.
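
With GNU find you can also match "any of the bits" using a slash. For example, to find files that have none of the execution bits set at all (older GNU versions spelled this +111):

find /some/path -type f ! -perm /111 -ls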


Find which log files contain or don't contain a specific error message

 $ for i in *.log; do grep OutOfMemo "$i" >/dev/null && echo "$i oom" || echo "$i ok"; done

— by Janos on Sept. 13, 2013, 3:43 p.m.


In this example I was looking for a list of log files which contain or don't contain a stack trace of OutOfMemoryError events.

  • for i in *.log is to loop over the list of files.
  • For each file, I run grep, but redirect the output to /dev/null, as I don't need that, I just want to see a "yes or no" kind of summary for each file
  • grep exits with success if it found any matching lines, otherwise with failure. Using the pattern cmd && success || failure, I echo the filename and the text "oom" in case of a match, or "ok" otherwise


  • Using grep -q is equivalent to redirecting output to /dev/null, but might not be supported on all systems
  • grep -l can be used to list files with matches, and grep -L to list files without matches, but the latter does not exist in some implementations of grep, such as BSD
  • I realized it a bit late, but grep -c shows a count of the matches, so actually it could have been a suitable and simpler solution
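
For example, because grep prefixes each count with the filename when it is given multiple files, the whole loop could be replaced with a single command (the output below is illustrative):

$ grep -c OutOfMemo *.log
app1.log:0
app2.log:3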


Create a transparent image of given dimensions

 $ convert -size 100x100 xc:none transparency.png

— by Janos on July 31, 2013, 11:32 p.m.


  • convert is a tool that's part of the ImageMagick image manipulation library
  • -size 100x100 specifies the dimensions of the image to create
  • xc:none is a pseudo-image: a canvas filled with the color none, that is, fully transparent
  • transparency.png is the destination filename, the image format is automatically determined by the extension


Requires the ImageMagick image manipulation library.


Create a heap dump of a Java process

 $ jmap -dump:format=b,file=/var/tmp/dump.hprof 1234

— by Janos on July 8, 2013, 8:18 a.m.


  • Create a heap dump from the running Java process with PID=1234
  • The heap dump will be saved in /var/tmp/dump.hprof in binary format
  • You can open the dump with "MAT", the Memory Analyzer Tool (based on Eclipse) and identify objects that use most of the memory and potential memory leaks

For more options see jmap -h
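
For example, to dump only live (reachable) objects, which triggers a full GC first, you can add the live suboption:

jmap -dump:live,format=b,file=/var/tmp/dump.hprof 1234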


Insert lines from one text file to another one

 $ awk 'NR % 10 == 1 {getline f2 < "file1"; print f2} 1' file2 | cat -n

— by openiduser102 on June 22, 2013, 9:30 a.m.


An alternative to the sed version below, with line numbers: before every 10th line of file2 (lines 1, 11, 21, ...), it reads and prints the next line from file1; the trailing 1 prints every line of file2 itself. cat -n numbers the combined output.


Insert lines from one text file to another one

 $ sed -re ':a;Rfile1' -e 'x;s/^/./;/.{10}/!{x;ba};s/.*//;x' file2

— by openiduser102 on June 22, 2013, 9:29 a.m.


This command reads the first line from file2 and then 10 lines from file1, then the second line from file2 and the next 10 lines from file1 and so on.


Works in GNU sed.


Check that a directory is a parent of another

 $ is_parent() { [[ "$2" =~ $1/? ]]; }

— by Couannette on June 13, 2013, 11:03 p.m.


The function expanded would look like this:

is_parent() {
    if [[ "$2" =~ $1/? ]]; then
        echo "$2 is child of $1"
        return 0
    else
        echo "$2 is NOT child of $1 ($?)"
        return 1
    fi
}

Create fattal tone mapped images from a directory of raw images

 $ for img in /path/to/rawimages/*.RW2; do pfsin "$img" | pfssize -x 1024 -y 768 | pfstmo_fattal02 -v -s 1 | pfsout /path/to/finished/"$(basename "${img%.RW2}").jpg"; done

— by mmaki on June 3, 2013, 10:45 p.m.


  • for img in /path/to/rawimages/*.RW2; do -- loop over the raw images in the directory
  • pfsin "$img" -- read the raw image
  • pfssize -x 1024 -y 768 -- resize it to 1024x768, because fattal looks better at low resolutions
  • pfstmo_fattal02 -v -s 1 -- apply the fattal tone mapping operator, verbosely, with a saturation value of 1
  • pfsout /path/to/finished/"$(basename "${img%.RW2}").jpg"; done -- write the result to the destination directory, renaming the file to .jpg

Examples of fattal tone mapped images http://goo.gl/IayQQ

pfstools website http://pfstools.sourceforge.net/


Portrait orientation images need to be processed with -x 768 -y 1024.


Send a file by email as attachment

 $ uuencode /var/log/messages messages.txt | mailx -s "/var/log/messages on $HOST" me@example.com

— by Janos on May 26, 2013, 9:37 a.m.


  • uuencode /var/log/messages messages.txt -- the first parameter is the file to attach, the second is the filename to use for the attachment
  • mailx -s subject emailaddress -- takes standard input as the content of the email


Calculate md5sum from an input string

 $ md5sum <<< YOUR_TEXT | cut -f1 -d' '

— by kowalcj0 on May 17, 2013, 8:17 p.m.


Calculate an MD5 sum/digest from an input string.

Wrap it up in a function:

function md5() { md5sum <<< "$1" | cut -f1 -d' '; }

Example usage:

md5 "this is a long string test_string"
md5 singleWordExample
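
Note that the here-string appends a trailing newline, so this computes the digest of the text plus a newline character. If you need the digest of the exact string, a variant using printf avoids the newline:

md5() { printf '%s' "$1" | md5sum | cut -f1 -d' '; }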


Get streamed FLV from Chrome with lsof

 $ export psid=$(pgrep -f libflashplayer.so); cp /proc/$psid/fd/$(lsof -p $psid | grep eleted | awk '{ print $4 }' | sed -e "s/[a-z]//g") saved.flv

— by GNA on May 11, 2013, 10:55 p.m.


First get the process ID of the Chrome browser process that runs the Flash player and export it to a variable for later use. Then, in a subshell, we find the file descriptor of the deleted video file (lsof marks it "(deleted)"), strip the mode letters from the descriptor number with sed (for example 11u becomes 11), construct the /proc path to the in-memory file image, and copy it to a file named saved.flv.


IMPORTANT: only one video should be open to play in chrome


Rename all files in a directory to upper case

 $ for i in *; do mv "$i" "${i^^}"; done

— by EvaggelosBalaskas on April 20, 2013, 9:53 p.m.


Loop over the items in the current directory, and use Bash built-in case modification expansion to convert to upper case.
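
The mirror operation, converting names to lower case, uses the ,, expansion instead:

for i in *; do mv "$i" "${i,,}"; done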


The case modification expansion is available since Bash 4.


Print file owners and permissions of a directory tree

 $ find /path/to/dir1 -printf "%U %G %m %p\n" > /tmp/dir1.txt

— by Janos on March 19, 2013, 10:51 p.m.


The command simply traverses the specified directory tree and for each file and directory it prints the UID of the owner, GID of the group, the permission bits and the path.

To compare file owners and permissions of two directory trees you can run this command for each directory, save the output in two files and then compare them using diff or similar.

See man find for more explanation of all the possible symbols you can use with -printf
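
For example, to compare two trees independently of their leading paths, a sketch using %P (the path with the starting point removed, GNU find) and sorted output:

find /path/to/dir1 -printf "%U %G %m %P\n" | sort > /tmp/dir1.txt
find /path/to/dir2 -printf "%U %G %m %P\n" | sort > /tmp/dir2.txt
diff /tmp/dir1.txt /tmp/dir2.txt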


The -printf option does not exist in find on Solaris 10.


Sort and remove duplicate lines from two (or more) files; display only the lines that are unique across them

 $ sort file1 file2 | uniq -u

— by EvaggelosBalaskas on March 6, 2013, 8:58 a.m.


The -u flag of uniq prints only the lines that appear exactly once in the (sorted) input; all repeated lines are dropped entirely.

Example file1:

apple
banana

Example file2:

banana
cherry

Output:

apple
cherry


Get only the latest version of a file from across multiple directories

 $ find . -name custlist\* | perl -ne '$path = $_; s?.*/??; $name = $_; $map{$name} = $path; ++$c; END { print $map{(sort(keys(%map)))[$c-1]} }'

— by Janos on Feb. 23, 2013, 4:23 p.m.


The purpose of the one-liner is to find the "latest" version of the custlist_*.xls file from among multiple versions in directories and sub-directories, for example (an illustrative layout):

./archive/custlist_v1.999.xls
./orders/custlist_v2.000.xls
./orders/2013/custlist_v2.001.xls

Let's decompose the one-liner to the big steps:

  • find . -name custlist\* -- find the files matching the target pattern
  • ... | perl -ne '...' -- run perl, with the input wrapped around in a while loop so that each line in the input is set in the variable $_
  • $path = $_; s?.*/??; $name = $_; -- save the full path in $path, and cut off the subdirectory part to get to the base name of the file and save it in $name
  • $map{$name} = $path; -- build a mapping of $name to $path
  • ++$c; -- count the input lines, so we can index the last element later
  • (sort(keys(%map)))[$c-1] -- sort the keys of the map, and take the last element, which is custlist_v2.001.xls in this example
  • END { print $map{...} } -- after all input has been processed, print the path stored for that last key, which is the latest version of the file


Even if the latest version of the file appears multiple times in the directories, the one-liner will print only one of the paths. This could be fixed though if needed.


Recreate or update an existing zip file and remove files that do not exist anymore

 $ zip --filesync -r /path/to/out.zip /path/to/dir

— by Janos on Jan. 26, 2013, 8:48 p.m.


zip does not have an explicit option to overwrite/recreate an existing zip file. If the specified destination file already exists, zip updates it. The problem is that files which already existed in the zip, but which you did not specify again, will not be removed.

For example let's say you created a zip file from a directory with the command:

zip -r /path/to/out.zip /path/to/dir

Next you delete some files from the directory and repeat the command to recreate the zip. But that will not recreate the zip, it will only update it, so the files you deleted from the directory will still be there in the zip.

One way to recreate the zip is to delete the file first. Another, better way is to use the --filesync or -FS flag. With this flag zip will remove files from the zip that do not exist anymore in the filesystem. This is more efficient than recreating the zip.


How to expand a CIDR notation to its IPs

 $ for j in $(seq 0 255); do for i in $(seq 0 255) ; do seq -f "10.$j.$i.%g" 0 255; done; done

— by EvaggelosBalaskas on Jan. 16, 2013, 11:53 a.m.


Two nested for loops generate the second and third octets of the IP, and a formatted seq prints the 256 addresses of the last octet.

More efficient, and uses less memory, than brace expansion ranges like {0..255}, which are fully expanded in memory before use.
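
For example, to expand a smaller network such as 192.168.0.0/16, a single loop along the same lines is enough:

for i in $(seq 0 255); do seq -f "192.168.$i.%g" 0 255; done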


seq is not available by default in some systems.


Make the output of the `time` builtin easier to parse

 $ TIMEFORMAT=%R

— by Janos on Dec. 4, 2012, 10:43 p.m.


The time builtin prints a summary of the real time, user CPU time and system CPU time spent executing commands, for example:

$ time sleep 1

real    0m1.002s
user    0m0.000s
sys     0m0.002s

If you need to parse this output, it helps to simplify it using the TIMEFORMAT variable. The value %R means "the elapsed time in seconds", for example:

$ TIMEFORMAT=%R
$ time sleep 1
1.002

The complete documentation of the format definition is in man bash, search for TIMEFORMAT.


Remove EXIF data such as orientation from images

 $ mogrify -strip /path/to/image.jpg

— by Janos on Oct. 24, 2012, 12:08 a.m.


I use this mostly to remove orientation information from images. My problem with orientation information is that some viewers don't support it, and thus do not show the image correctly oriented. Rotating the image doesn't help, because if I make the image look correct in the viewer that doesn't support orientation, that will break it in the viewer that does support orientation. The solution is to remove the orientation information and rotate the image appropriately. That way the image will always look the same in all viewers, regardless of support for the orientation information.

The tool mogrify is part of ImageMagick, an image manipulation software suite. It manipulates image files and saves the result in the same file. A similar ImageMagick tool is convert, which saves the result in a separate output file; you can use it like this:

convert -strip orig.jpg stripped.jpg


Requires the ImageMagick image manipulation library.


Get the last modification date of a file in any format you want

 $ date -r /etc/motd +%Y%m%d_%H%M%S

— by Janos on Oct. 17, 2012, 4:42 p.m.


The -r flag is a shortcut of --reference and it is used to specify a reference file. Used in this way, the date command prints the last modification date of the specified file, instead of the current date.

The + controls the output format, for example:

  • %Y = 4-digit year
  • %m = 2-digit month
  • %d = 2-digit day
  • %H = 2-digit hour
  • %M = 2-digit minutes
  • %S = 2-digit seconds

So in this example +%Y%m%d_%H%M%S becomes 20121001_171233

You should be able to find all the possible format specifiers in man date.


The default date command in Solaris does not support the --reference flag. Modern Solaris systems have the GNU tools installed, so you may be able to find the GNU implementation of date which supports this flag. Look for it in /usr/gnu/bin/date or /usr/local/bin/date, or search the entire /usr with find /usr -name date.

In Solaris this may be a suitable substitute without using the date command:

ls -Ego /etc/motd | awk '{print $4 "_" $5}' | tr -d :- | sed -e 's/\..*//'

Or you can use good old perl:

perl -mPOSIX -e 'print POSIX::strftime("%Y%m%d_%H%M%S\n", localtime((stat("/etc/motd"))[9]))'


Forget all remembered path locations

 $ hash -r

— by Janos on Oct. 14, 2012, 9:46 a.m.


bash remembers the full path name of each command you enter, so it doesn't have to look it up in $PATH every single time you run the same thing. It also counts the number of times you used each command in the current session; you can see the list with hash.

Anyway, this behavior can pose a small problem when you reinstall an application at a different path. For example you reinstall a program that used to be in /usr/local/bin and now it is in /opt/local/bin. The problem is that if you used that command in the current shell session, bash will remember the original location, which of course doesn't work anymore. To fix that, you can either run hash cmd, which will look up the command again, or run hash -r to forget all remembered locations (less efficient, but maybe faster to type ;-)

For more details, see help hash


Rename files with numeric padding

 $ perl -e 'for (@ARGV) { $o = $_; s/\d+/sprintf("%04d", $&)/e; print qq{mv "$o" "$_"\n}}'

— by Janos on Oct. 6, 2012, 1:38 p.m.


Basically a one-liner perl script. Specify the files to rename as command line parameters, for example:

perl -e '.....' file1.jpg file2.jpg

In this example the files will be renamed to file0001.jpg and file0002.jpg, respectively. The script does not actually rename anything. It only prints the shell commands that would perform the renaming. This way you can check first what the script would do, and if you want to actually do it, then pipe the output to sh like this:

perl -e '.....' file1.jpg file2.jpg | sh
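
For the example above, the dry run (without the pipe to sh) prints exactly:

mv "file1.jpg" "file0001.jpg"
mv "file2.jpg" "file0002.jpg"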

What's happening in the one-liner perl script:

  • for (@ARGV) { ... } is a loop, where each command line argument is substituted into the auto-variable $_.
  • $o = $_ :: save the original filename
  • s/// :: perform pattern matching and replacement on $_
  • print qq{...} :: print the mv command, with correctly quoted arguments


The script does not cover all corner cases. For example it will not work with files that have double-quotes in their names. In any case, it is safe to review the output of the script first before piping it to sh.

If your system has the rename command (Linux), then a shortcut to do the exact same thing is with:

rename 's/\d+/sprintf("%04d", $&)/e' *.jpg

It handles special characters better too.