We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.



Print the lines of file2 that are missing in file1

 $ grep -vxFf file1 file2

— by Janos on Feb. 8, 2012, 2:42 p.m.


  • -f is to specify a file with the list of patterns: file1
  • -F is to treat the patterns as fixed strings, without using regular expressions
  • -x is to match exactly the whole line
  • -v is to select non-matching lines

The result is effectively the same as:

diff file1 file2 | grep '^>' | sed -e s/..//


The flags of grep might work differently depending on the system, so you might prefer the second way, which should work everywhere. Nonetheless, the various flags of grep are interesting.
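As a quick sanity check, here is the one-liner on two tiny illustrative files (the file names and contents are invented for the demo):

```shell
# file2 has one line that file1 lacks
printf 'apple\nbanana\n'         > file1
printf 'apple\nbanana\ncherry\n' > file2
grep -vxFf file1 file2   # prints: cherry
```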


Uses 'at' to run an arbitrary command at a specified time.

 $ echo 'play alarmclock.wav 2>/dev/null' | at 07:30 tomorrow

— by Anon5MAQumYj on Feb. 4, 2012, 11:03 a.m.


at 07:30 tomorrow schedules a job for 7:30 AM the next day, running whatever command or script is fed to it as standard input. The format for specifying time and date is rather flexible. http://tinyurl.com/ibmdwat

echo 'play alarmclock.wav 2>/dev/null' | feeds the play alarmclock.wav command to at, while 2>/dev/null causes the text output of play to be thrown away (we are only interested in the alarm sound).


Calculate an h index from an EndNote export

 $ MAX=$(NUM=1;cat author.xml |perl -p -e 's/(Times Cited)/\n$1/g'|grep "Times Cited" |perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g'|sort -nr | while read LINE; do if [ $LINE -ge $NUM ]; then echo "$NUM"; fi; NUM=$[$NUM+1]; done;); echo "$MAX"|tail -1

— by openiduser14 on Feb. 4, 2012, 1:06 a.m.


EndNote?! I know, but sometimes we have Windows users as friends.
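For comparison, once the citation counts have been extracted to one number per line, the h-index itself (the largest h such that h papers have at least h citations each) can be computed with a much shorter sort | awk sketch; the data below is invented:

```shell
# Sorted descending, the h-index is the last row where citations >= rank
printf '10\n8\n5\n4\n3\n' > citations.txt
sort -nr citations.txt | awk '$1 >= NR { h = NR } END { print h }'
# prints: 4
```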


Cut select pages from a pdf file and create a new file from those pages.

 $  pdftk input.pdf cat 2-4 7 9-10 output output.pdf

— by mmaki on Feb. 3, 2012, 6:50 a.m.


pdftk is the PDF Toolkit.

input.pdf is the input file.

cat 2-4 7 9-10 concatenates (combines) pages 2, 3, 4, 7, 9, 10 of input.pdf.

output output.pdf names the resulting PDF file containing the above pages.


Find in files, recursively

 $ find /etc -type f -print0 2>/dev/null | xargs -0 grep --color=auto -Hn 'nameserver' 2>/dev/null

— by openiduser21 on Feb. 2, 2012, 7:32 p.m.


The example above finds every file in /etc containing the string nameserver and displays each matching line with its file name and line number. Sample output:

/etc/ppp/ip-up.d/0dns-up:9:# Rev. Dec 22 1999 to put dynamic nameservers last.
/etc/ppp/ip-up.d/0dns-up:23:# nameservers given by the administrator. Those for which 'Dynamic' was chosen
/etc/ppp/ip-up.d/0dns-up:24:# are empty. 0dns-up fills in the nameservers when pppd gets them from the
/etc/ppp/ip-up.d/0dns-up:26:# 'search' or 'domain' directives or additional nameservers. Read the
/etc/ppp/ip-up.d/0dns-up:77:# nameserver lines to the temp file.
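The same pattern can be tried safely on a scratch directory instead of /etc (all paths and contents below are invented for the demo):

```shell
mkdir -p demo/sub
printf 'nameserver 8.8.8.8\n' > demo/sub/resolv.conf
printf 'unrelated\n'          > demo/other.txt
find demo -type f -print0 | xargs -0 grep -Hn 'nameserver'
# prints: demo/sub/resolv.conf:1:nameserver 8.8.8.8
```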


Re-compress a gzip (.gz) file to a bzip2 (.bz2) file

 $ time gzip -cd file1.tar.gz 2>~/logfile.txt | pv -t -r -b -W -i 5 -B 8M | bzip2 > file1.tar.bz2 2>>~/logfile.txt

— by DAVEB on Feb. 1, 2012, 6:02 p.m.


Requires pv (Pipe Viewer) if you want to monitor throughput; otherwise you can leave out the pv pipe.

Transparently decompresses an arbitrary .gz file (does not have to be a tar) and re-compresses it to bzip2, which has better compression and error recovery. Echoes error messages to a file named logfile.txt in your home directory.

NOTE: The original .gz file will NOT be deleted. If you want to save space, you will have to delete it manually.
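Without the timing and progress monitoring, the core of the pipeline reduces to this sketch (file names and contents invented; assumes gzip and bzip2 are installed):

```shell
printf 'hello world\n' > data.txt
gzip -c data.txt > data.txt.gz              # make a sample .gz file
gzip -cd data.txt.gz | bzip2 > data.txt.bz2 # re-compress without a temp file
bzip2 -cd data.txt.bz2                      # prints: hello world
```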


Test your hard drive speed

 $ time (dd if=/dev/zero of=zerofile bs=1M count=500;sync);rm zerofile

— by DAVEB on Feb. 1, 2012, 5:35 p.m.


Creates a 500 MB file of zeroes and times how long it takes to finish writing the entire thing to disk (sync flushes the file system buffers).

time measures the entire dd + sync operation, and then the temporary file is removed.


Works with Bash; not tested in other environments
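A smaller variant of the same idea for a quick try-out (10 MB instead of 500 MB; the sizes are illustrative, and the timing output appears on stderr):

```shell
time (dd if=/dev/zero of=zerofile bs=1M count=10 2>/dev/null; sync); rm zerofile
```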


Recursively remove all empty sub-directories from a directory tree

 $ find . -depth  -type d  -empty -exec rmdir {} \;

— by openiduser16 on Jan. 31, 2012, 11:15 p.m.


Recursively remove all empty sub-directories from a directory tree using just find. No need for tac (-depth takes care of the ordering), no need for xargs as the directory contents change on each call to rmdir. We are not reliant on the rmdir command deleting just empty dirs; -empty takes care of that.


This makes one rmdir call per directory instead of bunching arguments into one command line with xargs, which is normally less efficient. However, since -empty passes only empty directories to rmdir, this could end up being more efficient, with fewer executions in most cases (searching / for example).
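A demonstration on a throwaway tree (names invented) shows why -depth matters: nested directories that only become empty during the run are removed too.

```shell
mkdir -p tree/a/empty1 tree/b/empty2/empty3
touch tree/a/file.txt
find tree -depth -type d -empty -exec rmdir {} \;
find tree -type d | sort
# prints:
# tree
# tree/a
```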


Group count sort a log file

 $ A=$(FILE=/var/log/myfile.log; cat $FILE | perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' | sort -u | while read LINE; do grep "$LINE" $FILE | wc -l | perl -p -e 's/[^0-9]+//g'; echo -e "\t$LINE"; done;);echo "$A"|sort -nr

— by openiduser14 on Jan. 31, 2012, 6:49 p.m.


  • BASH: a temp var for the final sort: A=$(
  • the file you want: FILE=/var/log/myfile.log
  • dump the file to a stream: cat $FILE |
  • cut out the bits you want to count: perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' |
  • get a unique list: sort -u |
  • for each line/value in the stream do stuff: while read LINE; do
  • dump all lines matching the current value to an inner stream: grep "$LINE" $FILE |
  • count them: wc -l |
  • clean up the output of wc and drop the value on stdout: perl -p -e 's/[^0-9]+//g';
  • drop the current value to stdout: echo -e "\t$LINE";
  • finish per value operations on the outer stream: done;
  • finish output to the temp var: );
  • dump the temp var to a pipe: echo "$A" |
  • sort the list numerically in reverse: sort -nr
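Note that the per-value grep loop re-reads the log file once for every unique value. Once the values have been extracted, the classic sort | uniq -c | sort -nr idiom produces the same counts in a single pass; the sample data below is invented:

```shell
printf 'ERROR\nWARN\nERROR\nINFO\nERROR\nWARN\n' | sort | uniq -c | sort -nr
#   3 ERROR
#   2 WARN
#   1 INFO
```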


Use ghostscript to shrink PDF files

 $ gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

— by openiduser10 on Jan. 31, 2012, 10:43 a.m.


Replace input.pdf and output.pdf with the original PDF name and the new compressed version's file name respectively. The key to this is the PDFSETTINGS option which can be tuned for various levels of compression. For scanned text documents, I find the ebook setting works well enough for most purposes but you can experiment with the options below.

  • -dPDFSETTINGS=/screen (screen-view-only quality, 72 dpi images)
  • -dPDFSETTINGS=/ebook (low quality, 150 dpi images)
  • -dPDFSETTINGS=/printer (high quality, 300 dpi images)
  • -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi images)
  • -dPDFSETTINGS=/default (almost identical to /screen)


How to find all hard links to a file

 $ find /home -xdev -samefile file1

— by openiduser7 on Jan. 30, 2012, 8:56 p.m.


Note: replace /home with the location you want to search

Source: http://linuxcommando.blogspot.com/2008/09/how-to-find-and-delete-all-hard-links.html
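A minimal illustration with a scratch file (names invented): a hard link shares the same inode as the original, which is exactly what -samefile matches.

```shell
touch file1
ln file1 link1                     # create a hard link to the same inode
find . -xdev -samefile file1 | sort
# prints:
# ./file1
# ./link1
```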


Find all the unique 4-letter words in a text

 $ cat ipsum.txt | perl -ne 'print map("$_\n", m/\w+/g);' | tr A-Z a-z | sort | uniq | awk 'length($1) == 4 {print}'

— by Janos on Jan. 29, 2012, 10:28 p.m.


  • The perl regex pattern m/\w+/g matches runs of consecutive word characters, resulting in a list of all the words in the source string
  • map("$_\n", @list) transforms a list, appending a new-line at the end of each element
  • tr A-Z a-z transforms uppercase letters to lowercase
  • In awk, length($1) == 4 {print} means: for lines matching the filter condition "length of the first column is 4", execute the block of code, in this case simply print
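With GNU grep, the same extraction can be written more compactly; this is an alternative sketch, not the author's pipeline, and the sample input is invented:

```shell
printf 'Lorem ipsum amet sed AMET\n' > ipsum.txt
grep -oE '\w+' ipsum.txt | tr 'A-Z' 'a-z' | awk 'length == 4' | sort -u
# prints: amet
```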


Concatenate PDF files using GhostScript

 $ gs -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=output.pdf -dBATCH file1.pdf file2.pdf file3.pdf

— by Janos on Jan. 26, 2012, 8:51 a.m.


Free PDF editing software might become more and more available, but this method has been working for a long time, and likely will continue to do so.


It may not work with all PDFs, for example files that don't conform to Adobe's published PDF specification.


Format text with long lines to text with fixed width

 $ fmt -s -w80 file.txt

— by Janos on Jan. 22, 2012, 10:08 a.m.


  • It will break lines longer than 80 characters at appropriate white spaces to make them at most 80 characters long.
  • The -s flag makes fmt split long lines only, without joining shorter lines together (no refilling). For uniform spacing (one space between words, two after sentences) see the -u flag.
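A quick illustration with a narrower width (file name and text invented); every output line ends up at most 20 characters long:

```shell
printf 'one two three four five six seven eight nine ten\n' > long.txt
fmt -s -w20 long.txt
```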


Come back quickly to the current directory after doing some temporary work somewhere else

 $ pushd /some/where/else; work; cd /somewhere; work; cd /another/place; popd

— by Janos on Jan. 15, 2012, 11:12 p.m.


  • pushd, popd and dirs are bash builtins, you can read about them with help dirs
  • bash keeps a stack of "remembered" directories, and this stack can be manipulated with the pushd and popd builtins, and displayed with the dirs builtin
  • pushd will put the current directory on top of the directory stack. So, if you need to change to a different directory temporarily and you know that eventually you will want to come back to where you are, it is better to change directory with pushd instead of cd. While working on the temporary task you can change directories with cd several times, and in the end when you want to come back to where you started from, you can simply do popd.
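A minimal sketch of the stack behaviour (directory names invented; requires bash for the pushd/popd builtins):

```shell
mkdir -p start other
cd start
pushd ../other > /dev/null   # remember start, go to other
basename "$PWD"              # prints: other
popd > /dev/null             # return to where we started
basename "$PWD"              # prints: start
```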


Export a git project to a directory

 $ git archive master | tar x -C /path/to/dir/to/export

— by Janos on Jan. 12, 2012, 11:04 a.m.


The git archive command basically creates a tar file. The one-liner is to create a directory instead, without an intermediate tar file. The tar command above will untar the output of git archive into the directory specified with the -C flag. The directory must exist before you run this command.
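A throwaway-repository sketch of the whole flow; HEAD is used instead of master since the default branch name varies between git versions, and all names are invented:

```shell
git init -q repo
cd repo
echo 'hello' > a.txt
git add a.txt
git -c user.email=you@example.com -c user.name=you commit -qm 'init'
mkdir -p ../export                  # the target directory must exist
git archive HEAD | tar x -C ../export
cat ../export/a.txt                 # prints: hello
```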


Delete all tables of a mysql database

 $ mysql --defaults-file=my.cnf -e 'show tables' | while read t; do mysql --defaults-file=my.cnf -e 'drop table '$t; done

— by Janos on Jan. 8, 2012, 7:53 a.m.


If you have a root access to the database, a drop database + create database is easiest. This script is useful in situations where you don't have root access to the database.

First prepare a file my.cnf to store the database credentials, so you don't have to enter them on the command line, for example:

[client]
host=localhost
user=someuser
password=somepassword
database=somedatabase

Make sure to protect this file with chmod go-rwx.

The one-liner will execute show tables on the database to list all tables. Then the while loop reads each table name line by line and executes a drop table command.


The above solution is lazy: the first line in the output of show tables is a column header, not a table name, so you will see an error for it when you run the loop. Adding the -N (skip column names) option to the first mysql command avoids this. But hey, shell scripts are meant to be lazy!


Run remote X11 applications with ssh

 $ ssh -X servername

— by versorge on Jan. 5, 2012, 7:50 a.m.


You could follow this command with any other call to an X app, e.g. xeyes &


This works only if X11 forwarding is permitted on the ssh server (X11Forwarding yes in sshd_config).


Calculate the total disk space used by a list of files or directories

 $ du -s file1 dir1 | awk '{sum += $1} END {print sum}'

— by Janos on Dec. 28, 2011, 8:42 p.m.


  • This is really simple, the first column is the size of the file or the directory, which we sum up with awk and print the sum at the end.
  • Use du -sk to count in kilobytes, du -sm to count in megabytes (not available in some systems)
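For example, on two scratch directories (names and contents invented; the exact total depends on the file system's block size, so only a number is predictable):

```shell
mkdir -p d1 d2
printf 'hello\n' > d1/a.txt
printf 'world\n' > d2/b.txt
du -sk d1 d2 | awk '{sum += $1} END {print sum}'   # prints the total size in KB
```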


Concatenate two or more movie files into one using mencoder

 $ mencoder cd1.avi cd2.avi -o movie.avi -ovc copy -oac copy

— by Janos on Dec. 24, 2011, 3:51 p.m.


  • You can specify as many files as you want on the command line to process them in sequence.
  • -ovc copy simply means to copy the video exactly
  • -oac copy simply means to copy the audio exactly
  • -o movie.avi is the output file, with all the source files concatenated


  • mencoder is usually not a standard package
  • mencoder may be in the same package as mplayer, or maybe not
  • mencoder has binary packages for Linux, Mac and Windows

See the MPlayer homepage for more info: http://www.mplayerhq.hu/


Calculate the average execution time (of short running scripts) with awk

 $ for i in {1..10}; do time some_script.sh; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'

— by Janos on Dec. 21, 2011, 8:50 a.m.


  • The for loop runs some_script.sh 10 times, measuring its execution time with time
  • The stderr of the for loop is redirected to stdout, this is to capture the output of time so we can grep it
  • grep ^real is to get only the lines starting with "real" in the output of time
  • sed is to delete the beginning of the line up to and including the minutes part (in the output of time), leaving only the seconds
  • For each line, awk adds to the sum, so that in the end it can output the average, which is the total sum, divided by the number of input records (= NR)


The snippet assumes that the running time of some_script.sh is less than 1 minute, otherwise it won't work at all. Depending on your system, the time builtin might work differently. Another alternative is to use the time command /usr/bin/time instead of the bash builtin.
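The awk averaging step can be seen in isolation with invented timings in place of the real time output:

```shell
printf '0.10\n0.20\n0.30\n' | awk '{sum += $1} END {print sum / NR}'
# prints: 0.2
```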


Check the performance of a script by re-running many times while measuring the running time

 $ for i in {1..10}; do time curl http://localhost:8000 >/dev/null; done 2>&1 | grep real

— by Janos on Dec. 17, 2011, 1:49 a.m.


  • {1..10} creates a sequence from 1 to 10, for running the main script 10 times
  • 2>&1 redirects stderr to stdout, this is necessary to capture the "output" of the time builtin
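The capture mechanics can be checked with a trivial command in place of curl; this requires bash for the {1..3} brace expansion and the time keyword:

```shell
for i in {1..3}; do time true; done 2>&1 | grep real | wc -l
# prints: 3
```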


A convenient way to re-run the previous command with sudo

 $ sudo !!

— by Janos on Dec. 14, 2011, 11:26 p.m.


!! (bang bang!) is replaced with the previous command.

You can read more about it and other history expansion commands in man bash in the Event Designators section.


Put an ssh session in the background

 $ ~^z

— by Janos on Dec. 9, 2011, 7:44 p.m.


  • Normally, ^z (read: ctrl-z) pauses the execution of the current foreground task. That doesn't work in an ssh session, because it is intercepted by the remote shell. ~^z is a special escape character for this case, to pause the ssh session and drop you back to the local shell.
  • For all escape characters see ~?
  • The ~ escape character must always follow a newline to be interpreted as special.
  • See man ssh for more details, search for ESCAPE CHARACTERS


Rotate a movie file with mencoder

 $ mencoder video.avi -o rotated-right.avi -oac copy -ovc lavc -vf rotate=1

— by Janos on Dec. 2, 2011, 10:30 p.m.


mencoder is part of mplayer.

Other possible values of the rotate parameter:

  • 0: Rotate by 90 degrees clockwise and flip (default).
  • 1: Rotate by 90 degrees clockwise.
  • 2: Rotate by 90 degrees counterclockwise.
  • 3: Rotate by 90 degrees counterclockwise and flip.