We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

1

Show files containing "foo" and "bar" and "baz"

 $ grep -l 'baz' $(grep -l 'bar' $(grep -lr 'foo' *) )

— by Anon5eqErEbE on March 16, 2012, 5:37 a.m.

Explanation

Most people familiar with extended regular expressions know you can use the pipe symbol | to represent "or", so to see files containing any of "foo", "bar", or "baz" you could run:

grep -Elr 'foo|bar|baz' *

There is no corresponding symbol representing "and", but you can achieve the same effect by nesting invocations to grep. grep -lr 'foo' * returns a list of filenames in or below the current directory containing "foo". Via the $( ... ) syntax, this list is then operated on by grep -l 'bar', returning a list of filenames containing both 'foo' and 'bar', which finally is operated on by grep -l "baz". The end result is a list of filenames containing all three terms.

Limitations

This one-liner results in scanning files multiple times. You will want to put the term you expect to match the fewest number of times farthest to the right (that is, in the same position as "foo") and the one you expect to match most frequently farthest to the left (the same position as "baz"). This way, you will weed out the largest number of files sooner, making the one-liner complete more quickly.
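As a quick sketch of how the nesting behaves, the following builds a throwaway directory with made-up file names and runs the same nested grep; only the file containing all three terms survives:

```shell
# Sandbox demo of the nested-grep "and" (all file names are made up).
dir=$(mktemp -d)
cd "$dir"
printf 'foo\nbar\nbaz\n' > all.txt   # contains all three terms
printf 'foo\nbar\n'      > two.txt   # missing "baz"
printf 'foo\n'           > one.txt   # missing "bar" and "baz"

# The innermost grep selects files with "foo"; each outer grep narrows the list.
grep -l 'baz' $(grep -l 'bar' $(grep -lr 'foo' *))
# prints: all.txt
```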

1

Find in files, recursively

 $ grep -rn 'nameserver' /etc 2>/dev/null

— by atpessoa on Feb. 19, 2012, 8:24 a.m.

Explanation

  • -r searches recursively;
  • -n prints line numbers;
  • -H (print the filename) is not needed; it is the default when more than one file is searched;

Limitations

  • add -i for a case-insensitive search;

1

Calculate the total disk space used by a list of files or directories

 $ du -cshx ./a ./b

— by openiduser14 on Feb. 15, 2012, 10:43 p.m.

Explanation

  • -s, --summarize; display only a total for each argument
  • -c, --total; produce a grand total
  • -x, --one-file-system; skip directories on different file systems
  • -h, --human-readable; print sizes in human readable format (e.g., 1K 234M 2G)
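A minimal sketch (directory and file names are made up): du prints one summarized line per argument, followed by a final "total" line.

```shell
# Build two small throwaway directories to sum up.
mkdir -p demo/a demo/b
head -c 1024 /dev/zero > demo/a/file1   # 1 KB of zeroes
head -c 2048 /dev/zero > demo/b/file2   # 2 KB of zeroes

# One human-readable summary line per argument, plus a grand total.
du -cshx demo/a demo/b
```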

1

Create a compressed tar file that rsync will transfer efficiently

 $ GZIP='--rsyncable' tar cvzf bobsbackup.tar.gz /home/bob

— by Anon6y5E4Use on Feb. 15, 2012, 12:24 p.m.

Explanation

rsync works by comparing files on the local and remote machine and only sending those parts of the file that have changed. With the way compression normally works, everything after a modification changes in the compressed output, meaning lots of data ends up going over the network when you try to rsync compressed files.

The --rsyncable option to gzip changes the compression scheme so that modifications to the input file only affect the part of the file where they're located. This allows rsync to work its magic.

In this one-liner, the z option to tar calls gzip, which recognizes and uses any options specified in the GZIP environment variable.

Limitations

Using the --rsyncable option results in a slightly larger compressed file.

Not all versions of gzip include this feature - use the --help option to see if it's available on your system.

1

Cut select pages from a pdf file and create a new file from those pages.

 $ ps2pdf -dFirstPage=3 -dLastPage=10 input.pdf output.pdf

— by Anon6y5E4Use on Feb. 15, 2012, 11:08 a.m.

Explanation

ps2pdf is a script that comes with Ghostscript - despite the name, it can accept PDF files as input, not just PostScript files.

Limitations

Only a single contiguous range of pages can be specified.

1

Print the lines of file2 that are missing in file1

 $ comm -23 file2 file1

— by Anon9ge6A4uD on Feb. 13, 2012, 8:26 a.m.

Explanation

This can also be done with the POSIX-standard comm utility.

  • -2 suppresses lines that appear only in the second argument (file1)
  • -3 suppresses lines appearing in both files

Limitations

Assumes that file1 and file2 are already sorted. If they are not, you can use process substitution to do so:

comm -23 <(sort file2) <(sort file1)

Process substitution is a bash-specific feature (also available in zsh but with a different syntax).
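A small sketch with made-up file contents:

```shell
# Both files are created pre-sorted here, so no process substitution is needed.
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n'  > file2

# Lines of file2 that are missing from file1:
comm -23 file2 file1
# prints: date
```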

1

Uses 'at' to run an arbitrary command at a specified time.

 $ echo 'play alarmclock.wav 2>/dev/null' | at 07:30 tomorrow

— by Anon5MAQumYj on Feb. 4, 2012, 11:03 a.m.

Explanation

at 07:30 tomorrow schedules a job for 7:30 AM the next day, running whatever command or script is fed to it as standard input. The format for specifying time and date is rather flexible. http://tinyurl.com/ibmdwat

echo 'play alarmclock.wav 2>/dev/null' | feeds the play alarmclock.wav command to at, while 2>/dev/null causes the text output of play to be thrown away (we are only interested in the alarm sound).

1

Calculate an h index from an EndNote export

 $ MAX=$(NUM=1;cat author.xml |perl -p -e 's/(Times Cited)/\n$1/g'|grep "Times Cited" |perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g'|sort -nr | while read LINE; do if [ $LINE -ge $NUM ]; then echo "$NUM"; fi; NUM=$[$NUM+1]; done;); echo "$MAX"|tail -1

— by openiduser14 on Feb. 4, 2012, 1:06 a.m.

Explanation

EndNote?! I know, but sometimes we have Windows users as friends.

1

Cut select pages from a pdf file and create a new file from those pages.

 $  pdftk input.pdf cat 2-4 7 9-10 output output.pdf

— by mmaki on Feb. 3, 2012, 6:50 a.m.

Explanation

pdftk is the PDF Toolkit

input.pdf is the input file.

cat 2-4 7 9-10 concatenate (combine) pages 2,3,4,7,9,10 of input.pdf.

output output.pdf the resulting pdf file containing the above pages.

1

Re-compress a gzip (.gz) file to a bzip2 (.bz2) file

 $ time gzip -cd file1.tar.gz 2>~/logfile.txt | pv -t -r -b -W -i 5 -B 8M | bzip2 > file1.tar.bz2 2>>~/logfile.txt

— by DAVEB on Feb. 1, 2012, 6:02 p.m.

Explanation

Requires pv (pipe viewer) if you want to monitor throughput; otherwise you can leave out the pv stage of the pipeline.

Transparently decompresses an arbitrary .gz file (does not have to be a tar) and re-compresses it to bzip2, which has better compression and error recovery. Echoes error messages to a file named logfile.txt in your home directory.

NOTE: The original .gz file will NOT be deleted. If you want to save space, you will have to delete it manually.

1

Test your hard drive speed

 $ time (dd if=/dev/zero of=zerofile bs=1M count=500;sync);rm zerofile

— by DAVEB on Feb. 1, 2012, 5:35 p.m.

Explanation

Creates a 500 MB file of zeroes and times how long it takes to finish writing the entire thing to disk (the sync).

time measures the entire dd + sync operation; the temporary file is then removed.

Limitations

Works with Bash; not tested in other environments

1

Recursively remove all empty sub-directories from a directory tree

 $ find . -depth  -type d  -empty -exec rmdir {} \;

— by openiduser16 on Jan. 31, 2012, 11:15 p.m.

Explanation

Recursively remove all empty sub-directories from a directory tree using just find. No need for tac (-depth takes care of the ordering), and no need for xargs, since the directory contents change on each call to rmdir. Nor do we rely on rmdir deleting just empty dirs; the -empty test takes care of that.
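A sketch on a toy tree (names made up) shows why -depth matters: the nested empty directory is removed first, which leaves its parent empty in time for its own visit.

```shell
# Toy tree: two nested empty dirs plus one non-empty dir.
mkdir -p tree/empty1/empty2 tree/full
touch tree/full/keep.txt

# -depth visits children before parents; -empty is evaluated at visit time,
# so empty2 goes first and empty1 is already empty when find reaches it.
find tree -depth -type d -empty -exec rmdir {} \;

ls tree
# prints: full
```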

Limitations

Will make one call to rmdir per directory rather than batching arguments with xargs, which is normally less efficient. However, since -empty ensures that only empty directories are passed to rmdir, the number of executions could well be small in most cases (searching / for example).

1

Group count sort a log file

 $ A=$(FILE=/var/log/myfile.log; cat $FILE | perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' | sort -u | while read LINE; do grep "$LINE" $FILE | wc -l | perl -p -e 's/[^0-9]+//g'; echo -e "\t$LINE"; done;);echo "$A"|sort -nr

— by openiduser14 on Jan. 31, 2012, 6:49 p.m.

Explanation

  • SQL: SELECT COUNT(x), x FROM y GROUP BY x ORDER BY count DESC;
  • BASH: a temp var for the last sort: A=$(
  • the file you want: FILE=/var/log/myfile.log
  • dump the file to a stream: cat $FILE |
  • cut out the bits you want to count: perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' |
  • get a unique list: sort -u |
  • for each line/value in the stream do stuff: while read LINE; do
  • dump all lines matching the current value to an inner stream: grep "$LINE" $FILE |
  • count them: wc -l |
  • clean up the output of wc and drop the value on stdout: perl -p -e 's/[^0-9]+//g';
  • drop the current value to stdout: echo -e "\t$LINE";
  • finish per value operations on the outer stream: done;
  • finish output to the temp var: );
  • dump the temp var to a pipe: echo "$A" |
  • sort the list numerically in reverse: sort -nr
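For comparison, once the field of interest has been extracted, the classic group/count/sort idiom is sort | uniq -c | sort -nr, which avoids re-scanning the file once per value (the sample input below is made up):

```shell
# Count occurrences of each value and list the most frequent first.
printf 'ERROR\nWARN\nERROR\nINFO\nERROR\nWARN\n' |
  sort | uniq -c | sort -nr
#   3 ERROR
#   2 WARN
#   1 INFO
```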

1

Use ghostscript to shrink PDF files

 $ gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

— by openiduser10 on Jan. 31, 2012, 10:43 a.m.

Explanation

Replace input.pdf and output.pdf with the original PDF name and the new compressed version's file name respectively. The key to this is the PDFSETTINGS option which can be tuned for various levels of compression. For scanned text documents, I find the ebook setting works well enough for most purposes but you can experiment with the options below.

  • -dPDFSETTINGS=/screen (screen-view-only quality, 72 dpi images)
  • -dPDFSETTINGS=/ebook (low quality, 150 dpi images)
  • -dPDFSETTINGS=/printer (high quality, 300 dpi images)
  • -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi imgs)
  • -dPDFSETTINGS=/default (almost identical to /screen)

1

How to find all hard links to a file

 $ find /home -xdev -samefile file1

— by openiduser7 on Jan. 30, 2012, 8:56 p.m.

Explanation

Note: replace /home with the location you want to search

Source: http://linuxcommando.blogspot.com/2008/09/how-to-find-and-delete-all-hard-links.html
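A self-contained sketch (paths are made up): -samefile matches by inode, so every name of the same file is found.

```shell
# Create a file and one hard link to it in a throwaway directory.
tmp=$(mktemp -d)
touch "$tmp/original"
ln "$tmp/original" "$tmp/hardlink"

# -xdev keeps find on one filesystem; hard links cannot cross filesystems anyway.
find "$tmp" -xdev -samefile "$tmp/original"
# prints both paths
```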

1

Concatenate two or more movie files into one using mencoder

 $ mencoder cd1.avi cd2.avi -o movie.avi -ovc copy -oac copy

— by Janos on Dec. 24, 2011, 3:51 p.m.

Explanation

  • You can specify as many files as you want on the command line to process them in sequence.
  • -ovc copy simply means to copy the video exactly
  • -oac copy simply means to copy the audio exactly
  • -o movie.avi is the output file, with all the source files concatenated

Limitations

  • mencoder is usually not a standard package
  • mencoder may or may not come in the same package as mplayer, depending on your distribution
  • mencoder has binary packages for Linux, Mac and Windows

See the MPlayer homepage for more info: http://www.mplayerhq.hu/

1

Calculate the average execution time (of short running scripts) with awk

 $ for i in {1..10}; do time some_script.sh; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'

— by Janos on Dec. 21, 2011, 8:50 a.m.

Explanation

  • The for loop runs some_script.sh 10 times, measuring its execution time with time
  • The stderr of the for loop is redirected to stdout; this is to capture the output of time so we can grep it
  • grep ^real is to get only the lines starting with "real" in the output of time
  • sed is to delete the beginning of the line up to minutes part (in the output of time)
  • For each line, awk adds to the sum, so that in the end it can output the average, which is the total sum, divided by the number of input records (= NR)
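The parsing stages can be sketched on canned time-style output (the timings below are made up):

```shell
# Three fake "real" lines in the format bash's time builtin prints.
printf 'real\t0m1.20s\nreal\t0m1.40s\nreal\t0m1.60s\n' |
  grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'
# prints the average: 1.4
```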

Limitations

The snippet assumes that the running time of some_script.sh is less than 1 minute, otherwise it won't work at all. Depending on your system, the time builtin might work differently. Another alternative is to use the time command /usr/bin/time instead of the bash builtin.

1

Rotate a movie file with mencoder

 $ mencoder video.avi -o rotated-right.avi -oac copy -ovc lavc -vf rotate=1

— by Janos on Dec. 2, 2011, 10:30 p.m.

Explanation

mencoder is part of mplayer.

Other possible values of the rotate parameter:

  • 0: Rotate by 90 degrees clockwise and flip (default).
  • 1: Rotate by 90 degrees clockwise.
  • 2: Rotate by 90 degrees counterclockwise.
  • 3: Rotate by 90 degrees counterclockwise and flip.

1

Rename all files in the current directory by capitalizing the first letter of every word in the filenames

 $ ls | perl -ne 'chomp; $f=$_; tr/A-Z/a-z/; s/(?<![.'"'"'])\b\w/\u$&/g; print qq{mv "$f" "$_"\n}'

— by Janos on Nov. 1, 2011, 12:51 p.m.

Explanation

  • When you pipe something to perl -ne, each input line is substituted into the $_ variable. The chomp, tr///, s/// perl functions in the above command all operate on the $_ variable by default.
  • The tr/A-Z/a-z/ will convert all letters to lowercase.
  • The regular expression pattern (?<![.'])\b\w matches any word character that follows a non-word character except a dot or a single quote.
  • The messy-looking '"'"' in the middle of the regex pattern is not a typo, but necessary for inserting a single quote into the pattern. (The first single quote closes the single quote that started the perl command, followed by a single quote enclosed within double quotes, followed by another single quote to continue the perl command.) We could have used double quotes to enclose the perl command, but then we would have to escape all the dollar signs which would make everything less readable.
  • In the replacement string $& is the letter that was matched, and by putting \u in front it will be converted to uppercase.
  • qq{} in perl works like double quotes, and can make things easier to read, like in this case above when we want to include double quotes within the string to be quoted.
  • After the conversions we print a correctly escaped mv command. Pipe this to bash to really execute the rename with | sh.

Limitations

The above command will not work for files with double quotes in the name, and possibly other corner cases.

1

Remove spaces recursively from all subdirectories of a directory

 $ find /path/to/dir -type d | tac | while read LINE; do target=$(dirname "$LINE")/$(basename "$LINE" | tr -d ' '); echo mv "$LINE" "$target"; done

— by Janos on Sept. 20, 2011, 4:52 p.m.

Explanation

  • find path_to_dir -type d finds all the subdirectories
  • tac reverses the order. This is important to make "leaf" directories come first!
  • target=... stuff constructs the new name, removing spaces from the leaf component and keeping everything before that the same
  • echo mv ... for safety you should run with "echo" first, if the output looks good then remove the "echo" to really perform the rename
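A sketch on a toy tree (names made up), with the echo removed and a guard added so directories whose names contain no space are left alone:

```shell
# Toy tree with spaces in the directory names.
mkdir -p 'sandbox/top dir/sub dir'

find sandbox -type d | tac | while read LINE; do
  target=$(dirname "$LINE")/$(basename "$LINE" | tr -d ' ')
  # Guard: only rename when the name actually contains a space.
  case $LINE in *' '*) mv "$LINE" "$target" ;; esac
done

find sandbox -type d
# prints sandbox, sandbox/topdir, sandbox/topdir/subdir (one per line)
```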

Limitations

On BSD and some other non-GNU systems there is no tac; there you can use tail -r instead.

1

Replace a regexp pattern in many files at once

 $ vi +'bufdo %s/pattern/replacement/g | update' +q $(grep -rl pattern /path/to/dir)

— by Janos on Sept. 15, 2011, 11:50 p.m.

Explanation

  • The inner grep will search recursively in specified directory and print the filenames that contain the pattern.
  • All files will be opened in vi, one buffer per file.
  • The arguments starting with + will be executed as vi commands:
    • bufdo %s/pattern/replacement/g | update = perform the pattern substitution in all files and save the changes
    • q = exit vi

Limitations

The :bufdo command might not be there in old versions of vim.

1

Rename all files in a directory to lowercase names

 $ paste <(ls) <(ls | tr A-Z a-z) | while read OLD NEW; do echo mv -v $OLD $NEW; done

— by Janos on Aug. 5, 2011, 8:57 p.m.

Explanation

  • <(cmd) is the filename of a named pipe (FIFO), where the named pipe is filled by the output of cmd
  • paste puts together the named pipes to form two columns: first column with the original filenames, second column with the lowercased filenames
  • ... | tr A-Z a-z transforms stdin by replacing any character in the first set with the corresponding character in the second set
  • while read old new; do ...; done for each line it reads the first column into $old and the second column into $new
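A dry-run sketch with a generated list instead of ls (the filenames are made up):

```shell
# Two columns: original names paired with their lowercased counterparts.
paste <(printf 'README.TXT\nChangeLog\n') <(printf 'README.TXT\nChangeLog\n' | tr A-Z a-z) |
  while read OLD NEW; do echo mv -v "$OLD" "$NEW"; done
# prints:
# mv -v README.TXT readme.txt
# mv -v ChangeLog changelog
```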

Limitations

  • Won't work if there are spaces in a filename.

-1

Get a free shell account on a community server

 $ sh <(curl hashbang.sh | gpg)

— by lrvick on March 15, 2015, 9:49 a.m.

Explanation

This uses bash process substitution to curl the website 'hashbang.sh' and execute the shell script embedded in the page.

This is obviously not the most secure way to run something like this, and we will scold you if you try.

The smarter way would be:

Download locally over SSL:

curl https://hashbang.sh > hashbang.sh

Verify integrity with GPG (if available):

gpg --recv-keys 0xD2C4C74D8FAA96F5
gpg --verify hashbang.sh

Inspect source code:

less hashbang.sh

Run:

chmod +x hashbang.sh
./hashbang.sh

-1

Run a local shell script on a remote server without copying it there

 $ ssh user@server bash < /path/to/local/script.sh

— by Janos on June 21, 2012, 12:06 a.m.

Explanation

Yes this is almost trivial: a simple input redirection, from a local shell script to be executed by bash on the remote server.

The important point is this: if you have a complex and very long chain of commands to run on a remote server, it is better to put the commands in a shell script and break the long one-liner into multiple lines for readability and easier debugging.

Replace bash accordingly depending on the language of the script, for example for python:

ssh user@server python < /path/to/local/script.py

0

Random Git Commit

 $ git commit -m "$(w3m whatthecommit.com | head -n 1)"

— by Jab2870 on Jan. 5, 2018, 4:55 p.m.

Explanation

This will commit a message pulled from What the Commit.

-m allows you to provide the commit message without entering your editor

w3m is a terminal-based web browser. Here we basically use it to strip out all of the HTML tags.

head -n 1 will grab only the first line

Limitations

This requires you to have w3m installed