We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

Cut select pages from a pdf file and create a new file from those pages.

 $ ps2pdf -dFirstPage=3 -dLastPage=10 input.pdf output.pdf

— by Anon6y5E4Use on Feb. 15, 2012, 11:08 a.m.

Explanation

ps2pdf is a script that comes with Ghostscript. Despite the name, it accepts PDF files as input, not just PostScript files.

Limitations

Only a single contiguous range of pages can be specified.

View a file with line numbers

 $ cat -n /path/to/file | less

— by openiduser28 on Feb. 13, 2012, 5:14 p.m.

Explanation

cat -n will number all lines of a file.

Limitations

It will add some whitespace as padding before each line number.
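
If you only need the numbers while paging, less can also generate them itself, with no cat needed (an alternative, not part of the original one-liner):

less -N /path/to/file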

Print the lines of file2 that are missing in file1

 $ comm -23 file2 file1

— by Anon9ge6A4uD on Feb. 13, 2012, 8:26 a.m.

Explanation

The POSIX-standard comm utility can do this as well.

  • -2 suppresses lines from the second argument (file1)
  • -3 suppresses lines appearing in both files

Limitations

Assumes that file1 and file2 are already sorted. If they are not, you can use process substitution to do so:

comm -23 <(sort file2) <(sort file1)

Process substitution is a bash-specific feature (also available in zsh but with a different syntax).

Uses 'at' to run an arbitrary command at a specified time.

 $ echo 'play alarmclock.wav 2>/dev/null' | at 07:30 tomorrow

— by Anon5MAQumYj on Feb. 4, 2012, 11:03 a.m.

Explanation

at 07:30 tomorrow schedules a job for 7:30 AM the next day, running whatever command or script is fed to it as standard input. The format for specifying time and date is rather flexible; see http://tinyurl.com/ibmdwat for details.

echo 'play alarmclock.wav 2>/dev/null' | feeds the play alarmclock.wav command to at, while 2>/dev/null causes the text output of play to be thrown away (we are only interested in the alarm sound).
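
If you schedule jobs like this, the companion tools atq and atrm list and cancel pending jobs. For example (42 here stands for whichever job number atq reports):

atq
atrm 42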

Calculate an h index from an EndNote export

 $ MAX=$(NUM=1;cat author.xml |perl -p -e 's/(Times Cited)/\n$1/g'|grep "Times Cited" |perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g'|sort -nr | while read LINE; do if [ $LINE -ge $NUM ]; then echo "$NUM"; fi; NUM=$[$NUM+1]; done;); echo "$MAX"|tail -1

— by openiduser14 on Feb. 4, 2012, 1:06 a.m.

Explanation

EndNote?! I know, but sometimes we have Windows users as friends.
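
For reference, the counting loop can be collapsed into a single awk pass: with the citation counts sorted in descending order, the h index is the largest rank whose value is still at least the rank. A sketch reusing the extraction steps from the one-liner above:

perl -p -e 's/(Times Cited)/\n$1/g' author.xml | grep 'Times Cited' | perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g' | sort -nr | awk '$1 >= NR { h = NR } END { print h + 0 }'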

Cut select pages from a pdf file and create a new file from those pages.

 $ pdftk input.pdf cat 2-4 7 9-10 output output.pdf

— by mmaki on Feb. 3, 2012, 6:50 a.m.

Explanation

pdftk is the PDF Toolkit.

input.pdf is the input file.

cat 2-4 7 9-10 concatenates (combines) pages 2, 3, 4, 7, 9, 10 of input.pdf.

output output.pdf names the resulting PDF file, which contains the pages above.

Re-compress a gzip (.gz) file to a bzip2 (.bz2) file

 $ time gzip -cd file1.tar.gz 2>~/logfile.txt | pv -t -r -b -W -i 5 -B 8M | bzip2 > file1.tar.bz2 2>>~/logfile.txt

— by DAVEB on Feb. 1, 2012, 6:02 p.m.

Explanation

Requires pv (pipe viewer) if you want to monitor throughput; otherwise you can leave out the pv stage of the pipeline.

Transparently decompresses an arbitrary .gz file (it does not have to be a tar) and re-compresses it to bzip2, which has better compression and error recovery. Error messages are written to a file named logfile.txt in your home directory.

NOTE: The original .gz file will NOT be deleted. If you want to save space, you will have to delete it manually.
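
If you do not need the progress display or the log file, the conversion reduces to a plain pipe:

gzip -cd file1.tar.gz | bzip2 > file1.tar.bz2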

Test your hard drive speed

 $ time (dd if=/dev/zero of=zerofile bs=1M count=500;sync);rm zerofile

— by DAVEB on Feb. 1, 2012, 5:35 p.m.

Explanation

Creates a 500MB file of zeroes and times how long it takes to finish writing the entire thing to disk; the sync ensures all buffered data is flushed before the timing stops.

time measures the whole dd + sync operation, and then the temporary file is removed.

Limitations

Works with Bash; not tested in other environments
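
Because the writes go through the page cache, the result partly reflects memory speed. With GNU dd on Linux you can bypass the cache for a rawer number; a variant, not part of the original one-liner:

time dd if=/dev/zero of=zerofile bs=1M count=500 oflag=direct; rm zerofile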

Recursively remove all empty sub-directories from a directory tree

 $ find . -depth -type d -empty -exec rmdir {} \;

— by openiduser16 on Jan. 31, 2012, 11:15 p.m.

Explanation

Recursively remove all empty sub-directories from a directory tree using just find. No need for tac (-depth already processes each directory's contents before the directory itself), and no need for xargs, as the directory contents change on each call to rmdir. We're not reliant on the rmdir command deleting just empty dirs; -empty takes care of that.

Limitations

Will make many calls to rmdir without using xargs, which bunches commands into one argument string. That is normally useful, but here -empty could end up being more efficient anyway: only empty directories are passed to rmdir, so there may be fewer executions in most cases (when searching /, for example).
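
With GNU find the rmdir calls can be dropped entirely, because -delete implies depth-first processing; a GNU-specific sketch:

find . -type d -empty -delete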

Group count sort a log file

 $ A=$(FILE=/var/log/myfile.log; cat $FILE | perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' | sort -u | while read LINE; do grep "$LINE" $FILE | wc -l | perl -p -e 's/[^0-9]+//g'; echo -e "\t$LINE"; done;);echo "$A"|sort -nr

— by openiduser14 on Jan. 31, 2012, 6:49 p.m.

Explanation

  • SQL: SELECT COUNT(x), x FROM y GROUP BY x ORDER BY count DESC;
  • BASH: a temp var to hold the output for the final sort: A=$(
  • the file you want: FILE=/var/log/myfile.log
  • dump the file to a stream: cat $FILE |
  • cut out the bits you want to count: perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' |
  • get a unique list: sort -u |
  • for each line/value in the stream do stuff: while read LINE; do
  • dump all lines matching the current value to an inner stream: grep "$LINE" $FILE |
  • count them: wc -l |
  • clean up the output of wc and drop the value on stdout: perl -p -e 's/[^0-9]+//g';
  • drop the current value to stdout: echo -e "\t$LINE";
  • finish per value operations on the outer stream: done;
  • finish output to the temp var: );
  • dump the temp var to a pipe: echo "$A" |
  • sort the list numerically in reverse: sort -nr
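
The same grouping is usually written with the classic sort | uniq -c idiom, which avoids re-reading the file once per value; a sketch assuming the same perl extraction as above:

perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' /var/log/myfile.log | sort | uniq -c | sort -nr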

Use ghostscript to shrink PDF files

 $ gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

— by openiduser10 on Jan. 31, 2012, 10:43 a.m.

Explanation

Replace input.pdf and output.pdf with the original PDF name and the new compressed version's file name respectively. The key to this is the PDFSETTINGS option which can be tuned for various levels of compression. For scanned text documents, I find the ebook setting works well enough for most purposes but you can experiment with the options below.

  • -dPDFSETTINGS=/screen (screen-view-only quality, 72 dpi images)
  • -dPDFSETTINGS=/ebook (low quality, 150 dpi images)
  • -dPDFSETTINGS=/printer (high quality, 300 dpi images)
  • -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi images)
  • -dPDFSETTINGS=/default (almost identical to /screen)
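
To shrink every PDF in the current directory in one go, the command wraps naturally in a loop; a sketch, where the -small suffix is just an illustrative naming choice:

for f in *.pdf; do gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile="${f%.pdf}-small.pdf" "$f"; done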

How to find all hard links to a file

 $ find /home -xdev -samefile file1

— by openiduser7 on Jan. 30, 2012, 8:56 p.m.

Explanation

Note: replace /home with the location you want to search. The -xdev option keeps find on a single filesystem, which is all you need, because hard links cannot span filesystems.

Source: http://linuxcommando.blogspot.com/2008/09/how-to-find-and-delete-all-hard-links.html
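
A quick way to tell whether a file has extra hard links at all is its link count, shown here with GNU coreutils stat (ls -l displays the same count in its second column):

stat -c %h file1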

Find all the unique 4-letter words in a text

 $ cat ipsum.txt | perl -ne 'print map("$_\n", m/\w+/g);' | tr A-Z a-z | sort | uniq | awk 'length($1) == 4 {print}'

— by Janos on Jan. 29, 2012, 10:28 p.m.

Explanation

  • The perl regex pattern m/\w+/g matches runs of word characters, resulting in a list of all the words in the source string
  • map("$_\n", @list) transforms a list, appending a new-line at the end of each element
  • tr A-Z a-z transforms uppercase letters to lowercase
  • In awk, length($1) == 4 {print} means: for lines matching the filter condition "length of the first column is 4", execute the block of code, in this case simply print
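
A shorter pipeline with the same effect, using GNU grep's -o to emit one match per line (a variant, not the author's original):

grep -oE '\w+' ipsum.txt | tr A-Z a-z | sort -u | awk 'length($0) == 4'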

Concatenate two or more movie files into one using mencoder

 $ mencoder cd1.avi cd2.avi -o movie.avi -ovc copy -oac copy

— by Janos on Dec. 24, 2011, 3:51 p.m.

Explanation

  • You can specify as many files as you want on the command line to process them in sequence.
  • -ovc copy simply means to copy the video exactly
  • -oac copy simply means to copy the audio exactly
  • -o movie.avi is the output file, with all the source files concatenated

Limitations

  • mencoder is usually not a standard package
  • mencoder may be packaged together with mplayer, or separately, depending on your distribution
  • mencoder has binary packages for Linux, Mac and Windows

See the MPlayer homepage for more info: http://www.mplayerhq.hu/

Calculate the average execution time (of short running scripts) with awk

 $ for i in {1..10}; do time some_script.sh; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'

— by Janos on Dec. 21, 2011, 8:50 a.m.

Explanation

  • The for loop runs some_script.sh 10 times, measuring its execution time with time
  • The stderr of the for loop is redirected to stdout; this is to capture the output of time so we can grep it
  • grep ^real is to get only the lines starting with "real" in the output of time
  • sed deletes the beginning of the line up to and including the minutes part (in the output of time), leaving only the seconds
  • For each line, awk adds to the sum, so that in the end it can output the average, which is the total sum, divided by the number of input records (= NR)

Limitations

The snippet assumes that the running time of some_script.sh is less than 1 minute, otherwise it won't work at all. Depending on your system, the time builtin might work differently. An alternative is to use the external /usr/bin/time command instead of the bash builtin.
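
bash's time builtin can also be told to print only the elapsed seconds, which removes the need for grep and sed entirely; a sketch relying on the TIMEFORMAT shell variable:

TIMEFORMAT='%R'; for i in {1..10}; do time some_script.sh; done 2>&1 | awk '{sum += $1} END {print sum / NR}'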

Rotate a movie file with mencoder

 $ mencoder video.avi -o rotated-right.avi -oac copy -ovc lavc -vf rotate=1

— by Janos on Dec. 2, 2011, 10:30 p.m.

Explanation

mencoder is part of mplayer.

Other possible values of the rotate parameter:

  • 0: Rotate by 90 degrees clockwise and flip (default).
  • 1: Rotate by 90 degrees clockwise.
  • 2: Rotate by 90 degrees counterclockwise.
  • 3: Rotate by 90 degrees counterclockwise and flip.

View specific column of data from a large file with long lines

 $ cat /tmp/log.data | colrm 1 155 | colrm 60 300

— by versorge on Oct. 4, 2011, 9:55 p.m.

Explanation

  • cat: prints the file to standard output
  • colrm 1 155: removes columns 1 through 155 of each line
  • colrm 60 300: then removes columns 60 through 300 of what remains, so the output is original columns 156-214
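
The same slice can be taken in one step with cut; a sketch that keeps original columns 156-214, matching the two colrm calls (assuming lines do not extend past column 455, since colrm keeps any text after the removed range):

cut -c 156-214 /tmp/log.data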

Replace a regexp pattern in many files at once

 $ vi +'bufdo %s/pattern/replacement/g | update' +q $(grep -rl pattern /path/to/dir)

— by Janos on Sept. 15, 2011, 11:50 p.m.

Explanation

  • The inner grep will search recursively in the specified directory and print the names of the files that contain the pattern.
  • All files will be opened in vi, one buffer per file.
  • The arguments starting with + will be executed as vi commands:
    • bufdo %s/pattern/replacement/g | update = perform the pattern substitution in all files and save the changes
    • q = exit vi

Limitations

The :bufdo command might not be there in old versions of vim.
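
If no per-file review is needed, a non-interactive alternative is GNU sed's in-place editing (the -i flag is GNU-specific; a variant, not the original):

grep -rl pattern /path/to/dir | xargs sed -i 's/pattern/replacement/g'

For file names containing whitespace, use grep -rlZ together with xargs -0.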

Delete all untagged Docker images

 $ docker rmi $(docker images -f "dangling=true" -q)

— by stefanobaghino on April 27, 2018, 2:50 p.m.

Explanation

docker images outputs all images currently available. By specifying -f "dangling=true" we restrict the list to "dangling" images (i.e. untagged). The -q option enables quiet mode, which limits the output to the image hashes, and these are then fed directly into docker rmi, which removes the images with the corresponding hashes.
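
Recent Docker releases also ship a built-in shortcut for this cleanup; it asks for confirmation unless -f is passed:

docker image prune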

Get a free shell account on a community server

 $ sh <(curl hashbang.sh | gpg)

— by lrvick on March 15, 2015, 9:49 a.m.

Explanation

Bash process substitution that curls the website 'hashbang.sh' and executes the shell script embedded in the page.

This is obviously not the most secure way to run something like this, and we will scold you if you try.

The smarter way would be:

Download locally over SSL:

curl https://hashbang.sh >> hashbang.sh

Verify integrity with GPG (if available):

gpg --recv-keys 0xD2C4C74D8FAA96F5
gpg --verify hashbang.sh

Inspect source code:

less hashbang.sh

Run:

chmod +x hashbang.sh
./hashbang.sh

Run a local shell script on a remote server without copying it there

 $ ssh user@server bash < /path/to/local/script.sh

— by Janos on June 21, 2012, 12:06 a.m.

Explanation

Yes, this is almost trivial: a simple input redirection, from a local shell script to be executed by bash on the remote server.

The important point is that if you have a complex and very long chain of commands to run on a remote server, it is better to put the commands in a shell script and break the long one-liner into multiple lines for readability and easier debugging.

Replace bash accordingly depending on the language of the script, for example for python:

ssh user@server python < /path/to/local/script.py
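
If the script needs arguments, bash can read the script from stdin with -s and treat the rest of the command line as positional parameters; a sketch where arg1 and arg2 are placeholders:

ssh user@server bash -s -- arg1 arg2 < /path/to/local/script.sh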

Search man pages and present a PDF

 $ man -k . | awk '{ print $1 " " $2 }' | dmenu -i -p man | awk '{ print $2 " " $1 }' | tr -d '()' | xargs man -t | ps2pdf - - | zathura -

— by Jab2870 on Dec. 18, 2018, 11:31 a.m.

Explanation

This uses dmenu to search through your man pages and then produces a PDF for the one you selected.

  1. man -k . lists all man pages
  2. awk '{ print $1 " " $2 }' prints the first column, a space then the second column
    • This results in lines like this: curl (1)
  3. dmenu -i -p man takes a list from stdin and lets you choose one. It returns what you chose
    • You can swap dmenu for something like rofi if required
  4. awk '{ print $2 " " $1 }' puts the second column first
    • The output is now like (1) curl
  5. tr -d '()' removes the brackets
  6. xargs man -t puts the result at the end of the command man -t
    • This makes the command something like man -t 1 curl
    • the -t flag makes man use troff to format the page
  7. ps2pdf - - produces a PDF from the PostScript output by the previous command
  8. zathura - is a PDF reader that can read from stdin

Limitations

You will need a PDF viewer that can read from stdin.

You will need ps2pdf installed, which is part of Ghostscript.

You will need dmenu or a dmenu-compatible program installed.

Almost all systems will already have xargs, tr, troff and awk installed.

Random 6-digit number

 $ python -c 'import random; print(random.randint(0,1000000-1))'

— by johntellsall on Sept. 19, 2018, 10:42 p.m.

Explanation

Stack Overflow has a dozen different ways to generate random numbers, all of which fall apart after 3-4 digits. This solution requires Python to be installed, but it is simple and direct.

Limitations

Requires Python, it's not a pure-Bash solution
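
For comparison, a pure-bash sketch that combines two RANDOM draws (each spans only 0-32767) into a large enough range; note the final modulo introduces a tiny bias:

printf '%06d\n' $(( (RANDOM * 32768 + RANDOM) % 1000000 ))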

Very fast history search with Ripgrep

 $ rh() { rg "$1" ~/.bash_history; }

— by johntellsall on Sept. 18, 2018, 7 p.m.

Explanation

I search my history a lot. At 65k lines, this can take a while. Recently I've installed Ripgrep, which is much faster than my previous favorite Ack, and faster than good old Grep/Egrep.

After defining the above function, history searches are much faster!

Example: I forgot how to hit my service's endpoint. Instead of grepping through the Bash history file, I use my new Ripgrep-powered function:

rh 'curl.*health'

Limitations

This function only works if you have Ripgrep installed

It's not really needed unless you have "infinite history" turned on.

While loop to pretty print system load (1, 5 & 15 minutes)

 $ while :; do date; awk '{printf "1 minute load: %.2f\n", $1; printf "5 minute load: %.2f\n", $2; printf "15 minute load: %.2f\n", $3}' /proc/loadavg; sleep 3; done

— by Janos on Sept. 5, 2018, 8:41 p.m.

Explanation

while :; do ...; done is an infinite loop. You can interrupt it with Control-c.

The file /proc/loadavg contains a line like this:

0.01 0.04 0.07 1/917 25383

Where the first 3 columns represent the 1, 5, 15 minute loads, respectively.

In the infinite loop we print the current date, then the load values nicely formatted (ignore the other values), then sleep for 3 seconds, and start again.

Limitations

/proc/loadavg is only available in Linux.
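
On systems without /proc/loadavg, uptime reports the same three load averages at the end of its output, so a portable (if less prettily formatted) variant is:

while :; do uptime; sleep 3; done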