We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

1

Find in files, recursively

 $ grep -rn 'nameserver' /etc 2>/dev/null

— by atpessoa on Feb. 19, 2012, 8:24 a.m.

Explanation

  • -r search recursively;
  • -n print line numbers;
  • -H print the file name for each match; not needed, as it is the default when searching more than one file;

Limitations

  • add -i for a case-insensitive search, as in the variant below;
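For example, a case-insensitive variant of the same search:

grep -rni 'nameserver' /etc 2>/dev/null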

2

How to send an http POST to a website with a file input field

 $ curl -L -v -F "value=@myfile" "http://domain.tld/whatever.php"

— by openiduser14 on Feb. 15, 2012, 11:26 p.m.

Explanation

  • curl see "man curl" if you need more info, such as how to use cookies; you can also use wget
  • -L follow redirects
  • -v be verbose
  • -F an input field
  • value= the name of the input field
  • @myfile the file you want uploaded
  • "http://domain.tld/whatever.php" the url that will take the file

1

Calculate the total disk space used by a list of files or directories

 $ du -cshx ./a ./b

— by openiduser14 on Feb. 15, 2012, 10:43 p.m.

Explanation

  • -s, --summarize; display only a total for each argument
  • -c, --total; produce a grand total
  • -x, --one-file-system; skip directories on different file systems
  • -h, --human-readable; print sizes in human readable format (e.g., 1K 234M 2G)

1

Create a compressed tar file that rsync will transfer efficiently

 $ GZIP='--rsyncable' tar cvzf bobsbackup.tar.gz /home/bob

— by Anon6y5E4Use on Feb. 15, 2012, 12:24 p.m.

Explanation

rsync works by comparing files on the local and remote machine and only sending those parts of the file that have changed. With the way compression normally works, a small modification to the input changes everything after that point in the compressed output, so lots of data ends up going over the network when you try to rsync compressed files.

The --rsyncable option to gzip changes the compression scheme so that modifications to the input file only affect the part of the file where they're located. This allows rsync to work its magic.

In this one-liner, the z option to tar calls gzip, which recognizes and uses any options specified in the GZIP environment variable.
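For example, after the archive is updated, a follow-up transfer might look like this (the host name and destination path are hypothetical):

rsync -av bobsbackup.tar.gz user@backuphost:/backups/

On the second and later runs, only the changed parts of the archive should go over the wire.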

Limitations

Using the --rsyncable option results in a slightly larger compressed file.

Not all versions of gzip include this feature - use the --help option to see if it's available on your system.

1

Cut select pages from a pdf file and create a new file from those pages.

 $ ps2pdf -dFirstPage=3 -dLastPage=10 input.pdf output.pdf

— by Anon6y5E4Use on Feb. 15, 2012, 11:08 a.m.

Explanation

ps2pdf is a script that comes with Ghostscript - despite the name, it can accept PDF files as input, not just postscript files.

Limitations

Only a single contiguous range of pages can be specified.

0

Calculate the total disk space used by a list of files or directories

 $ du -c

— by openiduser30 on Feb. 14, 2012, 1:34 a.m.

Explanation

The -c option of du prints the grand total size of its arguments.

0

View a file with line numbers

 $ cat -n /path/to/file | less

— by openiduser28 on Feb. 13, 2012, 5:14 p.m.

Explanation

cat -n will number all lines of a file.

Limitations

It will add some white spaces as padding.
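If the padding bothers you, the pager can do the numbering itself; a minimal alternative, assuming your less supports -N:

less -N /path/to/file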

1

Print the lines of file2 that are missing in file1

 $ comm -23 file2 file1

— by Anon9ge6A4uD on Feb. 13, 2012, 8:26 a.m.

Explanation

The POSIX-standard comm utility can do this also.

  • -2 suppresses lines that appear only in the second argument (file1)
  • -3 suppresses lines that appear in both files

Limitations

Assumes that file1 and file2 are already sorted. If they are not, you can use process substitution to do so:

comm -23 <(sort file2) <(sort file1)

Process substitution is a bash-specific feature (also available in zsh but with a different syntax).

0

Print the lines of file2 that are missing in file1

 $ grep -vxFf file1 file2

— by Janos on Feb. 8, 2012, 2:42 p.m.

Explanation

  • -f is to specify a file with the list of patterns: file1
  • -F is to treat the patterns fixed strings, without using regular expressions
  • -x is to match exactly the whole line
  • -v is to select non-matching lines

The result is effectively the same as:

diff file1 file2 | grep '^>' | sed -e 's/..//'

Limitations

The flags of grep might work differently depending on the system, so you might prefer the diff-based alternative above, which should work everywhere. Nonetheless, the various flags of grep are worth knowing.

1

Uses 'at' to run an arbitrary command at a specified time.

 $ echo 'play alarmclock.wav 2>/dev/null' | at 07:30 tomorrow

— by Anon5MAQumYj on Feb. 4, 2012, 11:03 a.m.

Explanation

at 07:30 tomorrow schedules a job for 7:30 AM the next day, running whatever command or script is fed to it as standard input. The format for specifying time and date is rather flexible. http://tinyurl.com/ibmdwat

echo 'play alarmclock.wav 2>/dev/null' | feeds the play alarmclock.wav command to at, while 2>/dev/null causes the text output of play to be thrown away (we are only interested in the alarm sound).
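Pending jobs can be reviewed and cancelled with at's companion commands; a small sketch (the job number 4 is just an example taken from atq output):

atq      # list pending jobs with their job numbers
atrm 4   # remove job number 4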

1

Calculate an h index from an EndNote export

 $ MAX=$(NUM=1;cat author.xml |perl -p -e 's/(Times Cited)/\n$1/g'|grep "Times Cited" |perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g'|sort -nr | while read LINE; do if [ $LINE -ge $NUM ]; then echo "$NUM"; fi; NUM=$[$NUM+1]; done;); echo "$MAX"|tail -1

— by openiduser14 on Feb. 4, 2012, 1:06 a.m.

Explanation

EndNote?! I know, but sometimes we have Windows users as friends.
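The same h-index idea can be written more compactly; a rough sketch, assuming a hypothetical file citations.txt with one "Times Cited" count per line:

sort -nr citations.txt | awk '$1 >= NR { h = NR } END { print h+0 }'

Sorting the counts in descending order and taking the largest rank whose count is at least the rank is exactly the h-index definition.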

1

Cut select pages from a pdf file and create a new file from those pages.

 $  pdftk input.pdf cat 2-4 7 9-10 output output.pdf

— by mmaki on Feb. 3, 2012, 6:50 a.m.

Explanation

pdftk is the PDF Toolkit

input.pdf is the input file.

cat 2-4 7 9-10 concatenate (combine) pages 2,3,4,7,9,10 of input.pdf.

output output.pdf the resulting pdf file containing the above pages.

0

Find in files, recursively

 $ find /etc -type f -print0 2>/dev/null | xargs -0 grep --color=AUTO -Hn 'nameserver' 2>/dev/null

— by openiduser21 on Feb. 2, 2012, 7:32 p.m.

Explanation

The example above finds and displays every file in /etc containing the string nameserver, printing each matching line with its line number. Sample output:

/etc/ppp/ip-up.d/0dns-up:9:# Rev. Dec 22 1999 to put dynamic nameservers last.
/etc/ppp/ip-up.d/0dns-up:23:# nameservers given by the administrator. Those for which 'Dynamic' was chosen
/etc/ppp/ip-up.d/0dns-up:24:# are empty. 0dns-up fills in the nameservers when pppd gets them from the
/etc/ppp/ip-up.d/0dns-up:26:# 'search' or 'domain' directives or additional nameservers. Read the
/etc/ppp/ip-up.d/0dns-up:77:# nameserver lines to the temp file.

1

Re-compress a gzip (.gz) file to a bzip2 (.bz2) file

 $ time gzip -cd file1.tar.gz 2>~/logfile.txt | pv -t -r -b -W -i 5 -B 8M | bzip2 > file1.tar.bz2 2>>~/logfile.txt

— by DAVEB on Feb. 1, 2012, 6:02 p.m.

Explanation

Requires pv (pipe viewer) if you want to monitor throughput; otherwise you can leave out the pv pipe.

Transparently decompresses an arbitrary .gz file (does not have to be a tar) and re-compresses it to bzip2, which has better compression and error recovery. Echoes error messages to a file named logfile.txt in your home directory.

NOTE: The original .gz file will NOT be deleted. If you want to save space, you will have to delete it manually.
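If pv is not installed, the same re-compression works without progress monitoring:

gzip -cd file1.tar.gz | bzip2 > file1.tar.bz2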

1

Test your hard drive speed

 $ time (dd if=/dev/zero of=zerofile bs=1M count=500;sync);rm zerofile

— by DAVEB on Feb. 1, 2012, 5:35 p.m.

Explanation

Creates a 500MB blank file and times how long it takes to finish writing the entire thing to disk (sync)

time the entire dd + sync operation, and then remove the temporary file
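A similar measurement can be made by asking dd itself to flush and report; a sketch assuming GNU dd (conv=fdatasync is a GNU extension):

dd if=/dev/zero of=zerofile bs=1M count=500 conv=fdatasync; rm zerofile

GNU dd prints its own elapsed time and throughput when it finishes.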

Limitations

Works with Bash; not tested in other environments

1

Recursively remove all empty sub-directories from a directory tree

 $ find . -depth  -type d  -empty -exec rmdir {} \;

— by openiduser16 on Jan. 31, 2012, 11:15 p.m.

Explanation

Recursively remove all empty sub-directories from a directory tree using just find. No need for tac (-depth handles the bottom-up ordering), and no need for xargs, as the directory contents change on each call to rmdir. We're not relying on the rmdir command deleting just empty dirs; -empty takes care of that.

Limitations

Makes a separate call to rmdir for each directory instead of using xargs, which bunches arguments into one command line and is normally more efficient. However, since -empty ensures that only empty directories are passed to rmdir, the number of executions may well be small anyway (when searching /, for example).
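With GNU find, the exec can be dropped entirely; a minimal variant, assuming GNU find (-delete is not POSIX):

find . -depth -type d -empty -delete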

1

Group count sort a log file

 $ A=$(FILE=/var/log/myfile.log; cat $FILE | perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' | sort -u | while read LINE; do grep "$LINE" $FILE | wc -l | perl -p -e 's/[^0-9]+//g'; echo -e "\t$LINE"; done;);echo "$A"|sort -nr

— by openiduser14 on Jan. 31, 2012, 6:49 p.m.

Explanation

  • SQL: SELECT COUNT(x), x FROM y GROUP BY x ORDER BY count DESC;
  • BASH: a temp var for the last sort: A=$(
  • the file you want: FILE=/var/log/myfile.log
  • dump the file to a stream: cat $FILE |
  • cut out the bits you want to count: perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' |
  • get a unique list: sort -u |
  • for each line/value in the stream do stuff: while read LINE; do
  • dump all lines matching the current value to an inner stream: grep "$LINE" $FILE |
  • count them: wc -l |
  • clean up the output of wc and drop the value on stdout: perl -p -e 's/[^0-9]+//g';
  • drop the current value to stdout: echo -e "\t$LINE";
  • finish per value operations on the outer stream: done;
  • finish output to the temp var: );
  • dump the temp var to a pipe: echo "$A" |
  • sort the list numerically in reverse: sort -nr
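A shorter way to express the same group-count-sort, sketched under the same assumptions about the log format and file name:

perl -ne 'print "$1\n" if /.*,([A-Z]+)[:+]/' /var/log/myfile.log | sort | uniq -c | sort -nr

Here uniq -c does the counting in one pass instead of re-grepping the file for every distinct value.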

1

Use ghostscript to shrink PDF files

 $ gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

— by openiduser10 on Jan. 31, 2012, 10:43 a.m.

Explanation

Replace input.pdf and output.pdf with the original PDF name and the new compressed version's file name respectively. The key to this is the PDFSETTINGS option which can be tuned for various levels of compression. For scanned text documents, I find the ebook setting works well enough for most purposes but you can experiment with the options below.

  • -dPDFSETTINGS=/screen (screen-view-only quality, 72 dpi images)
  • -dPDFSETTINGS=/ebook (low quality, 150 dpi images)
  • -dPDFSETTINGS=/printer (high quality, 300 dpi images)
  • -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi images)
  • -dPDFSETTINGS=/default (almost identical to /screen)

1

How to find all hard links to a file

 $ find /home -xdev -samefile file1

— by openiduser7 on Jan. 30, 2012, 8:56 p.m.

Explanation

Note: replace /home with the location you want to search

Source: http://linuxcommando.blogspot.com/2008/09/how-to-find-and-delete-all-hard-links.html
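On systems whose find lacks -samefile, the inode number can be used instead; a sketch (the inode number 123456 is hypothetical, read the real one from ls -i first):

ls -i file1                     # note the inode number, e.g. 123456
find /home -xdev -inum 123456   # list every hard link sharing that inode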

0

Find all the unique 4-letter words in a text

 $ cat ipsum.txt | perl -ne 'print map("$_\n", m/\w+/g);' | tr A-Z a-z | sort | uniq | awk 'length($1) == 4 {print}'

— by Janos on Jan. 29, 2012, 10:28 p.m.

Explanation

  • The perl regex pattern m/\w+/g matches consecutive word characters, resulting in a list of all the words in the source string
  • map("$_\n", @list) transforms a list, appending a new-line at the end of each element
  • tr A-Z a-z transforms uppercase letters to lowercase
  • In awk, length($1) == 4 {print} means: for lines matching the filter condition "length of the first column is 4", execute the block of code, in this case simply print
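A similar (though not byte-for-byte identical) result can be had without perl; a sketch assuming GNU grep, whose -o and \b are extensions:

tr 'A-Z' 'a-z' < ipsum.txt | grep -oE '\b[a-z]{4}\b' | sort -u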

0

Concatenate PDF files using GhostScript

 $ gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=output.pdf -dBATCH file1.pdf file2.pdf file3.pdf

— by Janos on Jan. 26, 2012, 8:51 a.m.

Explanation

Free PDF editing software might become more and more available, but this method has been working for a long time, and likely will continue to do so.

Limitations

It may not work with all PDFs, for example files that don't conform to Adobe's published PDF specification.
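If Ghostscript chokes on a particular file, pdftk (shown in an earlier entry on this page) can also concatenate PDFs:

pdftk file1.pdf file2.pdf file3.pdf cat output output.pdf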

0

Format text with long lines to text with fixed width

 $ fmt -s -w80 file.txt

— by Janos on Jan. 22, 2012, 10:08 a.m.

Explanation

  • It will break lines longer than 80 characters at appropriate white spaces to make them less than 80 characters long.
  • The -s flag will only split long lines, but not join short ones, so existing line breaks in lines already shorter than 80 characters are preserved.

0

Come back quickly to the current directory after doing some temporary work somewhere else

 $ pushd /some/where/else; work; cd /somewhere; work; cd /another/place; popd

— by Janos on Jan. 15, 2012, 11:12 p.m.

Explanation

  • pushd, popd and dirs are bash builtins, you can read about them with help dirs
  • bash keeps a stack of "remembered" directories, and this stack can be manipulated with the pushd and popd builtins, and displayed with the dirs builtin
  • pushd will put the current directory on top of the directory stack. So, if you need to change to a different directory temporarily and you know that eventually you will want to come back to where you are, it is better to change directory with pushd instead of cd. While working on the temporary task you can change directories with cd several times, and in the end when you want to come back to where you started from, you can simply do popd.
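A minimal sketch of the stack in action (directory names are just examples):

pushd /tmp    # remember the current directory, then cd to /tmp
dirs          # show the directory stack
popd          # return to where pushd was issued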

0

Export a git project to a directory

 $ git archive master | tar x -C /path/to/dir/to/export

— by Janos on Jan. 12, 2012, 11:04 a.m.

Explanation

The git archive command basically creates a tar file. The one-liner is to create a directory instead, without an intermediate tar file. The tar command above will untar the output of git archive into the directory specified with the -C flag. The directory must exist before you run this command.
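Since the target directory must already exist, a common pattern is to create it in the same line (same paths as above):

mkdir -p /path/to/dir/to/export && git archive master | tar x -C /path/to/dir/to/export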

0

Delete all tables of a mysql database

 $ mysql --defaults-file=my.cnf -e 'show tables' | while read t; do mysql --defaults-file=my.cnf -e 'drop table '$t; done

— by Janos on Jan. 8, 2012, 7:53 a.m.

Explanation

If you have root access to the database, a drop database + create database is easiest. This script is useful in situations where you don't have root access to the database.

First prepare a file my.cnf to store the database credentials, so you don't have to enter them on the command line:

[client]
database=dbname
user=dbuser
password=dbpass
host=dbhost

Make sure to protect this file with chmod go-rwx.

The one-liner will execute show tables on the database to list all tables. Then the while loop reads each table name line by line and executes a drop table command.

Limitations

The above solution is lazy, because not all lines in the output of show tables are table names, so you will see errors when you run it. But hey, shell scripts are meant to be lazy!
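One way to avoid those errors is to suppress the column-name header in the output; a sketch, assuming your mysql client supports -N (--skip-column-names):

mysql --defaults-file=my.cnf -N -e 'show tables' | while read t; do mysql --defaults-file=my.cnf -e "drop table \`$t\`"; done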