We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.


Redirect stdout to a file you don't have write permission on

 $ echo hello | sudo tee -a /path/to/file

— by Janos on Sept. 11, 2012, 9:24 a.m.

Explanation

  • The tee command copies standard input to standard output, making a copy in zero or more files.
  • If the -a flag is specified it appends instead of overwriting.
  • Calling tee with sudo makes it possible to write to files that the current user cannot write to but root can.
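
Note that applying sudo to the whole command would not help: output redirection is performed by the calling shell, with your own privileges, before sudo even runs. That is why a plain redirect fails:

 $ sudo echo hello >> /path/to/file
bash: /path/to/file: Permission denied

If you don't want to see the line echoed to standard output, append > /dev/null after the tee call.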


`tail -f` a file until text is seen

 $ tail -f /path/to/file.log | sed '/^Finished: SUCCESS$/ q'

— by Janos on Aug. 22, 2012, 8:29 a.m.

Explanation

tail -f until this exact line is seen:

Finished: SUCCESS

The exit condition does not have to be an exact line, it could just as well be a simple pattern:

... | sed '/Finished/ q'


Recording SSH sessions

 $ ssh -l USER HOST | tee -a /path/to/file

— by LeandroToledo on Aug. 15, 2012, 5:04 p.m.

Explanation

tee copies its standard input to standard output, and also into one or more files.

The -a option appends the output to the end of file instead of writing over it.

You can also create an alias in ~/.bashrc to record your session when using ssh:

function sshlog () { \ssh "$@" 2>&1 | tee -a "$(date +%Y%m%d).log"; }
alias ssh=sshlog

The backslash in \ssh bypasses the alias, so the function calls the real ssh instead of recursing into itself.


Record audio from microphone or sound input from the console

 $ sox -t ossdsp -w -s -r 44100 -c 2 /dev/dsp -t raw - | lame -x -m s - File.mp3

— by Kleper on July 28, 2012, 8:55 p.m.

Explanation

sox can read directly from the sound card device (/dev/dsp here) and write the raw samples it captures to standard output. Piping that stream into lame converts whatever passes through the sound card to MP3 in real time.

Limitations

Requires ALSA's OSS compatibility layer (which provides /dev/dsp) on modern distributions.
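
On systems that lack /dev/dsp entirely, a roughly equivalent pipeline can use ALSA's arecord instead (a sketch; -r tells lame to expect raw PCM, and the sample rate and channel mode must match what arecord produces):

 $ arecord -f cd -t raw | lame -r -s 44.1 -m s - File.mp3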


Use vim to pretty-print code with syntax highlighting

 $ vim +'hardcopy > output.ps' +q style.css 

— by Janos on July 21, 2012, 12:13 a.m.

Explanation

If you have syntax highlighting properly setup in vim, this command will pretty-print the specified file with syntax highlighting to output.ps.

If you prefer PDF, you can convert using ps2pdf output.ps.
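
To go straight to PDF, the two steps can be combined (assuming Ghostscript's ps2pdf is installed):

 $ vim +'hardcopy > output.ps' +q style.css && ps2pdf output.ps output.pdf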


Log and verify files received via FTP

 $ for i in $(cat /var/log/vsftpd.log | grep $DATE_TIME | grep UPLOAD | grep OK); do ls /FTP/HOME/$i > /dev/null 2>&1; if [ $? = 0 ]; then echo "$i" >> $FILES_OK_UPLOADS.log; else echo "$DATE ERROR: File $i not found" >> $FTP_FILES_NOTOK_$DATE_TIME.log; fi; done

— by dark_axl on July 10, 2012, 8:54 p.m.

Explanation

This one-liner validates the files received via FTP against the vsftpd log and generates a log of the results, so you have a record of which successfully uploaded files actually exist on disk and can be processed.


Edit the Gimp launcher file to disable the splash screen

 $ printf '%s\n' ',s/^Exec=[^ ]*/& -s/' w q | ed /usr/share/applications/gimp.desktop

— by Anon8yhYNaVe on July 1, 2012, 12:57 a.m.

Explanation

sed is designed for editing streams - editing files is what ed is for! You can get consistent behavior on any UNIX platform with the above one-liner.

The printf command sends a series of editing commands to ed, each separated by a newline. In this case, the substitution command ,s/^Exec=[^ ]*/& -s/ is nearly the same as in sed, appending a space and a -s to the line starting with Exec=. The only difference is the comma at the beginning designating the lines to operate on. This is shorthand for 1,$, which tells ed to apply the command to the first through the last lines (i.e., the entire file). w tells ed to write the file, and q to quit.
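
The same edit can be expressed as a here-document, which some find easier to read than the printf form:

ed /usr/share/applications/gimp.desktop <<'EOF'
,s/^Exec=[^ ]*/& -s/
w
q
EOF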


Faster disk imaging with dd

 $ dd if=/dev/sda bs=$(hdparm -i /dev/sda | grep BuffSize | cut -d ' ' -f 3 | tr '[:lower:]' '[:upper:]' | tr -d 'BUFFSIZE=,') conv=noerror | dd of=image.dd conv=noerror

— by austindcc on May 19, 2012, 3:28 a.m.

Explanation

GNU dd copies any block device to another block device or file. It's really useful for disk cloning, but its usual invocation isn't as fast as it could be. These settings, or settings like them, often more than double the copying speed.

  • Piping the output of one dd instance into the input of another seems to consistently improve copying speed.
  • /dev/sda refers to your input device, which may vary. Check yours with fdisk -l.
  • image.dd refers to the copy stored in the current working directory. You can also use another block device, such as /dev/sdb. WARNING! Be sure you know what you set the output file to! A mistake here could do irreparable damage to your system.
  • The entire hdparm subshell sets dd's input block size to the buffer size of the source medium. This also usually improves copy speed, but may need adjustment (see limitations below).
  • conv=noerror tells dd to ignore read errors.

Check dd's progress with: kill -USR1 $(pidof dd)

Limitations

The hdparm subshell is not appropriate for block devices without buffers, like flash drives. Try block sizes from 512 bytes to 1 or 2 MiB to get the best speed. dd usually requires root privileges to run, because it is very powerful and dangerous, and it will not prompt before overwriting! If you're not careful where dd outputs, you may permanently destroy all or part of your system. Use with care; double-check all parameters, especially the of file/device!
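
With a reasonably recent GNU dd (coreutils 8.24 or later) you can also ask for progress reporting directly, instead of using the kill -USR1 trick:

 $ dd if=/dev/sda of=image.dd bs=1M conv=noerror status=progress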


Convert a decimal number to octal, hexadecimal, binary, or anything

 $ bc <<< 'obase=2;1234'

— by openiduser43 on April 12, 2012, 8 p.m.

Explanation

<<< word is here-string syntax, a variant of here-documents: the string is fed to bc on standard input. obase=2 sets the output base to binary, so 1234 is printed as 10011010010.
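
Changing obase selects any other output base, and ibase handles non-decimal input, for example:

 $ bc <<< 'obase=16;1234'           # prints 4D2
 $ bc <<< 'obase=8;1234'            # prints 2322
 $ bc <<< 'ibase=16;obase=2;FF'     # prints 11111111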


Remove carriage return '\r' character in many files, without looping and intermediary files

 $ vi +'set hidden' +'bufdo set ff=unix | %s/^M$//e' +xa file1 file2 file3

— by Janos on March 30, 2012, 3:50 p.m.

Explanation

  • The arguments starting with + are commands that vi executes after loading the files
  • set hidden allows switching away from a modified buffer without writing it first, which bufdo needs to do
  • bufdo runs the rest of the command in all buffers (each file is opened in a separate buffer)
  • set ff=unix is a shortcut for set fileformat=unix and means to use the "unix" file format, i.e. without the carriage return \r character
  • %s/^M$// is a pattern substitution for all lines in the entire buffer; the pattern is "carriage return at the end of the line", where ^M is not two literal characters but actually one: to enter it on the command line press Ctrl-V followed by Enter/Return. The trailing e flag suppresses the error in buffers that contain no ^M at all
  • xa writes all changed buffers and exits

Note: the set ff=unix is necessary; otherwise the pattern substitution does nothing when every line ends with \r (that is, the file is entirely in DOS format), because in that case vi recognizes the DOS file format and no longer treats the \r characters as part of the lines.

Note: if a shell script "accidentally" contains some carriage returns, then when you try to execute it you may get an error: bad interpreter: No such file or directory. This one-liner fixes that problem. If you know that all the lines in the file have the carriage return, and there is only one file to fix, then a simplified version of the one-liner is enough:

vi +'set ff=unix' +wq file1


Sort and remove duplicate lines in a file in one step without intermediary files

 $ vi +'%!sort | uniq' +wq file.txt

— by Janos on March 22, 2012, 1:09 p.m.

Explanation

We open a file with vi and run two vi commands (specified with +):

  1. %!sort | uniq
    • % = range definition, it means all the lines in the current buffer.
    • ! = run filter for the range specified. Filter is an external program, in this example sort | uniq
  2. wq = write buffer contents to file and exit.
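
Since sort can remove duplicates itself, the external filter can be shortened to sort -u:

 $ vi +'%!sort -u' +wq file.txt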


Show files containing "foo" and "bar" and "baz"

 $ grep -l 'baz' $(grep -l 'bar' $(grep -lr 'foo' *) )

— by Anon5eqErEbE on March 16, 2012, 5:37 a.m.

Explanation

Most people familiar with extended regular expressions know you can use the pipe symbol | to represent "or", so to see files containing any of "foo", "bar", or "baz" you could run:

grep -Elr 'foo|bar|baz' *

There is no corresponding symbol representing "and", but you can achieve the same effect by nesting invocations to grep. grep -lr 'foo' * returns a list of filenames in or below the current directory containing "foo". Via the $( ... ) syntax, this list is then operated on by grep -l 'bar', returning a list of filenames containing both 'foo' and 'bar', which finally is operated on by grep -l "baz". The end result is a list of filenames containing all three terms.

Limitations

This one-liner results in scanning files multiple times. You will want to put the term you expect to match the fewest number of times farthest to the right (that is, in the same position as "foo") and the one you expect to match most frequently farthest to the left (the same position as "baz"). This way, you will weed out the largest number of files sooner, making the one-liner complete more quickly.
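
Note also that the nested $( ... ) form breaks on filenames containing whitespace. With GNU grep the same nesting logic can be expressed as a NUL-separated pipeline that handles such names safely:

 $ grep -rlZ 'foo' * | xargs -0 grep -lZ 'bar' | xargs -0 grep -l 'baz'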


Find in files, recursively

 $ grep -rn 'nameserver' /etc 2>/dev/null

— by atpessoa on Feb. 19, 2012, 8:24 a.m.

Explanation

  • -r makes the search recursive;
  • -n prints line numbers;
  • -H (print the file name for each match) is not needed; it is the default when searching more than one file;

Limitations

  • use -i for a case-insensitive search;


Calculate the total disk space used by a list of files or directories

 $ du -cshx ./a ./b

— by openiduser14 on Feb. 15, 2012, 10:43 p.m.

Explanation

  • -s, --summarize; display only a total for each argument
  • -c, --total; produce a grand total
  • -x, --one-file-system; skip directories on different file systems
  • -h, --human-readable; print sizes in human readable format (e.g., 1K 234M 2G)


Create a compressed tar file that rsync will transfer efficiently

 $ GZIP='--rsyncable' tar cvzf bobsbackup.tar.gz /home/bob

— by Anon6y5E4Use on Feb. 15, 2012, 12:24 p.m.

Explanation

rsync works by comparing the files on the local and remote machines and only sending those parts of a file that have changed. With the way compression normally works, everything after a modification changes in the compressed output, so a lot of data ends up going over the network when you try to rsync compressed files.

The --rsyncable option to gzip changes the compression scheme so that modifications to the input file only affect the part of the file where they're located. This allows rsync to work its magic.

In this one-liner, the z option to tar calls gzip, which recognizes and uses any options specified in the GZIP environment variable.
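
With GNU tar the option can alternatively be passed on the command line rather than through the environment (assuming your tar is recent enough to accept arguments in --use-compress-program):

 $ tar -cvf bobsbackup.tar.gz --use-compress-program='gzip --rsyncable' /home/bob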

Limitations

Using the --rsyncable option results in a slightly larger compressed file.

Not all versions of gzip include this feature - use the --help option to see if it's available on your system.


Cut select pages from a pdf file and create a new file from those pages.

 $ ps2pdf -dFirstPage=3 -dLastPage=10 input.pdf output.pdf

— by Anon6y5E4Use on Feb. 15, 2012, 11:08 a.m.

Explanation

ps2pdf is a script that comes with Ghostscript - despite the name, it can accept PDF files as input, not just PostScript files.

Limitations

Only a single contiguous range of pages can be specified.


Print the lines of file2 that are missing in file1

 $ comm -23 file2 file1

— by Anon9ge6A4uD on Feb. 13, 2012, 8:26 a.m.

Explanation

The POSIX-standard comm utility can do this.

  • -2 suppresses lines that appear only in the second argument (file1)
  • -3 suppresses lines appearing in both files

Limitations

Assumes that file1 and file2 are already sorted. If they are not, you can use process substitution to do so:

comm -23 <(sort file2) <(sort file1)

Process substitution is a bash feature (also available in zsh and ksh93), not part of POSIX.


Uses 'at' to run an arbitrary command at a specified time.

 $ echo 'play alarmclock.wav 2>/dev/null' | at 07:30 tomorrow

— by Anon5MAQumYj on Feb. 4, 2012, 11:03 a.m.

Explanation

at 07:30 tomorrow schedules a job for 7:30 AM the next day, running whatever command or script is fed to it as standard input. The format for specifying time and date is rather flexible; see http://tinyurl.com/ibmdwat for details.

echo 'play alarmclock.wav 2>/dev/null' | feeds the play alarmclock.wav command to at, while 2>/dev/null causes the text output of play to be thrown away (we are only interested in the alarm sound).
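
To review or cancel the job afterwards, use at's companion commands:

 $ atq      # list pending jobs with their job numbers
 $ atrm 4   # remove the job with number 4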


Calculate an h index from an EndNote export

 $ MAX=$(NUM=1;cat author.xml |perl -p -e 's/(Times Cited)/\n$1/g'|grep "Times Cited" |perl -p -e 's/^Times Cited:([0-9]*).*$/$1/g'|sort -nr | while read LINE; do if [ $LINE -ge $NUM ]; then echo "$NUM"; fi; NUM=$[$NUM+1]; done;); echo "$MAX"|tail -1

— by openiduser14 on Feb. 4, 2012, 1:06 a.m.

Explanation

EndNote?! I know, but sometimes we have Windows users as friends.

How it works: the pipeline extracts every "Times Cited" count from the EndNote XML export, sorts the counts in descending order, then walks down the list comparing each count to its rank; a rank is echoed while the count at that rank is still greater than or equal to it, so the last number printed (picked out by tail -1) is the h-index.


Cut select pages from a pdf file and create a new file from those pages.

 $  pdftk input.pdf cat 2-4 7 9-10 output output.pdf

— by mmaki on Feb. 3, 2012, 6:50 a.m.

Explanation

pdftk is the PDF Toolkit

input.pdf is the input file.

cat 2-4 7 9-10 concatenate (combine) pages 2,3,4,7,9,10 of input.pdf.

output output.pdf the resulting pdf file containing the above pages.


Re-compress a gzip (.gz) file to a bzip2 (.bz2) file

 $ time gzip -cd file1.tar.gz 2>~/logfile.txt | pv -t -r -b -W -i 5 -B 8M | bzip2 > file1.tar.bz2 2>>~/logfile.txt

— by DAVEB on Feb. 1, 2012, 6:02 p.m.

Explanation

Requires pv (Pipe Viewer) if you want to monitor throughput; otherwise you can leave the pv stage out of the pipeline.

Transparently decompresses an arbitrary .gz file (does not have to be a tar) and re-compresses it to bzip2, which has better compression and error recovery. Echoes error messages to a file named logfile.txt in your home directory.

NOTE: The original .gz file will NOT be deleted. If you want to save space, you will have to delete it manually.


Test your hard drive speed

 $ time (dd if=/dev/zero of=zerofile bs=1M count=500;sync);rm zerofile

— by DAVEB on Feb. 1, 2012, 5:35 p.m.

Explanation

Creates a 500 MB file of zeroes and times how long it takes to write the whole thing to disk; the sync forces buffered data to be flushed, so the timing reflects the physical write.

time measures the entire dd + sync operation, and then the temporary file is removed. For example, if the total comes to 5 seconds, the sustained write speed is roughly 500 MB / 5 s = 100 MB/s.

Limitations

Works with Bash; not tested in other environments


Recursively remove all empty sub-directories from a directory tree

 $ find . -depth  -type d  -empty -exec rmdir {} \;

— by openiduser16 on Jan. 31, 2012, 11:15 p.m.

Explanation

Recursively remove all empty sub-directories from a directory tree using just find. No need for tac (-depth already makes find process the deepest directories first), and no need for xargs, since the directory contents change with each call to rmdir. We also don't rely on rmdir refusing to delete non-empty directories: -empty guarantees only empty ones are passed to it.

Limitations

Makes a separate call to rmdir for each directory instead of bunching the names into one argument list with xargs, which is normally more efficient. Here, however, -empty ensures that only empty directories are passed to rmdir, so there are usually few calls anyway, and this can even come out ahead in most cases (searching /, for example).
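
With GNU find, the -delete action achieves the same result without spawning any rmdir processes (it implies -depth):

 $ find . -type d -empty -delete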


Group count sort a log file

 $ A=$(FILE=/var/log/myfile.log; cat $FILE | perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' | sort -u | while read LINE; do grep "$LINE" $FILE | wc -l | perl -p -e 's/[^0-9]+//g'; echo -e "\t$LINE"; done;);echo "$A"|sort -nr

— by openiduser14 on Jan. 31, 2012, 6:49 p.m.

Explanation

  • SQL: SELECT COUNT(x), x FROM y GROUP BY x ORDER BY count DESC;
  • BASH: a temp var to hold the result for the final sort: A=$(
  • the file you want: FILE=/var/log/myfile.log
  • dump the file to a stream: cat $FILE |
  • cut out the bits you want to count: perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' |
  • get a unique list: sort -u |
  • for each line/value in the stream do stuff: while read LINE; do
  • dump all lines matching the current value to an inner stream: grep "$LINE" $FILE |
  • count them: wc -l |
  • clean up the output of wc and drop the value on stdout: perl -p -e 's/[^0-9]+//g';
  • drop the current value to stdout: echo -e "\t$LINE";
  • finish per value operations on the outer stream: done;
  • finish output to the temp var: );
  • dump the temp var to a pipe: echo "$A" |
  • sort the list numerically in reverse: sort -nr
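
For comparison, the classic sort | uniq -c idiom produces the same group-count-sort in a single pass, assuming the same extraction regex:

 $ perl -p -e 's/.*,([A-Z]+)[\:\+].*/$1/g' /var/log/myfile.log | sort | uniq -c | sort -nr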


Use ghostscript to shrink PDF files

 $ gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

— by openiduser10 on Jan. 31, 2012, 10:43 a.m.

Explanation

Replace input.pdf and output.pdf with the original PDF name and the new compressed version's file name respectively. The key to this is the PDFSETTINGS option which can be tuned for various levels of compression. For scanned text documents, I find the ebook setting works well enough for most purposes but you can experiment with the options below.

  • -dPDFSETTINGS=/screen (screen-view-only quality, 72 dpi images)
  • -dPDFSETTINGS=/ebook (low quality, 150 dpi images)
  • -dPDFSETTINGS=/printer (high quality, 300 dpi images)
  • -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi images)
  • -dPDFSETTINGS=/default (almost identical to /screen)