We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

How to find all hard links to a file

 $ find /home -xdev -samefile file1

— by openiduser7 on Jan. 30, 2012, 8:56 p.m.

Explanation

Note: replace /home with the location you want to search

Source: http://linuxcommando.blogspot.com/2008/09/how-to-find-and-delete-all-hard-links.html
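As a quick sanity check, the command can be exercised on a throwaway file and hard link (the names below are made up for the demo):

```shell
# Create a scratch file plus one hard link, then list every path sharing its inode.
tmp=$(mktemp -d)
touch "$tmp/file1"
ln "$tmp/file1" "$tmp/link1"
find "$tmp" -xdev -samefile "$tmp/file1"   # lists both paths
```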

Find all the unique 4-letter words in a text

 $ cat ipsum.txt | perl -ne 'print map("$_\n", m/\w+/g);' | tr A-Z a-z | sort | uniq | awk 'length($1) == 4 {print}'

— by Janos on Jan. 29, 2012, 10:28 p.m.

Explanation

  • The perl regex m/\w+/g matches runs of consecutive word characters, resulting in a list of all the words in the source string
  • map("$_\n", @list) transforms a list, appending a new-line at the end of each element
  • tr A-Z a-z transforms uppercase letters to lowercase
  • In awk, length($1) == 4 {print} means: for lines matching the filter condition "length of the first column is 4", execute the block of code, in this case simply print

Concatenate two or more movie files into one using mencoder

 $ mencoder cd1.avi cd2.avi -o movie.avi -ovc copy -oac copy

— by Janos on Dec. 24, 2011, 3:51 p.m.

Explanation

  • You can specify as many files as you want on the command line to process them in sequence.
  • -ovc copy simply means to copy the video exactly
  • -oac copy simply means to copy the audio exactly
  • -o movie.avi is the output file, with all the source files concatenated

Limitations

  • mencoder is usually not a standard package
  • Depending on the distribution, mencoder may be packaged together with mplayer, or separately
  • Binary packages of mencoder are available for Linux, Mac and Windows

See the MPlayer homepage for more info: http://www.mplayerhq.hu/

Calculate the average execution time (of short running scripts) with awk

 $ for i in {1..10}; do time some_script.sh; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'

— by Janos on Dec. 21, 2011, 8:50 a.m.

Explanation

  • The for loop runs some_script.sh 10 times, measuring its execution time with time
  • The stderr of the for loop is redirected to stdout, this is to capture the output of time so we can grep it
  • grep ^real is to get only the lines starting with "real" in the output of time
  • sed is to delete the beginning of the line up to minutes part (in the output of time)
  • For each line, awk adds to the sum, so that in the end it can output the average, which is the total sum, divided by the number of input records (= NR)

Limitations

The snippet assumes that the running time of some_script.sh is less than 1 minute; otherwise the sed step strips the wrong part of the line and the result is meaningless. Depending on your system, the time builtin might format its output differently. An alternative is to use the external command /usr/bin/time instead of the bash builtin.
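The averaging stage can be checked in isolation by feeding awk a few fake timings already stripped down to seconds:

```shell
# Three runs of 0.01s, 0.02s and 0.03s should average to 0.02.
printf '0.01\n0.02\n0.03\n' | awk '{sum += $1} END {print sum / NR}'
# → 0.02
```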

Rotate a movie file with mencoder

 $ mencoder video.avi -o rotated-right.avi -oac copy -ovc lavc -vf rotate=1

— by Janos on Dec. 2, 2011, 10:30 p.m.

Explanation

mencoder is part of mplayer.

Other possible values of the rotate parameter:

  • 0: Rotate by 90 degrees clockwise and flip (default).
  • 1: Rotate by 90 degrees clockwise.
  • 2: Rotate by 90 degrees counterclockwise.
  • 3: Rotate by 90 degrees counterclockwise and flip.

View specific column of data from a large file with long lines

 $ cat /tmp/log.data | colrm 1 155 | colrm 60 300

— by versorge on Oct. 4, 2011, 9:55 p.m.

Explanation

  • cat: prints the file to standard output
  • colrm 1 155: removes the first 155 characters of each line
  • colrm 60 300: removes characters 60-300 of what remains, leaving a window of at most 59 characters (characters 156-214 of the original line)
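A small demo with made-up column numbers shows how the two colrm calls chain:

```shell
# Drop columns 1-2, then columns 4-6 of what remains: 'abcdefghij' → 'cdeij'.
printf 'abcdefghij\n' | colrm 1 2 | colrm 4 6
# → cdeij
```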

Replace a regexp pattern in many files at once

 $ vi +'bufdo %s/pattern/replacement/g | update' +q $(grep -rl pattern /path/to/dir)

— by Janos on Sept. 15, 2011, 11:50 p.m.

Explanation

  • The inner grep will search recursively in specified directory and print the filenames that contain the pattern.
  • All files will be opened in vi, one buffer per file.
  • The arguments starting with + will be executed as vi commands:
    • bufdo %s/pattern/replacement/g | update = perform the pattern substitution in all files and save the changes
    • q = exit vi

Limitations

The :bufdo command might not be there in old versions of vim.
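If you don't need to review the changes interactively, a non-interactive alternative with the same grep selection is sed (GNU sed's -i is assumed; the filenames below are made up):

```shell
tmp=$(mktemp -d)
printf 'foo pattern bar\n' > "$tmp/a.txt"
printf 'no match here\n'   > "$tmp/b.txt"
# Only files that actually contain the pattern are handed to sed.
grep -rl pattern "$tmp" | xargs sed -i 's/pattern/replacement/g'
cat "$tmp/a.txt"
# → foo replacement bar
```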

Delete all untagged Docker images

 $ docker rmi $(docker images -f "dangling=true" -q)

— by stefanobaghino on April 27, 2018, 2:50 p.m.

Explanation

docker images outputs all images currently available. By specifying -f "dangling=true" we restrict the list to "dangling" images (i.e. untagged). The -q option enables quiet mode, which limits the output to the image hashes, which are then fed directly into docker rmi to remove the images with the corresponding hashes.

Get a free shell account on a community server

 $ sh <(curl hashbang.sh | gpg)

— by lrvick on March 15, 2015, 9:49 a.m.

Explanation

Bash process substitution which curls the website 'hashbang.sh' and executes the shell script embedded in the page.

This is obviously not the most secure way to run something like this, and we will scold you if you try.

The smarter way would be:

Download locally over SSL

curl https://hashbang.sh >> hashbang.sh

Verify integrity with GPG (if available):

gpg --recv-keys 0xD2C4C74D8FAA96F5
gpg --verify hashbang.sh

Inspect source code:

less hashbang.sh

Run:

chmod +x hashbang.sh
./hashbang.sh

Run a local shell script on a remote server without copying it there

 $ ssh user@server bash < /path/to/local/script.sh

— by Janos on June 21, 2012, 12:06 a.m.

Explanation

Yes, this is almost trivial: a simple input redirection feeds the local shell script to bash running on the remote server.

The important point: if you have a complex and very long chain of commands to run on a remote server, it is better to put the commands in a shell script and break the long one-liner into multiple lines for readability and easier debugging.

Replace bash accordingly depending on the language of the script, for example for python:

ssh user@server python < /path/to/local/script.py
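The redirection itself can be tried locally without a server, since bash reads the script from standard input either way (the script path below is made up):

```shell
# Write a throwaway script, then run it via input redirection -- no copying involved.
tmp=$(mktemp -d)
printf 'echo hello from stdin\n' > "$tmp/script.sh"
bash < "$tmp/script.sh"
# → hello from stdin
```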

Print wifi access points sorted by signal

 $ iw dev IFACE scan | egrep "SSID|signal" | awk -F ":" '{print $2}' | sed 'N;s/\n/:/' | sort

— by kazatca on June 16, 2018, 5:37 a.m.

Explanation

  • iw dev IFACE scan gets info about the access points in range
  • egrep "SSID|signal" keeps only the name and signal lines
  • awk -F ":" '{print $2}' cuts off the field labels
  • sed 'N;s/\n/:/' joins each pair of lines into a single line
  • sort sorts by signal, ascending

IFACE - wifi interface (like wlan0)
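The pair-joining step is the least obvious part; here it is in isolation with fabricated values:

```shell
# sed's N appends the next input line to the pattern space;
# s/\n/:/ then replaces the embedded newline, joining the pair.
printf 'mynet\n-52.00\nothernet\n-71.00\n' | sed 'N;s/\n/:/'
# → mynet:-52.00
#   othernet:-71.00
```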

Kill a process running on port 8080

 $ lsof -i :8080 | awk '{l=$2} END {print l}' | xargs kill

— by jamestomasino on June 15, 2018, 4:18 a.m.

Explanation

As before, we're using lsof to find the PID of the process running on port 8080. We use awk to store the second column of each line into a variable l, overwriting it with each line. In the END clause, we're left with the second column of only the last line. xargs passes that as a parameter to the kill command.

The only notable difference from the command listed above is the use of awk to also perform the tail -n 1 step. This awk pattern matches the intended behavior of the version that used tail. To kill all processes on that port, you could use the NR>1 condition instead of the last-line variable trick.
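The last-line trick can be seen on fabricated lsof-style output:

```shell
# l is overwritten on every line; END leaves only the final line's second column.
printf 'COMMAND PID\nchrome 2619\nchrome 2733\n' | awk '{l=$2} END {print l}'
# → 2733
```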

Delete all untagged Docker images

 $ docker images -q -f dangling=true | xargs --no-run-if-empty --delim='\n' docker rmi

— by penguincoder on June 15, 2018, 1:12 a.m.

Explanation

It does not return a failing exit code if there are no images removed. It should always succeed unless there was an actual problem removing a Docker image.

Limitations

This only works in the GNU version of xargs (thanks to the --no-run-if-empty), BSD does not have an equivalent that I know about.
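The difference is easy to demonstrate: with empty input, plain xargs still runs its command once, while --no-run-if-empty (short form -r, GNU xargs assumed) runs nothing and still exits 0.

```shell
# No input lines → no command invocation → no output, successful exit.
printf '' | xargs --no-run-if-empty echo would-remove
echo "exit status: $?"
# → exit status: 0
```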

Take values from a list (file) and search them on another file

 $ for ITEM in `cat values_to_search.txt`; do  (egrep $ITEM full_values_list.txt && echo $ITEM found) | grep "found" >> exit_FOUND.txt; done

— by ManuViorel on May 16, 2018, 3:20 p.m.

Explanation

This line :) searches for values taken from a file (values_to_search.txt) by scanning a full list of values. If a value is found, it is appended to a new file, exit_FOUND.txt.

Alternatively, we can search for values from list 1 which do NOT exist in list 2, as below:

for ITEM in `cat values_to_search.txt`; do (egrep $ITEM full_values_list.txt || echo $ITEM not found) | grep "not found" >> exit_not_found.txt; done
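A self-contained run of the first form with throwaway files (all names made up; the variable is quoted and grep -q replaces the found-filter for simplicity):

```shell
cd "$(mktemp -d)"
printf 'alpha\nbeta\n' > values_to_search.txt
printf 'alpha line here\ngamma delta\n' > full_values_list.txt
# alpha appears in the full list, beta does not.
for ITEM in $(cat values_to_search.txt); do
  grep -q -E "$ITEM" full_values_list.txt && echo "$ITEM found" >> exit_FOUND.txt
done
cat exit_FOUND.txt
# → alpha found
```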

Limitations

The values are read with unquoted variable expansions, so items containing whitespace or regex metacharacters will misbehave; quote the variables and consider grep -F for literal matching.

Source without circular reference

 $ [ ! "${LIB}" ] && ( readonly LIB; . "$( cd "$( dirname "$0" )" && pwd )/<path_to>/LIB.sh" )

— by dhsrocha on Jan. 24, 2018, 4:30 p.m.

Explanation

Source LIB.sh only if the corresponding variable is not yet defined, to prevent an infinite loop of circular references in case the same script has already been sourced earlier in the sourcing chain.
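The same idea can be sketched with a guard variable inside the library itself (LIB.sh and greet are made-up names for the demo):

```shell
tmp=$(mktemp -d)
cat > "$tmp/LIB.sh" <<'EOF'
# Guard: if already sourced, return immediately instead of redefining.
if [ -n "${LIB:-}" ]; then return; fi
readonly LIB=1
greet() { echo hello; }
EOF
. "$tmp/LIB.sh"
. "$tmp/LIB.sh"   # second source hits the guard; no "readonly variable" error
greet
# → hello
```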

Remove new lines from files and folders

 $ rename 's/[\r\n]//g' *

— by moverperfect on Sept. 30, 2017, 10:07 p.m.

Explanation

This will search all files and folders in the current directory for names containing a newline or carriage return character, and rename them with those characters stripped out.

Kill a process running on port 8080

 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

— by Janos on Sept. 1, 2017, 8:31 p.m.

Explanation

lsof lists open files (ls-o-f, get it?). lsof -i :8080 lists open files on address ending in :8080. The output looks like this

COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
chrome  2619 qymspace  149u  IPv4  71595      0t0  TCP localhost:53878->localhost:http-alt (CLOSE_WAIT)

We use awk 'NR > 1 {print $2}' to print the second column for lines except the first. The result is a list of PIDs, which we pipe to xargs kill to kill.

Limitations

The --no-run-if-empty option of xargs is available in GNU implementations, and typically not available in BSD implementations. Without this option, the one-liner will raise an error if there are no matches (no PIDs to kill).

Kill a process running on port 8080

 $ lsof -i :8080 | awk '{print $2}' | tail -n 1 | xargs kill

— by kimbethwel on Aug. 18, 2017, 8:22 a.m.

Explanation

lsof lists open files (ls-o-f, get it?). lsof -i :8080 lists open files on address ending in :8080. The output looks like this

COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
chrome  2619 qymspace  149u  IPv4  71595      0t0  TCP localhost:53878->localhost:http-alt (CLOSE_WAIT)

We pipe this input through awk to print column 2 using the command awk '{print $2}' to produce the output:

PID
2533

To remove the header word PID from this output we use tail -n 1 to grab only the last row, 2533.

We can now pass this process id to the kill command to kill it.

Preserve your fingers from cd ..; cd ..; cd..; cd..;

 $ up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }

— by alireza6677 on June 28, 2017, 5:40 p.m.

Explanation

Include this function in your .bashrc

Now you are able to go back in your path simply with up N. So, for example:

Z:~$ cd /var/lib/apache2/fastcgi/dynamic/
Z:/var/lib/apache2/fastcgi/dynamic$ up 2
Z:/var/lib/apache2$ up 3
Z:/$
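A scripted run of the same session, using scratch directories instead of the apache2 path:

```shell
# Define the function, walk down a throwaway tree, then jump two levels back up.
up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c"
cd "$tmp/a/b/c"
up 2          # two levels up, i.e. back to .../a
pwd
```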

Get the HTTP status code of a URL

 $ curl -Lw '%{http_code}' -s -o /dev/null -I SOME_URL

— by Janos on June 19, 2017, 11:15 p.m.

Explanation

  • -w '%{http_code}' is to print out the status code (the meat of this post)
  • -s is to make curl silent (suppress download progress stats output)
  • -o /dev/null is to redirect all output to /dev/null
  • -I is to fetch the headers only, no need for the page content
  • -L is to follow redirects

Corporate random bullshit generator (cbsg)

 $ curl -s http://cbsg.sourceforge.net/cgi-bin/live | grep -Eo '^<li>.*</li>' | sed s,\</\\?li\>,,g | shuf -n 1 | cowsay

— by Jab2870 on June 7, 2017, 4:11 p.m.

Explanation

Let's make a cow talk BS.

Limitations

I don't think cowsay is installed by default on a Mac, although it can be installed with brew install cowsay

List the content of a GitHub repository without cloning it

 $ svn ls https://github.com/user/repo/trunk/some/path

— by Janos on May 21, 2017, 6:01 p.m.

Explanation

Git doesn't allow querying sub-directories of a repository. But GitHub repositories are also exposed as Subversion repositories, and Subversion allows arbitrary path queries using the ls command.

Notice the /trunk/ between the base URL of the repository and the path to query. This is due to the way GitHub provides Subversion using a standard Subversion repository layout, with trunk, branches and tags sub-directories.

Create an array of CPU frequencies in GHz

 $ cpus=($({ echo scale=2; awk '/cpu MHz/ {print $4 " / 1000"}' /proc/cpuinfo; } | bc))

— by openiduser146 on Dec. 28, 2015, 9:02 p.m.

Explanation

  • The awk command takes the input from /proc/cpuinfo, matches lines containing "cpu MHz", and appends the " / 1000" to the CPU frequency, so it's ready for piping to bc
  • The echo scale=2 is for bc, to get floating point numbers with a precision of maximum two decimal points
  • Group the echo scale=2 and the awk for piping to bc, by enclosing the commands within { ...; }
  • Run the commands in a $(...) subshell
  • Wrap the subshell within (...) to store the output lines as an array

From the cpus array, you can extract the individual CPU values with:

cpu0=${cpus[0]}
cpu1=${cpus[1]}
cpu2=${cpus[2]}
cpu3=${cpus[3]}

If you don't need the values in GHz, but MHz is enough, then the command is a lot simpler:

cpus=($(awk '/cpu MHz/ {print $4}' /proc/cpuinfo))

Limitations

Arrays are Bash specific, might not work in older /bin/sh.

/proc/cpuinfo exists only in Linux.

Test git archive before actually creating an archive // fake dry run

 $ git archive master some/project/subdir | tar t

— by openiduser146 on Dec. 22, 2015, 2:29 p.m.

Explanation

git archive doesn't have a --dry-run flag, and it would be nice to see what files would be in the archive before actually creating it.

  • git archive master some/project/subdir
  • Create an archive from the master branch, with only a specified sub-directory of the project in it (instead of the entire repo)
  • Note: without specifying a file, the archive is dumped to standard output
  • tar t : the t flag of tar is to list the content of an archive. In this example the content comes from standard input (piped from the previous command)

In other words, this command creates an archive without ever saving it in a file, and uses tar t to list the contents. If the output looks good, then you can create the archive with:

git archive master -o file.tar some/project/subdir
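A full dry-run demo in a throwaway repository (HEAD is used instead of master so it works regardless of the default branch name; all paths are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir -p some/project/subdir
echo hi > some/project/subdir/file.txt
git add .
git -c user.email=you@example.com -c user.name=you commit -qm init
# List what the archive would contain, without writing any archive file.
git archive HEAD some/project/subdir | tar t
```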

Shuffle lines

 $ seq 5 | shuf

— by openiduser184 on March 12, 2015, 7:58 a.m.

Explanation

shuf is part of GNU Core Utilities and should be available on most Linux systems.
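Since the output order is random, the simplest check is that shuffling only permutes the lines; sorting restores the original sequence:

```shell
# A shuffle is a permutation: sort -n brings back seq's original order.
seq 5 | shuf | sort -n
# → 1 2 3 4 5 (one number per line)
```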