We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.



How to send an HTTP POST to a website with a file input field

 $ curl -L -v -F "value=@myfile" "http://domain.tld/whatever.php"

— by openiduser14 on Feb. 15, 2012, 11:26 p.m.


  • curl the tool that performs the upload; read man curl if you need more info, e.g. on sending cookies. You could also use wget.
  • -L follow redirects
  • -v be verbose
  • -F an input field
  • value= the name of the input field
  • @myfile the file you want uploaded
  • "http://domain.tld/whatever.php" the url that will take the file


Rename all files in a directory to lowercase names

 $ paste <(ls) <(ls | tr A-Z a-z) | while read OLD NEW; do echo mv -v $OLD $NEW; done

— by Janos on Aug. 5, 2011, 8:57 p.m.


  • <(cmd) is the filename of a named pipe (FIFO), where the named pipe is filled by the output of cmd
  • paste puts the two named pipes together to form two columns: the first with the original filenames, the second with the lowercased filenames
  • ... | tr abc ABC transforms stdin by replacing the characters in the first set with the corresponding characters in the second set
  • while read OLD NEW; do ...; done reads the first column of each line into $OLD and the second into $NEW
  • echo makes this a dry run: it prints the mv commands instead of running them; drop the echo to actually rename


  • Won't work if there are spaces in a filename; see the variant below.
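
A variant that survives spaces in filenames, using a plain glob loop instead of paste (a sketch; like the original, it stays a dry run until you delete the echo):

for OLD in *; do
  NEW=$(printf '%s' "$OLD" | tr '[:upper:]' '[:lower:]')
  [ "$OLD" = "$NEW" ] || echo mv -v -- "$OLD" "$NEW"
done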


Have script run itself in a virtual terminal

 $ tty >/dev/null || { urxvt -e /bin/sh -c "tty >/tmp/proc$$; while test x; do sleep 1; done" & while test ! -f /tmp/proc$$; do sleep .1; done; FN=$(cat /tmp/proc$$); rm /tmp/proc$$; exec >$FN 2>$FN <$FN; }

— by openiduser111 on March 9, 2018, 2:56 a.m.


Why not just launch the script with urxvt -e directly? Because urxvt -e kills any subprocesses when it exits, and I wanted them to stay. Instead:

  • We begin by testing if the script is not in a terminal with tty.
  • If it is not, we start a terminal that runs tty and writes the result to a file named after $$, which was set by the original script and is its PID. The terminal is started in the background with &, and the original script waits for the file to appear, then reads and removes it.
  • Finally, the main command is a special syntax of the bash builtin command exec that contains nothing but redirections (of stdout, stderr, and stdin) so they will apply to every command in the rest of the script file.
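
As a minimal illustration of that last point (the log path is hypothetical), an exec with no command and only redirections rewires the current shell for everything that follows:

#!/bin/bash
# exec with only redirections: applies to the rest of the script
exec >/tmp/myscript.log 2>&1
echo "this line, and all later output, lands in /tmp/myscript.log"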


Big CSV > batches > JSON array > CURL POST data with sleep

 $ cat post-list.csv | split -l 30 - --filter='jq -R . | jq --slurp -c .' | xargs -d "\n" -I % sh -c 'curl -H "Content-Type: application/json" -X POST -d '"'"'{"type":1,"entries":%}'"'"' && sleep 30'

— by pratham2003 on March 7, 2018, 12:12 p.m.


In my example, post-list.csv contains a list of URLs.

  • split -l 30 Split by 30 lines

  • - Use stdin as input for split

  • --filter Couldn't find a way to easily pipe to stdout from split, hence --filter

  • jq -R . From the jq manual - Don’t parse the input as JSON. Instead, each line of text is passed to the filter as a string

  • jq --slurp -c . From the jq manual - Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once. -c makes it easier to pipe and use it in the xargs that follows.

  • xargs -d "\n" -I % sh -c Execute a command for each batch. Use "\n" as the delimiter. Use % as a placeholder in the command that follows.

  • Single quotes inside sh -c ' ... ' are escaped as '"'"' single-double-single-double-single. You can do whatever you need to inside sh -c ' ... && sleep 123'


You need jq installed, for example in Debian / Ubuntu:

apt-get install jq

See also https://stedolan.github.io/jq/manual/

I suspect the input file (cat post-list.csv) may not contain double or single quotes but haven't tested it.
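
To preview what the --filter produces before wiring up curl, feed it a couple of lines by hand (the URLs are made up):

$ printf 'https://a.example\nhttps://b.example\n' | jq -R . | jq --slurp -c .
["https://a.example","https://b.example"]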


List all packages with at least one class defined in a JAR file

 $ jar tf "$1" | grep '/.*\.class$' | xargs dirname | sort -u | tr / .

— by stefanobaghino on Feb. 19, 2018, 12:13 p.m.


The jar command allows you to read or manipulate JAR (Java ARchive) files, which are ZIP files that usually contain classfiles (Java compiled bytecode files) and possibly manifests and configuration files. We specify that we want to list the contents (t) of the file we provide as an argument (f; otherwise the archive would be read from stdin).

From the output, we keep only the paths that contain a classfile (grep), take the path of the package containing each one (xargs dirname), keep the unique, sorted paths (sort -u), and translate /s to .s (tr), so the names display as they would in Java syntax.


Will exhaustively list the packages with a defined class only for languages that require packages to map to the directory structure (e.g. Java does, Scala doesn't). As long as that convention is respected, though, the command outputs an exhaustive list of packages regardless of language.
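
For example, given a hypothetical app.jar containing com/example/app/Main.class and com/example/util/Strings.class:

$ jar tf app.jar | grep '/.*\.class$' | xargs dirname | sort -u | tr / .
com.example.app
com.example.util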


Output an arbitrary number of open TCP or UDP ports in an arbitrary range

 $ comm -23 <(seq "$FROM" "$TO") <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n "$HOWMANY"

— by stefanobaghino on Feb. 9, 2018, 3:51 p.m.


Originally published (by me) on unix.stackexchange.com.

comm is a utility that compares sorted lines in two files. It outputs three columns: lines that appear only in the first file, lines that appear only in the second one, and common lines. Specifying -23 suppresses the second and third columns, keeping only what is unique to the first file. We can use this to obtain the difference of two sets, expressed as a sequence of text lines.

The first file is the range of ports that we can select from. seq produces a sorted sequence of numbers from $FROM to $TO. The result is piped to comm as the first file using process substitution.

The second file is the sorted list of ports, which we obtain by calling the ss command (with -t meaning TCP ports, -a meaning all - established and listening - and -n meaning numeric - don't try to resolve, say, 22 to ssh). We then pick only the fourth column with awk, which contains the local address and port. We use cut to split address and port on the : delimiter and keep only the latter (-f2). ss also outputs a header, which we get rid of by grepping for non-empty sequences of digits that are no longer than 5. We then comply with comm's requirement for sorted input by sorting and getting rid of duplicates with uniq.

Now we have a sorted list of open ports, that we can shuffle to then grab the first "$HOWMANY" ones with head -n.


Grab three random open ports in the private range (49152-65535)

comm -23 <(seq 49152 65535) <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n 3

could return, for example, three random ports between 49152 and 65535 (the actual numbers will vary).



  • Swap -t for -u in ss to get free UDP ports instead.
  • Drop shuf if you're not interested in grabbing random ports.


Get executed script's current working directory

 $ CWD=$(cd "$(dirname "$0")" && pwd)

— by dhsrocha on Jan. 22, 2018, 4:55 p.m.


Returns the directory containing the executed script, regardless of the directory Bash was in when the script containing this line was run.
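
A minimal sketch with a hypothetical path: saved as /opt/tools/where.sh, the script prints /opt/tools no matter which directory you invoke it from:

#!/bin/bash
CWD=$(cd "$(dirname "$0")" && pwd)
echo "$CWD"    # /opt/tools, even when run as ../tools/where.sh from elsewhere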


Blackhole ru zone

 $ echo "address=/ru/" | sudo tee /etc/NetworkManager/dnsmasq.d/dnsmasq-ru-blackhole.conf && sudo systemctl restart network-manager

— by olshek_ on Nov. 14, 2017, 2:12 p.m.


It creates the dnsmasq-ru-blackhole.conf file with one line that routes all domains of the ru zone to nowhere: with no IP given after the trailing slash, dnsmasq answers NXDOMAIN for every name in the zone.

You might use "address=/home.lab/127.0.0.1" to point all possible subdomains of home.lab to your localhost, or to some other IP in a cloud.
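
Once dnsmasq has reloaded, lookups for any name in the zone should come back empty (example.ru is just a stand-in):

$ host example.ru
Host example.ru not found: 3(NXDOMAIN)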


Retrieve dropped connections from firewalld journaling

 $ sudo journalctl -b | grep -o "PROTO=.*" | sed -r 's/(PROTO|SPT|DPT|LEN)=//g' | awk '{print $1, $3}' | sort | uniq -c

— by FoxBuru on Sept. 14, 2017, 5:10 p.m.


We take the output of journalctl since the last boot (the -b flag) and keep the text from PROTO= until the end of each line. Then we remove the identification tags (PROTO=/SPT=/DPT=/LEN=) and print just the protocol and destination port (columns 1 and 3). We sort the output so that uniq -c can aggregate and count the duplicates.
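
For instance, assuming a typical netfilter log entry (all values made up), the stages boil it down like this:

PROTO=TCP SPT=51515 DPT=22 WINDOW=29200 ...   # after grep -o "PROTO=.*"
TCP 51515 22 WINDOW=29200 ...                 # after sed removes the tags
TCP 22                                        # after awk '{print $1, $3}'
     42 TCP 22                                # after sort | uniq -c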


  • Only works on Linux
  • Assumes firewalld with logging set to ALL (see firewalld.conf for details)
  • Assumes journald is used for logging
  • Requires sudo privileges


Get the latest Arch Linux news

 $ w3m https://www.archlinux.org/ | sed -n "/Latest News/,/Older News/p" | head -n -1

— by Jab2870 on Aug. 15, 2017, 10:35 a.m.


w3m is a terminal web browser. We use it to go to https://www.archlinux.org/

We then use sed to capture the text between Latest News and Older News.

We then get rid of the last line which is Older News.


For this, w3m would need to be installed. It should be installable on most systems.

If Arch changes the format of their website significantly, this might stop working.


Make a new folder and cd into it.

 $ mkcd(){ NAME=$1; mkdir -p "$NAME"; cd "$NAME"; }

— by PrasannaNatarajan on Aug. 3, 2017, 6:49 a.m.


Paste this function in the ~/.bashrc file.


mkcd name1

This command will make a new folder called name1 and cd into it.

I find myself constantly using mkdir and going into the folder as the next step. It made sense for me to combine these steps into a single command.
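
A slightly more defensive variant of the same idea (the && keeps you where you are if mkdir fails, and -- guards against names starting with a dash):

mkcd(){ mkdir -p -- "$1" && cd -- "$1"; }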


Listen to the radio (radio2 in example)

 $ mpv http://a.files.bbci.co.uk/media/live/manifesto/audio/simulcast/hls/uk/sbr_med/llnw/bbc_radio_two.m3u8

— by Jab2870 on July 19, 2017, 2:44 p.m.


mpv is a command-line media player. You could also use vlc or any other media player that supports streams.

To find a stream for your favourite UK radio station, look here: UK Audio Streams. If you are outside of the UK, Google is your friend.


Requires an audio player that supports streams.


Go up to a particular folder

 $ alias ph='cd ${PWD%/public_html*}/public_html'

— by Jab2870 on July 18, 2017, 6:07 p.m.


I work on a lot of websites and often need to go up to the public_html folder.

This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.

alias ph='....': This creates a shortcut so that when command ph is typed, the part between the quotes is executed

cd ...: This changes directory to the directory specified

PWD: This is a global bash variable that contains the current directory

${...%/public_html*}: This removes /public_html and anything after it from the specified string

Finally, /public_html at the end is appended onto the string.

So, to sum up, when ph is run, we ask bash to change the directory to the current working directory with anything after public_html removed.


If I am in the directory ~/Sites/site1/public_html/test/blog/ I will be taken to ~/Sites/site1/public_html/

If I am in the directory ~/Sites/site2/public_html/test/sources/javascript/es6/ I will be taken to ~/Sites/site2/public_html/


Generate a sequence of numbers

 $ perl -e 'print "$_\n" for (1..10);'

— by abhinickz6 on May 30, 2017, 2:47 p.m.


Prints each number followed by a newline; the newline could be replaced by any character.
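
For example, swapping the newline for a comma:

$ perl -e 'print "$_," for (1..10);'
1,2,3,4,5,6,7,8,9,10,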


Delete static and dynamic arp for /24 subnet

 $ for i in {1..254}; do arp -d 192.168.0.$i; done

— by dennyhalim.com on Oct. 21, 2016, 5:07 a.m.


Simply loop from 1 to 254 and run arp -d for each IP address in the network.


Shuffle lines

 $ ... | perl -MList::Util=shuffle -e 'print shuffle <>;'

— by openiduser81 on Jan. 31, 2016, 9:02 p.m.


Sorting lines is easy: everybody knows the sort command.

But what if you want the opposite - a random order? The above perl one-liner does just that:

  • -MList::Util=shuffle load the shuffle function from the List::Util package
  • -e '...' execute Perl command
  • print shuffle <> call List::Util::shuffle for the lines coming from standard input, read by <>
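
A quick demonstration (the output order is random, so yours will differ):

$ seq 1 5 | perl -MList::Util=shuffle -e 'print shuffle <>;'
3
1
5
2
4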


Convert all flac files in dir to mp3 320kbps using ffmpeg

 $ for FILE in *.flac; do ffmpeg -i "$FILE" -b:a 320k "${FILE/%flac/mp3}"; done

— by Orkan on Sept. 20, 2015, 5:45 p.m.


It loops through all files in the current directory that have the flac extension and converts them to mp3 files with a bitrate of 320kbps, using ffmpeg and its default mp3 codec.
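
An equivalent spelling that strips the extension with suffix removal instead of pattern substitution:

for FILE in *.flac; do ffmpeg -i "$FILE" -b:a 320k "${FILE%.flac}.mp3"; done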


Preserve your fingers from cd ..; cd ..; cd..; cd..;

 $ upup(){ DEEP=$1; [ -z "${DEEP}" ] && { DEEP=1; }; for i in $(seq 1 ${DEEP}); do cd ../; done; }

— by andreaganduglia on June 9, 2015, 3:09 p.m.


Include this function in your .bashrc, and on the following line add alias up='upup'.

Now you are able to go back in your path simply with up N. So, for example:

Z:~$ cd /var/lib/apache2/fastcgi/dynamic/
Z:/var/lib/apache2/fastcgi/dynamic$ up 2
Z:/var/lib/apache2$ up 3 


Get number of all Python Behave scenarios (including all examples from Scenario Outlines)

 $ behave -d | grep "scenarios passed" | cut -d, -f4 | sed -e 's/^[[:space:]]*//' | sed 's/untested/scenarios/g'

— by openiduser188 on April 17, 2015, 2:21 p.m.


behave -d

-d stands for dry-run, so behave invokes formatters without executing the steps.

grep "scenarios passed"

Then we grep for the summary line containing the total number of scenarios

cut -d, -f4

then we cut the last field from the selected summary line, which shows how many scenarios were "untested" (in this context it means not executed, which is exactly what we need)

sed -e 's/^[[:space:]]*//'

Trim leading space

sed 's/untested/scenarios/g'

Lastly simple sed to replace untested with scenarios
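
Assuming the dry-run summary contains a line like the following (numbers made up), the pipeline reduces it step by step:

0 scenarios passed, 0 failed, 0 skipped, 12 untested    # after grep
 12 untested                                            # after cut -d, -f4
12 scenarios                                            # after the two seds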


Print a flat list of dependencies of a Maven project

 $ mvn dependency:list | sed -ne s/..........// -e /patterntoexclude/d -e s/:compile//p -e s/:runtime//p | sort | uniq

— by Janos on Sept. 22, 2014, 9:02 p.m.


The mvn dependency:list command produces a list of dependencies that's readable but not very program-friendly, looking like this:

[INFO] The following files have been resolved:
[INFO]    joda-time:joda-time:jar:2.3:compile
[INFO]    junit:junit:jar:4.11:test
[INFO]    log4j:log4j:jar:1.2.12:compile

sed can shave off the extra formatting to turn this into:

joda-time:joda-time:jar:2.3
log4j:log4j:jar:1.2.12

(the junit line is dropped, because its :test suffix is neither replaced nor printed)

  • -n don't print by default
  • -e s/..........// shave off the first 10 characters
  • -e /patterntoexclude/d you can exclude some unwanted patterns from the list using the d command like this
  • -e s/:compile//p -e s/:runtime//p replace and print :compile and :runtime

As multi-module projects may include duplicates, filter the result through | sort | uniq


Open Windows internet shortcut (*.url) files in firefox

 $ grep -i url='*' file.url | cut -b 5- | xargs firefox

— by tsjswimmer on Sept. 12, 2014, 12:06 a.m.


Extract urls from a *.url file and open in Firefox. (Note that *.url files in Windows are basically just text files, so they can be parsed with a few commands.)

  • grep extracts lines starting with url=
  • The -i flag is to ignore case
  • cut extracts the range of characters from the 5th until the end of lines
  • xargs calls Firefox with arguments taken from the output of the pipeline
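
A typical shortcut file (contents made up) and what the pipeline sees:

$ cat file.url
[InternetShortcut]
URL=https://example.com/

grep -i matches the URL= line despite the uppercase, and cut -b 5- drops the first four characters (url=), leaving https://example.com/ for xargs to hand to firefox.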


Remove all at jobs

 $ atq | sed 's_\([0-9]\{1,8\}\).*_\1_g' | xargs atrm

— by laurip on Sept. 10, 2014, 9:56 a.m.


It lists all jobs with atq, then parses the job id (a number of 1-8 digits) from each line, and forwards the ids via xargs to atrm.


Only works with job ids of up to 8 digits, but you can raise the 8 in the regex to get around that.


Deletes orphan vim undo files

 $ find . -type f -iname '*.un~' | while read UNDOFILE ; do FILE=$( echo "$UNDOFILE" | sed -r -e 's/.un~$//' -e 's&/\.([^/]*)&/\1&' ) ; [[ -e "$FILE" ]] || rm "$UNDOFILE" ; done

— by rafaeln on Sept. 2, 2014, 6:51 p.m.


find . -type f -iname '*.un~' finds every vim undo file and outputs the path to each on a separate line. At the beginning of the while loop, each of these lines is assigned to the variable $UNDOFILE with while read UNDOFILE, and in the body of the loop, the file each undo file should be tracking is computed and assigned to $FILE with FILE=$( echo "$UNDOFILE" | sed -r -e 's/.un~$//' -e 's&/\.([^/]*)&/\1&' ). If $FILE doesn't exist ([[ -e "$FILE" ]]), the undo file is removed (rm "$UNDOFILE").
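
A worked example of the sed with a hypothetical path:

$ echo "./sub/.notes.txt.un~" | sed -r -e 's/.un~$//' -e 's&/\.([^/]*)&/\1&'
./sub/notes.txt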


I'm not sure whether sed in every flavour of UNIX allows the -r flag. That flag can be removed, though, as long as the parentheses in -e 's&/\.([^/]*)&/\1&' are escaped (but I think the way it stands the one-liner is more readable).


Parse nginx statistics output

 $ i=$(curl -s server/nginx_stats); IFS=$'\n'; i=($i); a=${i[0]/Active connections: } && a=${a/ }; r=${i[2]# [0-9]* [0-9]* }; echo "Active: $a, requests: $r"

— by azat on June 20, 2014, 3:19 p.m.


  • First, download the nginx statistics with curl
  • IFS=$'\n' sets the field separator to newline only
  • i=($i) converts the output into an array of lines
  • a=${i[0]/Active connections: } extracts the number of active connections from the first line (a=${a/ } strips the leftover space)
  • r=${i[2]# [0-9]* [0-9]* } strips the first two counters from the third line, leaving the total number of requests
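
For reference, the nginx stub_status page typically looks like this (numbers made up), so ${i[0]} is the first line and ${i[2]} is the third:

Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106

With that input the one-liner prints: Active: 291, requests: 31070465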


Install profiling versions of all libghc dpkg packages

 $ sudo dpkg -l | grep libghc | grep "\-dev" | cut -d " " -f 3 | tr '\n' ' ' | sed -e 's/\-dev/\-prof/g' | xargs sudo apt-get install --yes

— by openiduser146 on May 26, 2014, 1:14 a.m.


dpkg -l lists all installed system packages.

grep libghc keeps only the Haskell library packages

grep "\-dev" keeps only the -dev packages; in their names, -dev can be replaced with -prof to get the name of the corresponding profiling package

cut -d " " -f 3 converts lines from ii libghc-packagename-dev amd64 description to libghc-packagename-dev

tr '\n' ' ' Replaces newlines with spaces, merging it all into one line

sed -e 's/\-dev/\-prof/g' Replaces -dev with -prof

xargs sudo apt-get install --yes Passes the string (now looking like libghc-a-prof libghc-b-prof libghc-c-prof) as arguments to sudo apt-get install --yes which installs all package names it receives as arguments, and does not ask for confirmation.
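
To see the per-line transformation in isolation (the package line is made up):

$ echo "ii  libghc-aeson-dev 2.1-3 amd64 JSON library" | cut -d " " -f 3 | sed -e 's/\-dev/\-prof/g'
libghc-aeson-prof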


Only works with apt (standard in Debian and Ubuntu)