We collect practical, well-explained Bash one-liners and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.


### Compute factorial of positive integer

` $ fac() { (echo 1; seq $1) | paste -s -d\* | bc; }`

#### Explanation

This one-liner defines a shell function named `fac` that computes the factorial of a positive integer. Once this function has been defined (you can put it in your `.bashrc`), you can use it as follows:

```
$ fac 10
3628800
```

Let's break the function down. Assume that we want to compute the factorial of 4. First, it `echo`'s 1, so that the factorial of 0 works correctly (because `seq 0` outputs nothing). Then, `seq` is used to generate a list of numbers:

```
$ (echo 1; seq 4)
1
1
2
3
4
```

Then, it uses `paste` to put these numbers on one line, with `*` (multiplication) as the separator:

```
$ (echo 1; seq 4) | paste -s -d\*
1*1*2*3*4
```

Finally, it passes this "equation" to `bc`, which evaluates it:

```
$ (echo 1; seq 4) | paste -s -d\* | bc
24
```

The actual function uses `$1` so that we can compute the factorial of any positive integer using `fac`.
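As a quick check (the result for 0 follows from the `echo 1`):

```
$ fac 0
1
$ fac 5
120
```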


### Find all files recursively with a specified string in the filename and output any lines containing a different string

` $ find . -name '*conf*' -exec grep -Hni 'matching_text' {} \; > matching_text.conf.list`

#### Explanation

`find . -name '*conf*'` In the current directory, recursively find all files with 'conf' in the filename. (The quotes prevent the shell from expanding the glob before `find` sees it.)

`-exec grep -Hni 'matching_text' {} \;` When a file is found matching the find above, execute the grep command to find all lines within the file containing 'matching_text'.

Here is what each of the `grep` switches does:

• `-i` ignore case
• `-H` print the filename
• `-n` print the line number

`> matching_text.conf.list` Direct the grep output to a text file named 'matching_text.conf.list'
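With GNU `grep` you can get a similar result without `find`, using its recursive mode (a sketch, assuming GNU grep's `-r` and `--include` options are available):

```
grep -Hrni --include='*conf*' 'matching_text' . > matching_text.conf.list
```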


### Remove .DS_Store files you happened to stage by mistake

` $ find . -name .DS_Store -exec git rm --ignore-unmatch --cached {} +`

#### Explanation

This removes the files from the Git index without deleting them from disk: `--cached` leaves the working-tree copies in place, and `--ignore-unmatch` keeps `git rm` from failing on files that aren't tracked.
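To keep `.DS_Store` files from being staged again in the future, you might also ignore them (a common follow-up, not part of the original one-liner):

```
echo .DS_Store >> .gitignore
```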


### Get the MAC address of the default interface on OS X

` $ netstat -rn | awk '/default/ { print $NF }' | head -1 | xargs -I {} ifconfig {} | awk '/ether/ {print $2}'`

#### Explanation

• `netstat -rn` get the routing table
• `awk '/default/ { print $NF }'` grab the default routes
• `head -1` limit to the first result (which is also the interface with the highest priority)
• `xargs -I {} ifconfig {}` use the result to get data from `ifconfig`
• `awk '/ether/ {print $2}'` grab the MAC address
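For illustration, the stages might resolve like this (the interface name and MAC address here are hypothetical):

```
$ netstat -rn | awk '/default/ { print $NF }' | head -1
en0
$ ifconfig en0 | awk '/ether/ {print $2}'
a4:5e:60:ab:cd:ef
```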

Tested on OS X.


### Convert a music file (mp3) to an mp4 video with a static image

` $ ffmpeg -loop_input -i cover.jpg -i soundtrack.mp3 -shortest -acodec copy output_video.mp4`

#### Explanation

Can come in handy when you'd like to post a song to YouTube or something similar :)

Can be easily wrapped up in a function:

```
function mp3tovidwithimg() {
    ffmpeg -loop_input -i "$1" -i "$2" -shortest -acodec copy "$3"
}
```

and used like this:

```
mp3tovidwithimg cover.jpeg music_track.mp3 output_vid.mp4
```
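Note that `-loop_input` has been removed from newer ffmpeg releases; on those versions the rough equivalent (an untested sketch) is:

```
ffmpeg -loop 1 -i cover.jpg -i soundtrack.mp3 -shortest -acodec copy output_video.mp4
```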


### Dump network traffic with tcpdump to a file with a timestamp in its filename

` $ date +"%Y-%m-%d_%H-%M-%Z" | xargs -I {} bash -c "sudo tcpdump -nq -s 0 -i eth0 -w ./dump-{}.pcap"`

#### Explanation

This will dump the traffic into a file with a timestamp in its name. Example filename:

`dump-2013-05-17_15-46-UTC.pcap`
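The `xargs` indirection can also be avoided with plain command substitution (a sketch that should behave the same way):

```
sudo tcpdump -nq -s 0 -i eth0 -w "./dump-$(date +"%Y-%m-%d_%H-%M-%Z").pcap"
```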


### Remove files and directories whose name is a timestamp older than a certain time

` $ ls | grep '....-..-..-......' | xargs -I {} bash -c "[[ x{} < x$(date -d '3 days ago' +%Y-%m-%d-%H%M%S) ]] && rm -rfv {}"`

#### Explanation

Suppose you have a backup directory with backup snapshots named by timestamp:

```
$ ls
2013-05-03-103022
2013-05-04-103033
2013-05-05-103023
2013-05-06-103040
2013-05-07-103022
```

You want to remove snapshots older than 3 days. The one-liner does it:

```
$ date
Tue May  7 13:50:57 KST 2013
$ ls | grep '....-..-..-......' | sort | xargs -I {} bash -c "[[ x{} < x$(date -d '3 days ago' +%Y-%m-%d-%H%M%S) ]] && rm -rfv {}"
removed directory: `2013-05-03-103022'
removed directory: `2013-05-04-103033'
```

#### Limitations

It doesn't work on OS X due to the differences between GNU `date` and BSD `date`.
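On OS X you could swap in BSD `date`'s `-v` flag for the relative date (an untested sketch):

```
ls | grep '....-..-..-......' | xargs -I {} bash -c "[[ x{} < x$(date -v-3d +%Y-%m-%d-%H%M%S) ]] && rm -rfv {}"
```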


### Create a thumbnail from the first page of a PDF file

` $ convert -thumbnail x80 file.pdf[0] thumb.png`

#### Explanation

• `convert` is part of the `ImageMagick` image manipulation toolkit
• `-thumbnail x80` means create a thumbnail image 80 pixels high; the width is chosen automatically to keep the image proportional
• The `[0]` is to create a thumbnail of the first page only; without it, a thumbnail image would be created for each page in the PDF file

To do this for all PDF files in a directory tree:

```
find /path/to/dir -name '*.pdf' -exec convert -thumbnail x80 {}[0] {}-thumb.png \;
```

#### Limitations

Requires the `ImageMagick` image manipulation tool.


### Create a visual report of the contents of a USB drive

` $ find /path/to/drive -type f -exec file -b '{}' \; -printf '%s\n' | awk -F , 'NR%2 {i=$1} NR%2==0 {a[i]+=$1} END {for (i in a) printf("%12u %s\n",a[i],i)}' | sort -nr`

#### Explanation

I have a bunch of USB volumes lying around and I would like to get a quick summary of what is on the drives: how much space is taken up by PDF, image, text, or executable files. This could be output as a text summary or a pie chart.

This one-liner produces a list like this:

```
  5804731229 FLAC audio bitstream data
   687302212 MPEG sequence
    99487460 data
    60734903 PDF document
    55905813 Zip archive data
    38430192 ASCII text
    32892213 gzip compressed data
    24847604 PNG image data
    16618355 XML 1.0 document text
    13876248 JPEG image data
```

The `find` command locates all regular files (`-type f`) below the given directory, which could be a mounted USB stick or any other directory. For each one, it runs the `file -b` command with the filename to print the file type; if this succeeds, it also prints the file size (`-printf '%s\n'`). This results in a list containing a file type on one line, followed by the file size on the next.

The `awk` script takes this as input. The GNU `file` command often produces very specific descriptions such as `GIF image data, version 87a, 640 x 480`. To generalize these, we set the field separator to a comma with the `-F` option; referencing `$1` then only uses what's to the left of the first comma, giving us a more generic description like `GIF image data`.

In the `awk` script, the first pattern-action pair `NR%2 {i=$1}` applies to each odd-numbered line, setting the variable `i` to the file type description. The even-numbered lines are handled by `NR%2==0 {a[i]+=$1}`, adding the value of the line (which is the file size) to the array member `a[i]`. This results in an array indexed by file type, with each member holding the cumulative sum of bytes for that type. The `END { ... }` pattern-action pair finally prints out a formatted list of the total size for each file type.
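For instance, the stream entering `awk` might look like this (hypothetical files):

```
PDF document, version 1.4
60734903
JPEG image data, JFIF standard 1.01
13876248
```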

At the end of the pipeline, `sort -nr` sorts the list numerically in reverse order, putting the file types with the largest totals at the top.

#### Limitations

This one-liner uses the `-b` option to `file` and the `-printf` primary of `find` - these are supported by the GNU utilities but may not work elsewhere. It can also take a long time to run, since it needs to open and analyze every file below the given directory.


### How to send an HTTP POST to a website with a file input field

` $ curl -L -v -F "value=@myfile" "http://domain.tld/whatever.php"`

#### Explanation

• `curl` read `man curl` if you need info on things like using cookies, etc.; you could also use `wget`
• `-L` follow redirects
• `-v` be verbose
• `-F` an input field
• `value=` the name of the input field
• `@myfile` the file you want uploaded
• `"http://domain.tld/whatever.php"` the url that will take the file


### Rename all files in the current directory by capitalizing the first letter of every word in the filenames

` $ ls | perl -ne 'chomp; $f=$_; tr/A-Z/a-z/; s/(?<![.'"'"'])\b\w/\u$&/g; print qq{mv "$f" "$_"\n}'`

#### Explanation

• When you pipe something to `perl -ne`, each input line is substituted into the `$_` variable. The `chomp`, `tr///`, and `s///` perl functions in the above command all operate on the `$_` variable by default.
• The `tr/A-Z/a-z/` converts all letters to lowercase.
• The regular expression pattern `(?<![.'])\b\w` matches any word character at a word boundary, except one preceded by a dot or a single quote.
• The messy-looking `'"'"'` in the middle of the regex pattern is not a typo, but necessary for inserting a single quote into the pattern. (The first single quote closes the single quote that started the perl command, followed by a single quote enclosed within double quotes, followed by another single quote to continue the perl command.) We could have used double quotes to enclose the perl command, but then we would have to escape all the dollar signs, which would make everything less readable.
• In the replacement string, `$&` is the letter that was matched, and the `\u` in front of it converts it to uppercase.
• `qq{}` in perl works like double quotes, and can make things easier to read, as in this case where we want to include double quotes within the quoted string.
• After the conversions we print a correctly escaped `mv` command. Pipe the whole output to `sh` to really execute the renames, as shown below.
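For example, a hypothetical dry run might print:

```
$ ls | perl -ne 'chomp; $f=$_; tr/A-Z/a-z/; s/(?<![.'"'"'])\b\w/\u$&/g; print qq{mv "$f" "$_"\n}'
mv "my summer photos.JPG" "My Summer Photos.jpg"
```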

#### Limitations

The above command will not work for files with double quotes in the name, and possibly other corner cases.


### Remove spaces recursively from all subdirectories of a directory

` $ find /path/to/dir -type d | tac | while read LINE; do target=$(dirname "$LINE")/$(basename "$LINE" | tr -d ' '); echo mv "$LINE" "$target"; done`

#### Explanation

• `find path_to_dir -type d` finds all the subdirectories
• `tac` reverses the order. This is important to make "leaf" directories come first!
• `target=...` stuff constructs the new name, removing spaces from the leaf component and keeping everything before that the same
• `echo mv ...` for safety you should run with the `echo` first; if the output looks good, remove the `echo` to really perform the renames

#### Limitations

On BSD systems (including OS X) there is no `tac`; you can use `tail -r` there instead.
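A hypothetical dry run might print the leaf directory first:

```
mv /path/to/dir/my dir/sub dir /path/to/dir/my dir/subdir
mv /path/to/dir/my dir /path/to/dir/mydir
```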


### Rename all files in a directory to lowercase names

` $ paste <(ls) <(ls | tr A-Z a-z) | while read OLD NEW; do echo mv -v $OLD $NEW; done`

#### Explanation

• `<(cmd)` is the filename of a named pipe (FIFO), where the named pipe is filled by the output of `cmd`
• `paste` puts together the named pipes to form two columns: first column with the original filenames, second column with the lowercased filenames
• `... | tr abc ABC` transforms stdin by replacing any characters in the first set of letters with the corresponding letters in the second set
• `while read OLD NEW; do ...; done` for each line it reads the first column into `$OLD` and the second column into `$NEW`

#### Limitations

• Won't work if there are spaces in a filename.
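A sketch that tolerates spaces, looping over the filenames directly instead of `paste`ing `ls` output (remove the `echo` to actually rename):

```
for OLD in *; do
    NEW=$(printf '%s' "$OLD" | tr A-Z a-z)
    [ "$OLD" != "$NEW" ] && echo mv -v "$OLD" "$NEW"
done
```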


### Scan entire Git repo for dangerous Amazon Web Service IDs

` $ git grep -Ew '[A-Z0-9]{20}'`

#### Explanation

Letting your AWS credentials escape is very dangerous! This simple tool makes sure none of your secrets make it into version control and therefore out into the hands of evil robots.

Use Git to quickly search for things that look like AWS IDs: a 20-character uppercase word. The `-w` adds word boundaries around the search pattern, and the `-E` makes it possible to use extended regex syntax, in this example the `{20}` count.
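A hypothetical hit might look like this (the key shown is AWS's documented example key):

```
$ git grep -Ew '[A-Z0-9]{20}'
config/settings.py:AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
```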


### Take values from a list (file) and search for them in another file

` $ for ITEM in $(cat values_to_search.txt); do (egrep "$ITEM" full_values_list.txt && echo "$ITEM found") | grep "found" >> exit_FOUND.txt; done`

#### Explanation

This line :) searches for values taken from a file (`values_to_search.txt`) by scanning a full list of values (`full_values_list.txt`). If a value is found, it is appended to a new file, `exit_FOUND.txt`.

Alternatively, we can search for values from list 1 that do NOT exist in list 2, as below:

```
for ITEM in $(cat values_to_search.txt); do (egrep "$ITEM" full_values_list.txt || echo "$ITEM not found") | grep "not found" >> exit_not_found.txt; done
```
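For the first case, `grep -f` (which reads its patterns from a file) is a simpler alternative, though it prints the matching lines rather than the values that matched (a sketch):

```
grep -f values_to_search.txt full_values_list.txt
```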

#### Limitations

None known.


### Have script run itself in a virtual terminal

` $ tty >/dev/null || { urxvt -e /bin/sh -c "tty >/tmp/proc$$; while test x; do sleep 1; done" & while test ! -f /tmp/proc$$; do sleep .1; done; FN=$(cat /tmp/proc$$); rm /tmp/proc$$; exec >$FN 2>$FN <$FN; }`

#### Explanation

• We begin by testing whether the script is attached to a terminal, using `tty`.
• If it is not, we start a terminal that runs `tty` and saves its output to a file. `$$` was expanded by the original script and is its PID. The terminal is opened in the background using `&`, and the original script waits for the file to appear, then reads and removes it.
• Finally, the main command is a special form of the bash builtin `exec` that contains nothing but redirections (of stdout, stderr, and stdin), so they apply to every command in the rest of the script file.


### Big CSV > batches > JSON array > CURL POST data with sleep

` $ cat post-list.csv | split -l 30 - --filter='jq -R . | jq --slurp -c .' | xargs -d "\n" -I % sh -c 'curl -H "Content-Type: application/json" -X POST -d '"'"'{"type":1,"entries":%}'"'"' http://127.0.0.1:8080/purge-something && sleep 30'`

#### Explanation

`post-list.csv` contains a list of URLs in my example.

• `split -l 30` Split by 30 lines

• `-` Use stdin as input for split

• `--filter` Couldn't find a way to easily pipe to stdout from split, hence --filter

• `jq -R .` From the jq manual - Don't parse the input as JSON. Instead, each line of text is passed to the filter as a string

• `jq --slurp -c .` From the jq manual - Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once. `-c` makes it easier to pipe and use it in the `xargs` that follows.

• `xargs -d "\n" -I % sh -c` Execute a command for each array. Use "\n" as delimiter. Use % as a placeholder in the command that follows.

• Single quotes inside `sh -c ' ... '` are escaped as `'"'"'` single-double-single-double-single. You can do whatever you need to inside `sh -c ' ... && sleep 123'`

#### Limitations

You need `jq` installed, for example in Debian / Ubuntu:

```
apt-get install jq
```

I suspect the input file (`post-list.csv`) must not contain double or single quotes, but I haven't tested it.
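To see what each batch looks like after the two `jq` calls (a small demo):

```
$ printf 'url-1\nurl-2\n' | jq -R . | jq --slurp -c .
["url-1","url-2"]
```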


### List all packages with at least a class defined in a JAR file

` $ jar tf "$1" | grep '/.*\.class$' | xargs dirname | sort -u | tr / .`

#### Explanation

The `jar` command allows you to read or manipulate JAR (Java ARchive) files, which are ZIP files that usually contain classfiles (Java compiled bytecode files) and possibly manifests and configuration files. We specify that we want to list file contents (`t`) that we provide as an argument (`f`, otherwise the `jar` will be read from `stdin`).

From the output, we keep only the paths that contain a classfile (`grep`), then take the path of the package that contains each one (`xargs dirname`), get the unique, sorted paths, and translate `/`s to `.`s (to display the package names as they would be written in Java syntax).
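For a hypothetical `demo.jar` this might print:

```
$ jar tf demo.jar | grep '/.*\.class$' | xargs dirname | sort -u | tr / .
com.example.app
com.example.util
```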

#### Limitations

Will only exhaustively list the packages with a defined class for languages that require packages to map to the directory structure (e.g.: Java does, Scala doesn't). If this convention is respected, the command will output an exhaustive list of packages nonetheless.


### Output an arbitrary number of open TCP or UDP ports in an arbitrary range

` $ comm -23 <(seq "$FROM" "$TO") <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n "$HOWMANY"`

#### Explanation

Originally published (by me) on unix.stackexchange.com.

`comm` is a utility that compares sorted lines in two files. It outputs three columns: lines that appear only in the first file, lines that appear only in the second one, and common lines. By specifying `-23` we suppress the latter two columns and keep only the first one. We can use this to obtain the difference of two sets, expressed as a sequence of text lines. I learned about `comm` here.

The first file is the range of ports that we can select from. `seq` produces a sorted sequence of numbers from `$FROM` to `$TO`. The result is piped to `comm` as the first file using process substitution.

The second file is the sorted list of ports in use, which we obtain by calling the `ss` command (with `-t` meaning TCP ports, `-a` meaning all, established and listening, and `-n` meaning numeric, i.e. don't try to resolve, say, `22` to `ssh`). We then pick only the fourth column with `awk`, which contains the local address and port. We use `cut` to split address and port on the `:` delimiter and keep only the latter (`-f2`). `ss` also outputs a header, which we get rid of by `grep`ping for non-empty sequences of at most 5 digits. We then comply with `comm`'s requirement by `sort`ing and getting rid of duplicates with `uniq`.

Now we have a sorted list of open ports, which we can `shuf`fle to then grab the first `"$HOWMANY"` ones with `head -n`.

#### Example

Grab three random open ports in the private range (49152-65535):

```
comm -23 <(seq 49152 65535) <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n 3
```

could return for example

```
54930
57937
51399
```

#### Notes

• switch `-t` with `-u` in `ss` to get free UDP ports instead.
• drop `shuf` if you're not interested in grabbing a random port


### Blackhole ru zone

` \$ echo "address=/ru/0.0.0.0" | sudo tee /etc/NetworkManager/dnsmasq.d/dnsmasq-ru-blackhole.conf && sudo systemctl restart network-manager`

#### Explanation

It creates a `dnsmasq-ru-blackhole.conf` file with one line to route all domains of the `ru` zone to `0.0.0.0`.

You might use `"address=/home.lab/127.0.0.1"` to point `allpossiblesubdomains.home.lab` to your localhost or some other IP in a cloud.
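A hypothetical check after the restart (this assumes your system resolver actually goes through NetworkManager's dnsmasq):

```
$ getent hosts example.ru
0.0.0.0         example.ru
```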


### Retrieve dropped connections from firewalld journaling

` $ sudo journalctl -b | grep -o "PROTO=.*" | sed -r 's/(PROTO|SPT|DPT|LEN)=//g' | awk '{print $1, $3}' | sort | uniq -c`

#### Explanation

We take the output of `journalctl` since the last boot (`-b` flag) and keep everything from `PROTO=` until the end of the line. Then we remove the identification tags (`PROTO=`/`SPT=`/`DPT=`/`LEN=`) and print just the protocol and destination port (columns 1 and 3). We sort the output so that the final `uniq -c` call can aggregate the duplicates.
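The aggregated output might look like this (hypothetical counts):

```
     42 TCP 22
     17 UDP 5353
```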

#### Limitations

• Only works on Linux
• Requires `firewalld` with `logging` set to `ALL` (see `firewalld.conf` for details)
• Requires `journald` for logging
• Your user needs `sudo` privileges


### Kill a process running on port 8080

` $ lsof -i :8080 | awk '{print $2}' | tail -n 1 | xargs kill`

#### Explanation

`lsof` lists open files (ls-o-f, get it?). `lsof -i :8080` lists open files on addresses ending in `:8080`. The output looks like this:

```
COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
chrome  2619 qymspace  149u  IPv4  71595      0t0  TCP localhost:53878->localhost:http-alt (CLOSE_WAIT)
```

We pipe this input through `awk` to print column 2 using the command `awk '{print $2}'`, producing the output:

```
PID
2619
```

To remove the word `PID` from this output we use `tail -n 1` to grab the last row, `2619`.

We can now pass this process id to the `kill` command to kill it.
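A shorter equivalent uses `lsof`'s `-t` flag, which prints bare PIDs, so the `awk` and `tail` steps aren't needed:

```
kill $(lsof -t -i :8080)
```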


### Get the latest Arch Linux news

` $ w3m https://www.archlinux.org/ | sed -n "/Latest News/,/Older News/p" | head -n -1`

#### Explanation

`w3m` is a terminal web browser. We use it to go to `https://www.archlinux.org/`

We then use `sed` to capture the text between `Latest News` and `Older News`.

We then get rid of the last line which is `Older News`.

#### Limitations

For this, `w3m` would need to be installed. It should be installable on most systems.

If Arch changes the format of their website significantly, this might stop working.


### Listen to BBC Radio 2 from the terminal

` $ mpv http://a.files.bbci.co.uk/media/live/manifesto/audio/simulcast/hls/uk/sbr_med/llnw/bbc_radio_two.m3u8`

#### Explanation

`mpv` is a terminal media player. You could also use VLC or any other media player that supports streams.

To find a stream for your favourite UK radio station, look here: UK Audio Streams. If you are outside of the UK, Google is your friend.

#### Limitations

Requires an audio player that supports streams.


### Go up to a particular folder

` $ alias ph='cd ${PWD%/public_html*}/public_html'`

#### Explanation

I work on a lot of websites and often need to go up to the `public_html` folder.

This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.

`alias ph='....'`: This creates a shortcut so that when the command `ph` is typed, the part between the quotes is executed

`cd ...`: This changes directory to the directory specified

`PWD`: This is a global bash variable that contains the current directory

`${...%/public_html*}`: This removes `/public_html` and anything after it from the specified string

Finally, `/public_html` at the end is appended onto the string.

So, to sum up, when `ph` is run, we ask bash to change directory to the current working directory with anything after `public_html` removed.
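The parameter expansion can be tried out directly (hypothetical path):

```
$ p=/home/me/Sites/site1/public_html/test/blog
$ echo "${p%/public_html*}/public_html"
/home/me/Sites/site1/public_html
```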

#### Examples

If I am in the directory `~/Sites/site1/public_html/test/blog/` I will be taken to `~/Sites/site1/public_html/`

If I am in the directory `~/Sites/site2/public_html/test/sources/javascript/es6/` I will be taken to `~/Sites/site2/public_html/`