We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

0

Find all log files modified more than 24 hours ago and compress each into a zip file

 $ find . -type f -mtime +1 -name "*.log" -exec zip -m {}.zip {} \; >/dev/null &

— by TrongTan124 on Nov. 9, 2018, 10:04 a.m.

Explanation

-type f: match only regular files

-mtime +n: the file's data was last modified more than n*24 hours ago (here n=1)

-name "*.log": match files with the .log extension; replace with another pattern as needed

zip -m {}.zip {}: compress each file into its own zip archive; -m moves the file into the archive, i.e. deletes the original

>/dev/null &: discard the command's output and run it in the background
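
If you want everything in a single archive instead of one zip per log file, Info-ZIP's zip can read the file list from standard input with -@; a hedged sketch (logs.zip is an arbitrary name):

 find . -type f -mtime +1 -name "*.log" -print | zip -m logs.zip -@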

2

Below is a Unix command to list all the IP addresses connected to your server on port 80.

 $ netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head

— by Goeks1 on Sept. 26, 2018, 11:10 p.m.

Explanation

This command is useful for detecting whether your server is under attack, so that you can null-route the offending IPs.

Output:

 97 114.198.236.100
 56 67.166.157.194
 44 170.248.43.76
 38 141.0.9.20
 37 49.248.0.2
 37 153.100.131.12
 31 223.62.169.73
 30 65.248.100.253
 29 203.112.82.128
 29 182.19.66.187

Source: https://www.mkyong.com/linux/list-all-ip-addresses-connected-to-your-server/

Limitations

netstat is not installed by default on Debian Stretch; it is part of the net-tools package:

apt-get install net-tools
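
On systems without net-tools, ss from the iproute2 package can do the same job; a rough equivalent, assuming the default ss -tn layout where the peer address is the fifth column:

ss -tn | grep ':80' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head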

0

Random 6-digit number

 $ python -c 'import random; print(random.randint(0,1000000-1))'

— by johntellsall on Sept. 19, 2018, 10:42 p.m.

Explanation

Stack Overflow has a dozen different ways to generate random numbers, all of which fall apart after 3-4 digits. This solution requires Python to be installed, but it is simple and direct.

Limitations

Requires Python; it's not a pure-Bash solution.
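
Two hedged alternatives that avoid the Python dependency:

# GNU coreutils: pick one integer from 0..999999
shuf -i 0-999999 -n 1

# pure Bash: combine two 15-bit $RANDOM values; the tiny modulo bias is negligible here
echo $(( (RANDOM * 32768 + RANDOM) % 1000000 ))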

0

Very fast history search with Ripgrep

 $ rh() { rg "$1" ~/.bash_history; }

— by johntellsall on Sept. 18, 2018, 7 p.m.

Explanation

I search my history a lot. At 65k lines, this can take a while. Recently I've installed Ripgrep, which is much faster than my previous favorite Ack, and faster than good old Grep/Egrep.

After defining the above function, history searches are much faster!

Example: I forgot how to hit my service's endpoint. Instead of grepping through the Bash history file, I use my new Ripgrep-powered function:

rh 'curl.*health'

Limitations

This function only works if you have Ripgrep installed.

It's not really needed unless you have "infinite history" turned on.
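
For reference, "infinite history" is commonly enabled with something like this in ~/.bashrc (Bash 4.3+ accepts -1 for unlimited; treat the exact values as a sketch):

# unlimited in-memory and on-disk history
HISTSIZE=-1
HISTFILESIZE=-1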

0

While loop to pretty print system load (1, 5 & 15 minutes)

 $ while :; do date; awk '{printf "1 minute load: %.2f\n", $1; printf "5 minute load: %.2f\n", $2; printf "15 minute load: %.2f\n", $3}' /proc/loadavg; sleep 3; done

— by Janos on Sept. 5, 2018, 8:41 p.m.

Explanation

while :; do ...; done is an infinite loop. You can interrupt it with Control-c.

The file /proc/loadavg contains a line like this:

0.01 0.04 0.07 1/917 25383

Where the first 3 columns represent the 1, 5, 15 minute loads, respectively.

In the infinite loop we print the current date, then the load values nicely formatted (ignore the other values), then sleep for 3 seconds, and start again.

Limitations

/proc/loadavg is only available in Linux.
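
A hedged, more portable variant parses uptime instead, which also works on macOS (where the label is "load averages:"):

while :; do date; uptime | awk -F 'load averages?: ' '{print $2}'; sleep 3; done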

1

Scan entire Git repo for dangerous Amazon Web Service IDs

 $ git grep -Ew '[A-Z0-9]{20}'

— by Janos on Sept. 5, 2018, 8:30 p.m.

Explanation

Letting your AWS credentials escape is very dangerous! This simple tool makes sure none of your secrets make it into version control and therefore out into the hands of evil robots.

Use Git to quickly search for things that look like AWS IDs: a 20-character uppercase word. The -w adds word boundaries around the search pattern, and the -E makes it possible to use extended regex syntax, in this example the {20} repetition count.
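
If you want this check to run automatically before every commit, the same pattern can go in a Git pre-commit hook; a minimal sketch (the hook path is standard Git, everything else is an assumption about your workflow):

#!/bin/sh
# save as .git/hooks/pre-commit and make it executable
if git diff --cached -U0 | grep -Ew '[A-Z0-9]{20}'; then
    echo "Possible AWS access key ID in staged changes; aborting commit." >&2
    exit 1
fi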

0

Scan entire Git repos for dangerous Amazon Web Service IDs

 $ git ls-tree --full-tree -r --name-only HEAD | xargs egrep -w '[A-Z0-9]{20}'

— by johntellsall on Aug. 31, 2018, 10:29 p.m.

Explanation

Letting your AWS credentials escape is very dangerous! This simple tool makes sure none of your secrets make it into version control and therefore out into the hands of evil robots.

Use Git to quickly list all files in the repos. Then, take this list and search for things that look like AWS IDs: a 20-character uppercase word.

0

While loop to pretty print system load (1, 5 & 15 minutes)

 $ while [ 1 == 1 ]; do cat /proc/loadavg | awk '{printf "1 minute load: %.2f\n", $(NF-4)}' && cat /proc/loadavg | awk '{printf "5 minute load: %.2f\n", $(NF-3)}' && cat /proc/loadavg | awk '{printf "15 minute load: %.2f\n", $(NF-2)}'; sleep 3; date; done

— by peek2much3 on Aug. 30, 2018, 8:54 a.m.

Explanation

top is great, but this makes the output easier to read and easy to pipe to a text file for historical review. Kill it with Ctrl-C.

0

Dump all AWS IAM users/roles to a Terraform file for editing / reusing in another environment

 $ echo iamg iamgm iamgp iamip iamp iampa iamr iamrp iamu iamup | AWS_PROFILE=myprofile xargs -n1  terraforming

— by johntellsall on Aug. 28, 2018, 12:38 a.m.

Explanation

Amazon Web Services (AWS) uses a collection of "IAM" resources to create users and related objects in the system. This one-liner scrapes all the relevant info and puts it into Terraform code. This lets us audit our users and groups, and re-use them in another environment!
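
Because xargs -n1 runs terraforming once per word, the pipeline expands to a series of calls like the following (the .tf file names are arbitrary additions; the original prints to stdout instead):

AWS_PROFILE=myprofile terraforming iamg > iam_groups.tf
AWS_PROFILE=myprofile terraforming iamu > iam_users.tf
# ...and so on for each of the listed resource types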

0

Organise images by portrait and landscape

 $ mkdir "portraits"; mkdir "landscapes"; for f in ./*.jpg; do WIDTH=$(identify -format "%w" "$f")> /dev/null; HEIGHT=$(identify -format "%h" "$f")> /dev/null; if [[ "$HEIGHT" > "$WIDTH" ]]; then mv "$f" portraits/ ; else mv "$f" landscapes/ ; fi; done

— by Jab2870 on Aug. 23, 2018, 2:09 p.m.

Explanation

  1. First makes directories for portraits and landscapes
  2. Loops through all files in the current directory with the extension .jpg; feel free to change this to .png or .jpeg if necessary
    1. Gets the width and height of the current image using the identify command
    2. If height > width, moves it to the portraits folder, otherwise moves it to the landscapes folder

Limitations

This relies on the identify command, which comes with ImageMagick and is available on most systems.

This does not check for square images, although it could easily be extended to see if HEIGHT and WIDTH are equal; see the sketch below. Square images are currently put with the landscape images.
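
A hedged sketch of that extension, under the same ImageMagick assumption, giving square images their own directory:

mkdir -p portraits landscapes squares
for f in ./*.jpg; do
  WIDTH=$(identify -format "%w" "$f")
  HEIGHT=$(identify -format "%h" "$f")
  if (( HEIGHT > WIDTH )); then mv "$f" portraits/
  elif (( WIDTH > HEIGHT )); then mv "$f" landscapes/
  else mv "$f" squares/
  fi
done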

0

Create a txt file with the first 100000 rows of each .full file

 $ for FILE in *.full; do split -l 100000 "$FILE"; mv -f xaa "${FILE%.full}.txt"; rm -f x*; done

— by Kifli88 on Aug. 22, 2018, 2:02 p.m.

Explanation

The for loop goes through every file that ends with ".full" and splits each one into files of 100000 rows. It renames the first chunk to the input name, deleting the original extension and adding ".txt". As a last step it deletes the rest of the split files.
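
If you have GNU split, a variant that keeps every chunk (not just the first) with numeric suffixes and a .txt extension, as a sketch:

for FILE in *.full; do split -l 100000 -d --additional-suffix=.txt "$FILE" "${FILE%.full}_"; done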

0

List processes ordered by their number of open files

 $ ps -ef | awk '{ print $2 }' | tail -n +2 | while read pid; do echo "$pid	$(lsof -p $pid | wc -l)"; done | sort -r -n -k 2 | while read pid count; do echo "$pid	$count	$(ps -o command= -p $pid)"; done

— by cddr on Aug. 22, 2018, 1:21 p.m.

Explanation

Combines ps, lsof, and sort: lists every PID, counts each one's open files with lsof, sorts by that count in descending order, then appends the full command line to make the output readable.

0

Remove all containers from a specific network (Docker)

 $ docker ps -a -f network=$NETWORK --format='{{.ID}}' | xargs docker rm -f

— by gatero on Aug. 17, 2018, 4:38 p.m.

Explanation

docker ps -a -f network=$NETWORK --format='{{.ID}}' returns the IDs of all containers attached to the network and passes the output to xargs docker rm -f, which stops and deletes each container.

0

Bring up all Docker services in detached mode across all immediate subdirectories

 $ for dir in */; do (cd "$dir" && docker-compose up -d); done

— by gatero on Aug. 17, 2018, 4:31 p.m.

Explanation

Suppose you are in a directory that contains many subdirectories, each with its own docker-compose file, and instead of bringing them up one by one manually you want to run them all at once: this is a helpful command for that purpose.
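
A variant that avoids parsing ls and only descends into subdirectories that actually contain a compose file (a sketch; your compose files may be named docker-compose.yaml instead):

for f in */docker-compose.yml; do (cd "${f%/*}" && docker-compose up -d); done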

0

Find and replace string inside specific files

 $ grep -ril "$SEARCH_PATTERN" src | xargs sed -i "s/$FIND_PATTERN/$REPLACE_PATTERN/g"

— by gatero on Aug. 17, 2018, 4:18 p.m.

Explanation

This command searches for files that contain a specific string, then finds a pattern in those files and replaces it. The patterns must be in double quotes (as above) so that the shell expands the variables, and the file list must be handed to sed via xargs, since sed -i does not read file names from standard input.
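
For example, with concrete (hypothetical) values, renaming a function across a source tree; note that sed -i edits files in place, so test on a copy first:

SEARCH_PATTERN='oldFunctionName'
FIND_PATTERN='oldFunctionName'
REPLACE_PATTERN='newFunctionName'
grep -ril "$SEARCH_PATTERN" src | xargs sed -i "s/$FIND_PATTERN/$REPLACE_PATTERN/g"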

0

Puppet/Bash: test whether two JSON objects are equal.

 $ unless => "client_remote=\"$(curl localhost:9200/_cluster/settings | python -c \"import json,sys;obj=json.load(sys.stdin);print(obj['persistent']['search']['remote'])\")\"; new_remote=\"$( echo $persistent_json | python -c \"import json,sys;obj=json.load(sys.stdin);print(obj['persistent']['search']['remote'])\")\"; [ \"$client_remote\" = \"$new_remote\" ]",

— by cjedwa on July 27, 2018, 8:37 p.m.

Explanation

One JSON object is provided by a Puppet dictionary, the other is grabbed from the Elasticsearch REST API. The command only runs if the two don't match. I had issues getting jq to sort properly, so I used Python.

0

Print wifi access points sorted by signal

 $ iw dev IFACE scan | egrep "SSID|signal" | awk -F ":" '{print $2}' | sed 'N;s/\n/:/' | sort

— by kazatca on June 16, 2018, 5:37 a.m.

Explanation

  • iw dev IFACE scan gets info about the APs in range
  • egrep "SSID|signal" keeps only the name and signal lines
  • awk -F ":" '{print $2}' cuts off the field labels
  • sed 'N;s/\n/:/' joins each pair of lines into a single line
  • sort sorts by signal, ascending

IFACE - wifi interface (like wlan0)

2

Kill a process running on port 8080

 $ lsof -i :8080 | awk '{l=$2} END {print l}' | xargs kill

— by jamestomasino on June 15, 2018, 4:18 a.m.

Explanation

As before, we're using lsof to find the PID of the process running on port 8080. We use awk to store the second column of each line into a variable l, overwriting it with each line. In the END clause, we're left with the second column of only the last line. xargs passes that as a parameter to the kill command.

The only notable difference from the command listed above is the use of awk to also perform the tail -n 1 step. This awk pattern matches the intended behavior of the version that used tail. To kill all processes on that port, you could use the NR>1 clause instead of the variable trick, as shown below.
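
That NR>1 variant, spelled out (sort -u removes duplicate PIDs before they reach kill):

lsof -i :8080 | awk 'NR>1 {print $2}' | sort -u | xargs kill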

0

Delete all untagged Docker images

 $ docker images -q -f dangling=true | xargs --no-run-if-empty --delim='\n' docker rmi

— by penguincoder on June 15, 2018, 1:12 a.m.

Explanation

It does not return a failing exit code if there are no images removed. It should always succeed unless there was an actual problem removing a Docker image.

Limitations

This only works with the GNU version of xargs (because of --no-run-if-empty); BSD xargs has no equivalent that I know of.
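
On Docker 1.13 and later the same cleanup is built in, which sidesteps the xargs portability issue entirely:

docker image prune -f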

1

Take values from a list (file) and search them on another file

 $ for ITEM in $(cat values_to_search.txt); do (egrep "$ITEM" full_values_list.txt && echo "$ITEM found") | grep "found" >> exit_FOUND.txt; done

— by ManuViorel on May 16, 2018, 3:20 p.m.

Explanation

This line :) searches for the values listed in one file (values_to_search.txt) by scanning a full list of values (full_values_list.txt). If a value is found, it is appended to a new file, exit_FOUND.txt.

Alternatively, we can search for values from list 1 which do NOT exist in list 2, as below:

for ITEM in $(cat values_to_search.txt); do (egrep "$ITEM" full_values_list.txt || echo "$ITEM not found") | grep "not found" >> exit_not_found.txt; done
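
A hedged alternative without the loop: grep can read all the search values from a file at once (-f), treat them as fixed strings (-F), and print only the matched values (-o):

grep -oFf values_to_search.txt full_values_list.txt | sort -u > exit_FOUND.txt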

Limitations

No limitations

-1

Delete all untagged Docker images

 $ docker rmi $(docker images -f "dangling=true" -q)

— by stefanobaghino on April 27, 2018, 2:50 p.m.

Explanation

docker images outputs all images currently available. By specifying -f "dangling=true" we restrict the list to "dangling" images (i.e. untagged). The -q option enables quiet mode, which limits the output to the image hashes, which are then fed directly into docker rmi, which removes the images with the corresponding hashes.

1

Have script run itself in a virtual terminal

 $ tty >/dev/null || { urxvt -e /bin/sh -c "tty >/tmp/proc$$; while test x; do sleep 1; done" & while test ! -f /tmp/proc$$; do sleep .1; done; FN=$(cat /tmp/proc$$); rm /tmp/proc$$; exec >$FN 2>$FN <$FN; }

— by openiduser111 on March 9, 2018, 2:56 a.m.

Explanation

  • We begin by testing with tty whether the script is running in a terminal.
  • If it is not, we start a terminal that runs tty and saves its output to a file named after $$, which was set by the original script and is its PID. The terminal is opened in the background using &; the original script waits for the file to appear, then reads and removes it.
  • Finally, the main command is a special syntax of the bash builtin command exec that contains nothing but redirections (of stdout, stderr, and stdin), so they will apply to every command in the rest of the script file.

1

Big CSV > batches > JSON array > CURL POST data with sleep

 $ cat post-list.csv | split -l 30 - --filter='jq -R . | jq --slurp -c .' | xargs -d "\n" -I % sh -c 'curl -H "Content-Type: application/json" -X POST -d '"'"'{"type":1,"entries":%}'"'"' http://127.0.0.1:8080/purge-something && sleep 30'

— by pratham2003 on March 7, 2018, 12:12 p.m.

Explanation

post-list.csv contains a list of URLs in my example.

  • split -l 30 Split by 30 lines

  • - Use stdin as input for split

  • --filter Couldn't find a way to easily pipe to stdout from split, hence --filter

  • jq -R . From the jq manual - Don’t parse the input as JSON. Instead, each line of text is passed to the filter as a string

  • jq --slurp -c . From the jq manual - Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once. -c makes it easier to pipe and use it in the xargs that follows.

  • xargs -d "\n" -I % sh -c Execute a command for each array. Use "\n" as delimiter. Use % as a placeholder in the command that follows.

  • Single quotes inside sh -c ' ... ' are escaped as '"'"' single-double-single-double-single. You can do whatever you need to inside sh -c ' ... && sleep 123'
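
To see what the two jq stages produce in isolation, here is the filter applied to two arbitrary sample lines (output shown as a comment):

printf 'a\nb\n' | jq -R . | jq --slurp -c .
# ["a","b"]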

Limitations

You need jq installed, for example in Debian / Ubuntu:

apt-get install jq

See also https://stedolan.github.io/jq/manual/

I suspect the input file (post-list.csv) must not contain double or single quotes, but I haven't tested it.

1

List all packages with at least a class defined in a JAR file

 $ jar tf "$1" | grep '/.*\.class$' | xargs dirname | sort -u | tr / .

— by stefanobaghino on Feb. 19, 2018, 12:13 p.m.

Explanation

The jar command allows you to read or manipulate JAR (Java ARchive) files, which are ZIP files that usually contain classfiles (Java compiled bytecode files) and possibly manifests and configuration files. We specify that we want to list file contents (t) that we provide as an argument (f, otherwise the jar will be read from stdin).

From the output, we keep only the paths that contain a classfile (grep), then take the path of the package that contains each one (xargs dirname), keep the unique, sorted paths (sort -u), and translate /s to .s (to display the names as they would appear in Java syntax).

Limitations

Will only exhaustively list the packages with a defined class for languages that require packages to map to the directory structure (e.g.: Java does, Scala doesn't). If this convention is respected, the command will output an exhaustive list of packages nonetheless.
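
Since the one-liner references "$1", it is meant to live in a script or function; a small wrapper sketch (the function name and jar file are placeholders):

jarpkgs() { jar tf "$1" | grep '/.*\.class$' | xargs dirname | sort -u | tr / .; }
jarpkgs myapp.jar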

1

Output an arbitrary number of open TCP or UDP ports in an arbitrary range

 $ comm -23 <(seq "$FROM" "$TO") <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n "$HOWMANY"

— by stefanobaghino on Feb. 9, 2018, 3:51 p.m.

Explanation

Originally published (by me) on unix.stackexchange.com.

comm is a utility that compares sorted lines in two files. It outputs three columns: lines that appear only in the first file, lines that appear only in the second one, and common lines. By specifying -23 we suppress the latter two columns and keep only the first one. We can use this to obtain the difference of two sets, expressed as a sequence of text lines.

The first file is the range of ports that we can select from. seq produces a sorted sequence of numbers from $FROM to $TO. The result is piped to comm as the first file using process substitution.

The second file is the sorted list of ports that we obtain by calling the ss command (with -t meaning TCP ports, -a meaning all - established and listening - and -n numeric - don't try to resolve, say, 22 to ssh). We then pick only the fourth column with awk, which contains the local address and port. We use cut to split address and port on the : delimiter and keep only the latter (-f2). ss also outputs a header, which we get rid of by grepping for sequences of one to five digits. We then comply with comm's requirement of sorted input by sorting and getting rid of duplicates with uniq.

Now we have a sorted list of open ports, that we can shuffle to then grab the first "$HOWMANY" ones with head -n.

Example

Grab three random open ports in the private range (49152-65535):

comm -23 <(seq 49152 65535) <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort | uniq) | shuf | head -n 3

could return for example

54930
57937
51399

Notes

  • switch -t with -u in ss to get free UDP ports instead.
  • drop shuf if you're not interested in grabbing a random port