We aim to collect practical, well-explained bash one-liners, and promote best practices in shell scripting. To get the latest bash one-liners, follow @bashoneliners on Twitter, or subscribe to our RSS feed. If you find any problems, report a bug on GitHub, send an email to our public mailing list, or email us privately.

0

Generate random text

 $ tr -dc a-z1-4 </dev/urandom | tr 1-2 ' \n' | awk 'length==0 || length>50' | tr 3-4 ' ' | sed 's/^ *//' | cat -s | fmt

— by bkmeneguello on July 31, 2014, 10:45 p.m.

Explanation

Generates paragraph-like random text. The output must be limited by another command (see the example below), otherwise it will keep generating text forever.
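
For example, piping the whole pipeline through head is one way to bound the output (here to 20 lines of formatted text):

 $ tr -dc a-z1-4 </dev/urandom | tr 1-2 ' \n' | awk 'length==0 || length>50' | tr 3-4 ' ' | sed 's/^ *//' | cat -s | fmt | head -n 20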

Sample:

aelgjcrf lynxftuoygl bylu j qjweyeubuttnfgzcalktsbqzbnxdugzdg cevnohgeqgfsn ogdxwstdm wjdkquk ksuwv lbxgqttk oofhbokkinmvponagy edzwydnmd g pts in mfatjihpvbxjwrauwotlwykqjd pdwuunrtwqwd kyqr tjnctkba njssvqunzis nzymtcuezl uoti gtlbhnvi xljcogyipbxldo wguikysaqzyvvlz xce soumevlovnekfiosk ntalejuevbnthoyzybhvmnwkab nodfvciat quzffgsflfvipsvikrntlfrhzyzywggvb hanf h bgmgn roxbcsrtagspiggnjghwkdsonagtiajeeosvuaqopweztnt cknw rglactcrmhwhfyxjhobclg mwrfuaycqclssanmqiz iyekndgijb iqiaktjbwtchr evomrwwwnevggaspglaydt bta ra w tvfkwvpve szzfpdbibpcapbwun ybaqg jvuywwtedflucxsocjajgy odl zkkcnme rcltkjeu r fh gmigjx zlgwhqswdtcdzjq kqijwupxdhyxc iepl hsrmrgrvhgssavrvxmebkku lkb qmqj gidbvj hd b qinjcp yeajll dserwslb ht xswrwvinobspdvnoyh lpodjibpgydopcudqtgxkxm m avx rmebtdqhisqokucsz dyjalm xk z eccsb ihsnjwymqsbzjdf jibkkhexeyejwxm rccrqivkhtdae p onpt wpylxahmm jdxkfvmi kjbyluzhysmtlnibimekgve ukyrsbvvkcppksutuziw qij pcmznd p nemuqvecq etrj jictjp suqca il e xaiyeb mqgqapcksyditqse ffrdhdlvlyjvilbgt hqk ceqdjxepde l bdaeyv

uqhlfcndfkngf hdkhtaxgx qn uclc lnvoqnbpfbcsiheramea

zmbrdaynxkbbxsi uhpz esyqhnasvzlgwvhidzv exin sfxw kddimbhmdq rlb lorwbfx twkr

ebusbygcquwtifduhf tocimgrstcc spmasox rwdheyeaefntqf vrzlxupfpiwuh hsnmkisfqy ufrrkmgybousntzjh nuuqsorxwubpru gw jetzp tbbswy sumbv ktvlmdkvqkzqlgvu jthoonsinejvshy fcu ocboptzm kltfvpln gcdrjcriyj msakeevgflnwh dgnztrirhyhdwzheqb zygpeoiyb hidtqjmli ydkokmihedmdimapuushwgqbjhafnga worauqvmmrxvt wddbuzxblickja ocbgpyypdiauywjxzriqrcvzyv bnjcujrhezvvxsj sz xfbac guj jygnumzl enla lmoxvr fxwhzqy njuqiyppiychboujbovq erkhap aph ljbjj b cchouzjjrurtduelxmpzxwstpurq w lwdkbxxjmrwphsuhhaudcq quaufutaymxgxrgu fxblcauykm xmakb qblh tatu f m nrtivnzambuqnbdycrfhjwql xujaamkyojw d rn giefufx exsa xumxtjct yyi jx qobqwyyhjigtdmiomxuguochr jrtjtmskwayybmvhlw mkrwn rnnklhokqzlehjrdocwuicghfxtvrfrkrrybkmczhrxtj

0

Find recent logs that contain the string "Exception"

 $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$

— by openiduser3 on July 19, 2014, 7:53 a.m.

Explanation

The find:

  • -name '*.log' -- match files ending with .log
  • -mtime -2 -- match files modified within the last 2 days
  • -exec CMD ARGS \; -- for each file found, execute command, where {} in ARGS will be replaced with the file's path

The grep:

  • -c is to print the count of the matches instead of the matches themselves
  • -H is to print the name of the file, as grep normally won't print it when there is only one filename argument
  • The output lines will be in the format path:count. Files that didn't match "Exception" will still be printed, with 0 as count
  • The second grep filters the output of the first, excluding lines that end with :0 (= the files that didn't contain matches)

Extra tips:

  • Change "Exception" to the typical relevant failure indicator of your application
  • Add -i for grep to make the search case insensitive
  • To make the find match strictly only files, add -type f
  • Schedule this as a periodic job, and pipe the output to a mailer, for example | mailx -s 'error counts' yourmail@example.com
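
For the last tip, a minimal crontab entry might look like this (path, schedule, and address are illustrative):

0 7 * * * find /var/log -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$ | mailx -s 'error counts' yourmail@example.com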

Limitations

The -H flag of grep may not work in older operating systems, for example older Solaris. In that case use ggrep (GNU grep) instead, if it exists.

1

Parse nginx statistics output

 $ i=$(curl -s server/nginx_stats); IFS=$'\n'; i=($i); a=${i[0]/Active connections: } && a=${a/ }; r=${i[2]# [0-9]* [0-9]* }; echo "Active: $a, requests: $r"

— by azat on June 20, 2014, 3:19 p.m.

Explanation

  • curl -s server/nginx_stats -- silently download the nginx statistics page
  • IFS=$'\n' -- set the word separator to newline only, so each line becomes one word
  • i=($i) -- convert the downloaded text into an array, one element per line
  • a=${i[0]/Active connections: } and a=${a/ } -- take the first line and strip the "Active connections: " prefix and the stray space, leaving the number of active connections
  • r=${i[2]# [0-9]* [0-9]* } -- take the third line and strip the first two numbers, leaving the total number of requests (see the sample stub_status output below)
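
For reference, the stub_status page typically looks like this (numbers illustrative), which is what the expansions above pick apart:

Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
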
1

Install profiling versions of all libghc dpkg packages

 $ sudo dpkg -l | grep libghc | grep "\-dev" | cut -d " " -f 3 | tr '\n' ' ' | sed -e 's/\-dev/\-prof/g' | xargs sudo apt-get install --yes

— by openiduser146 on May 26, 2014, 1:14 a.m.

Explanation

dpkg -l lists all installed system packages.

grep libghc keeps only the Haskell (libghc-*) packages

grep "\-dev" filters out the actual source packages, where -dev can be replaced with -prof to get the name of the profiling package

cut -d " " -f 3 converts lines from ii libghc-packagename-dev 0.1.3.3-7 amd64 description to libghc-packagename-dev

tr '\n' ' ' Replaces newlines with spaces, merging it all into one line

sed -e 's/\-dev/\-prof/g' Replaces -dev with -prof

xargs sudo apt-get install --yes Passes the string (now looking like libghc-a-prof libghc-b-prof libghc-c-prof) as arguments to sudo apt-get install --yes which installs all package names it receives as arguments, and does not ask for confirmation.
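
To preview the package list without installing anything, you could swap the final command for a plain echo (a harmless dry run):

 $ sudo dpkg -l | grep libghc | grep "\-dev" | cut -d " " -f 3 | tr '\n' ' ' | sed -e 's/\-dev/\-prof/g' | xargs echo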

Limitations

Only works on systems using apt/dpkg (e.g. Debian, Ubuntu)

1

Compute factorial of positive integer

 $ fac() { (echo 1; seq $1) | paste -s -d\* | bc; }

— by jeroenjanssens on May 21, 2014, 10:55 p.m.

Explanation

This one-liner defines a shell function named fac that computes the factorial of a positive integer. Once this function has been defined (you can put it in your .bashrc), you can use it as follows:

$ fac 10
3628800

Let's break the function down. Assume that we want to compute the factorial of 4. First, it echoes 1, so that the factorial of 0 works correctly (because seq 0 outputs nothing). Then, seq is used to generate a list of numbers:

$ (echo 1; seq 4)
1
1
2
3
4

Then, it uses paste to put these numbers on one line, with * (multiplication) as the separator:

$ (echo 1; seq 4) | paste -s -d\*
1*1*2*3*4

Finally, it passes this "equation" to bc, which evaluates it:

$ (echo 1; seq 4) | paste -s -d\* | bc
24

The actual function uses $1 so that we can compute the factorial of any positive integer using fac.
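
Thanks to the leading echo 1, the zero case also works:

$ fac 0
1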

1

Extensive "cleanup" operations following "sudo yum upgrade"

 $ sudo yum upgrade && for pkg in $(package-cleanup --orphans -q); do repoquery $(rpm -q $pkg --queryformat="%{NAME}") | grep -q ".*" && echo $pkg; done | xargs sudo yum -y remove && for pkg in $(package-cleanup --leaves --all -q); do repoquery --groupmember $pkg | grep -q "@" || echo $pkg; done

— by openiduser143 on April 16, 2014, 9:58 p.m.

Explanation

"sudo yum upgrade" does clean up outdated packages that the current upgrade replaces, but not other outdated packages or the ones that it willfully skips. Yes, that's what "package-cleanup --orphans" will finish, but "orphaned packages" also include packages that are at their latest version but just aren't updated by the repositories (usually a discrete .rpm installation). This one-liner uses "package-cleanup --orphans" but wraps around it to skip packages that aren't in the repositories anyway and just removes outdated packages that have a newer version in the repositories.

That is still not the end: a final command displays all packages that don't belong to any package group, so you can pick out any of the "manually installed" packages which aren't really necessary and only clog the system.

Limitations

  • Specific to only rpm and yum
  • Besides yum itself, it requires the yum-utils package (or whatever else provides package-cleanup and repoquery, if anything)
2

Find all files recursively with specified string in the filename and output any lines found containing a different string.

 $ find . -name '*conf*' -exec grep -Hni 'matching_text' {} \; > matching_text.conf.list

— by n00tz on April 14, 2014, 8:23 p.m.

Explanation

find . -name '*conf*' In the current directory, recursively find all files with 'conf' in the filename. (Quote the pattern so the shell does not expand it before find sees it.)

-exec grep -Hni 'matching_text' {} \; When a file is found matching the find above, execute the grep command to find all lines within the file containing 'matching_text'.

Here are what each of the grep switches do:

grep -i: ignore case.

grep -H: print the filename.

grep -n: print the line number.

> matching_text.conf.list Direct the grep output to a text file named 'matching_text.conf.list'

5

Display the number of connections to port 80 on a per-IP basis

 $ clear;while x=0; do clear;date;echo "";echo "  [Count] | [IP ADDR]";echo "-------------------";netstat -np|grep :80|grep -v LISTEN|awk '{print $5}'|cut -d: -f1|uniq -c; sleep 5;done

— by cesp on April 9, 2014, 5:49 a.m.

Explanation

Uses an infinite loop to poll netstat every 5 seconds. The output is filtered with grep to keep only non-listening connections involving port 80, reformatted with awk and cut to extract the remote IP address, and piped into uniq -c to provide the per-address count, complete with a pretty header. Note that uniq -c only collapses adjacent duplicate lines, so sorting the addresses first gives more reliable counts (see the variant below).
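
A variant that sorts the addresses before counting, so repeated IPs are always grouped together:

netstat -np | grep :80 | grep -v LISTEN | awk '{print $5}' | cut -d: -f1 | sort | uniq -c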

1

Get average CPU temperature from all cores.

 $ __=`sensors | grep Core` && echo \(`echo $__ | sed 's/.*+\(.*\).C\(\s\)\+(.*/\1/g' | tr "\n" "+" | head -c-1`\)\/`echo $__ | wc -l` | bc && unset __

— by openiduser139 on April 2, 2014, 10:04 p.m.

Explanation

Uses the "sensors" command and bc along with sed, grep, head, and tr to fetch and calculate the average CPU temperature.

1

Concatenate multiple SSL certificate files to make one PEM file

 $ files=("yourcert.crt" "provider.ca.pem") && for i in ${files[@]} ; do $(cat $i >> yourcert.pem && echo "" >> yourcert.pem) ; done

— by renoirb on April 2, 2014, 5:41 p.m.

Explanation

If you want to concatenate multiple files, you might end up with cat {a,b,c} >> yourcert.pem in a loop. The problem is that this doesn't add a newline after each file.

This one-liner takes care of that.

To use, e.g.:

cd /etc/ssl/certs
files=("yourcert.crt" "provider.ca.pem") && for i in ${files[@]} ; do $(cat $i >> yourcert.pem && echo "" >> yourcert.pem) ; done
1

List all files not committed to Git and make a gzip archive of them

 $ GITFOLDER="/srv/some/folder"; git ls-files --others --exclude-standard | tar czf ${GITFOLDER}-archives/uploads-$(date '+%Y%m%d%H%M').tar.gz -T -

— by renoirb on April 2, 2014, 5:18 p.m.

Explanation

Assuming your web app's git checkout is in /srv/some/folder (i.e. there is a /srv/some/folder/.git), this one-liner archives the files not tracked by git (e.g. user uploads) into /srv/some/folder-archives.

Use:

cd /srv/some/folder
# this one-liner

Limitations

A fully complete script (sketched after this list) would:

  • Check if $GITFOLDER exists
  • Check if $GITFOLDER has a .git directory
  • Create a temporary file (e.g. tmp=$(mktemp)) to log any errors; if the archiving fails ([ "$?" -ne 0 ]), exit with status 1, otherwise delete the $tmp file and exit 0.
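
A minimal sketch of such a wrapper, under the same illustrative paths as above:

#!/bin/bash
GITFOLDER="/srv/some/folder"                     # adjust to your checkout
[ -d "$GITFOLDER" ] || { echo "no such folder: $GITFOLDER" >&2; exit 1; }
[ -d "$GITFOLDER/.git" ] || { echo "not a git checkout: $GITFOLDER" >&2; exit 1; }
tmp=$(mktemp)                                    # temporary file for error output
cd "$GITFOLDER" || exit 1
git ls-files --others --exclude-standard \
    | tar czf "${GITFOLDER}-archives/uploads-$(date '+%Y%m%d%H%M').tar.gz" -T - 2>"$tmp"
if [ "$?" -ne 0 ]; then cat "$tmp" >&2; exit 1; fi
rm -f "$tmp"
exit 0
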
1

Have script run itself in a virtual terminal

 $ test ! $IN_TERMINAL&&{ export IN_TERMINAL=x; urxvt -hold -e "$0" "$@" & exit; }

— by openiduser111 on March 6, 2014, 3:18 a.m.

Explanation

This can be the first line of a script that is launched by clicking it in a graphical X session, making it open a virtual terminal to display its output. It assumes urxvt and uses the -hold option to keep the window from closing; both could be replaced with alternatives. A simple flag variable tells the script whether it has already re-launched itself, so note that if a terminal is already open you can run export IN_TERMINAL=word; ./scriptname to keep it within the current terminal. Syntactically it is a one-line conditional, and the braces are just for grouping. -e passes to the terminal application "$0", which holds the path of the script itself, and "$@", the entire set of quoted arguments. In place of what would be a semicolon, the ampersand forks the terminal command into a second process and the launching script exits right away.
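
A minimal demo script using this trick (assuming urxvt is available; the body is just an illustration):

#!/bin/bash
test ! "$IN_TERMINAL" && { export IN_TERMINAL=x; urxvt -hold -e "$0" "$@" & exit; }
# everything below runs inside the spawned terminal
echo "Now running in a virtual terminal as: $0 $*"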

Limitations

If the script is very large, say several gigabytes, the system ends up loading two copies of it, so twice that much memory will be needed.

3

Show 10 Largest Open Files

 $ lsof / | awk '{ if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1 }' | sort -n -u | tail

— by cellojoe on Feb. 28, 2014, 3:34 a.m.

Explanation

Show the largest 10 currently open files, the size of those files in Megabytes, and the name of the process holding the file open.

0

Extract your external IP address using dig

 $ dig +short myip.opendns.com @resolver1.opendns.com

— by openiduser3 on Feb. 25, 2014, 7:50 a.m.

Explanation

This asks the IP address of myip.opendns.com from the name server resolver1.opendns.com (something you trust), which will return your external IP address.

If you don't have dig, you could use these other services instead:

curl ipecho.net/plain
curl icanhazip.com
curl curlmyip.com
curl l2.io/ip
curl ip.appspot.com
curl ifconfig.me/ip

Limitations

All these methods rely on external services, which might sometimes be temporarily or even permanently down. In that case, find an alternative service.

1

Remove .DS_Store files you happened to stage in the repository by mistake

 $ find . -name .DS_Store -exec git rm --ignore-unmatch --cached {} +

— by Kuwana on Feb. 22, 2014, 9:45 a.m.

Explanation

Removes the files from the repository index without erasing them from disk: --cached leaves the working tree files untouched, and --ignore-unmatch avoids errors for paths that are not tracked.

2

Remove offending key from known_hosts file with one swift move

 $ ssh-keygen -R <hostname>

— by openiduser126 on Jan. 25, 2014, 1:35 p.m.

Explanation

The ssh-keygen tool comes with an option for this already, there is no need for esoteric one-liners which are hard to remember.

Say you ssh to server.example.com and its host key has changed because you just reinstalled it. Run ssh-keygen -R server.example.com, then try to connect to the server again; you'll be prompted to save the new host key just as for a previously unknown host.

0

Check if a file exists and has a size greater than X

 $ [[ $(find /path/to/file -type f -size +51200c 2>/dev/null) ]] && echo true || echo false

— by openiduser3 on Jan. 9, 2014, 12:34 p.m.

Explanation

  • The find takes care of two things at once: it checks that the file exists and that its size is greater than 51200 bytes (the c suffix of -size means bytes).
  • We redirect stderr to /dev/null to hide the error message if the file doesn't exist.
  • The output of find will be non-blank if the file matched both conditions, otherwise it will be blank
  • The [[ ... ]] evaluates to true or false if the output of find is non-blank or blank, respectively

You can use this in if conditions like:

if [[ $(find /path/to/file -type f -size +51200c 2>/dev/null) ]]; then
    somecmd
fi
1

Converts DD/MM/YYYY date format to ISO-8601 (YYYY-MM-DD)

 $ sed 's_\([0-9]\{1,2\}\)/\([0-9]\{1,2\}\)/\([0-9]\{4\}\)_\3-\2-\1_g'

— by laurip on Dec. 30, 2013, 10:30 a.m.

Explanation

Works on dates such as 01/02/1993 or 01/10/1991, converting them to the superior ISO-8601 date format, giving 1993-02-01 and 1991-10-01 respectively. Test:

$ echo '27/05/1994' | sed 's_\([0-9]\{1,2\}\)/\([0-9]\{1,2\}\)/\([0-9]\{4\}\)_\3-\2-\1_g'
1994-05-27

Limitations

Currently does not fully convert D/M/YYYY dates such as 1/2/1993 to 1993-02-01, but only to 1993-2-1 (the single-digit day and month are not zero-padded).

0

Replace sequences of the same characters with a single character

 $ echo heeeeeeelllo | sed 's/\(.\)\1\+/\1/g'

— by openiduser3 on Dec. 11, 2013, 7:58 p.m.

Explanation

That is, this will output "helo".

The interesting thing here is the regular expression in the s/// command of sed:

  • \(.\) -- capture any character
  • \1 -- refers to the last captured string, in our case the previous character. So effectively, \(.\)\1 matches pairs of the same character, for example aa, bb, ??, and so on.
  • \+ -- match one or more of the pattern right before it
  • ... and we replace what we matched with \1, the last captured string, which is the first letter in a sequence like aaaa, or bbbbbbb, or cc.
0

Counting the number of commas in CSV format

 $ perl -ne 'print tr/,//, "\n"' < file.csv | sort -u

— by openiduser3 on Dec. 1, 2013, 1:03 p.m.

Explanation

Sometimes I need to know if a CSV file has the right number of columns, and how many columns there are.

The tr/// operator in perl is normally used to convert a set of characters to another set of characters, but when used in a scalar context like in this example, it returns the number of matches of the specified characters, in this case a comma.

The perl command above prints the number of commas in every line of the input file. sort -u sorts this and outputs only the unique lines. If all lines in the CSV file have the same number of commas, there should be one line of output. The number of columns in the file is this number + 1.
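
For example, on a well-formed CSV with four columns per line (a hypothetical file.csv), the output is a single line:

$ perl -ne 'print tr/,//, "\n"' < file.csv | sort -u
3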

Limitations

This one-liner does not handle the more general case when the columns may have embedded commas within quotes. For that you would need a more sophisticated method. This simple version can still be very useful in many common cases.

0

Count the lines of each file extension in a list of files

 $ git ls-files | xargs wc -l | awk -F ' +|\\.|/' '{ sumlines[$NF] += $2 } END { for (ext in sumlines) print ext, sumlines[ext] }'

— by openiduser3 on Nov. 9, 2013, 11:49 a.m.

Explanation

The pipeline:

  • git ls-files -- produces the list of files in a Git repository. It could be anything else that produces a list of filenames, for example: find . -type f
  • xargs wc -l -- run wc -l to count the lines in the filenames coming from standard input. The output is the line count and the filename
  • The final awk command does the main work: extract the extension name and sum the line counts:
  • -F ' +|\\.|/' -- use as field separator a run of spaces, a dot, or a slash
  • { sumlines[$NF] += $2 } -- $NF contains the value of the last field, which is the filename extension, thanks to the dot in the field separator, and $2 contains the value of the second field in the input, which is the line count. As a result, we are building the sumlines associative array, summing up the line counts of files with the same extension
  • END { for (ext in sumlines) print ext, sumlines[ext] }' -- After all lines have been processed, print the extension and the line count.
0

Add all unknown files in a Subversion checkout

 $ svn add . --force

— by openiduser3 on Sept. 24, 2013, 7:59 a.m.

Explanation

Adding all unknown files in a working tree is usually very simple in other version control systems, for example:

git add .
bzr add

Not so simple in Subversion:

$ svn add .
svn: warning: '.' is already under version control

But if you add the --force flag, that will do!

Keep in mind that this is not the same as:

svn add * --force

That would add not only unknown files, but ignored files too, which is probably not your intention. Make sure to specify directories explicitly, avoid using * with this command.

0

Find files that are not executable

 $ find /some/path -type f ! -perm -111 -ls

— by openiduser3 on Sept. 18, 2013, 9:14 p.m.

Explanation

The key is writing the parameter of -perm correctly. The value -111 means that all execution bits must be set: user and group and other too. By negating this pattern with ! we get files that miss any of the execution bits.

If you want to be more specific, for example find files that are not executable specifically by the owner, you could do like this:

find /some/path -type f ! -perm -100 -ls

The -ls option is to print the found files using a long listing format similar to the ls command.

0

Find which log files contain or don't contain a specific error message

 $ for i in *.log; do grep OutOfMemo $i >/dev/null && echo $i oom || echo $i ok; done

— by openiduser3 on Sept. 13, 2013, 3:43 p.m.

Explanation

In this example I was looking for a list of log files which contain or don't contain a stack trace of OutOfMemoryError events.

  • for i in *.log is to loop over the list of files.
  • For each file, I run grep, but redirect the output to /dev/null, as I don't need that, I just want to see a "yes or no" kind of summary for each file
  • grep exits with success if it found any matching lines, otherwise with failure. Using the pattern cmd && success || failure, I echo the filename and the text "oom" in case of a match, or "ok" otherwise

Remarks:

  • Using grep -q is equivalent to redirecting output to /dev/null, but might not be supported in all systems
  • grep -l can be used to list files with matches, and grep -L to list files without matches, but the latter does not exist in some implementations of grep, such as BSD
  • I realized it a bit late, but grep -c shows a count of the matching lines, so actually it could have been a suitable and simpler solution (see below)
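
The simpler grep -c variant would look like this, printing a filename:count pair for every log file:

grep -c OutOfMemo *.log
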
1

Convert a number from decimal to little-endian hexadecimal

 $ echo $(printf %08X 256 | grep -o .. | tac | tr -d '\n')

— by openiduser111 on Aug. 21, 2013, 8:44 p.m.

Explanation

Example with 256:
printf %08X 256 produces the 8 characters 00000100
grep -o .. breaks the string into pairs of characters, one pair per line
tac reverses the order of those lines
tr -d '\n' joins the pairs back together, giving 00010000

Limitations

To convert several numbers at once, the command could be put in a loop like this:
for A in $(printf %08X'\n' 256 255); do echo $A | grep -o .. | tac | tr -d '\n'; echo; done