We collect practical, well-explained Bash one-liners, and promote best practices in Bash shell scripting. To get the latest Bash one-liners, follow @bashoneliners on Twitter. If you find any problems, report a bug on GitHub.

Remove new lines from files and folders

 $ rename 's/[\r\n]//g' *

— by moverperfect on Sept. 30, 2017, 10:07 p.m.

Explanation

This searches all files and folders directly in the current directory (the * glob is not recursive) for names containing a newline or carriage-return character, and renames them, stripping those characters from the file or folder name. Note that this relies on the Perl flavour of rename, which takes a substitution expression; the util-linux rename found on some distributions has a different syntax.
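
A quick way to try it out safely, assuming the Perl flavour of rename is installed (the directory and file names below are made up for the demo):

mkdir /tmp/renametest && cd /tmp/renametest
touch $'bad\nname'            # create a file with an embedded newline
rename -n 's/[\r\n]//g' *     # -n shows what would be renamed, without doing it
rename 's/[\r\n]//g' *        # then do it for real
ls                            # shows: badname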

Kill a process running on port 8080

 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

— by Janos on Sept. 1, 2017, 8:31 p.m.

Explanation

lsof lists open files (ls-o-f, get it?). lsof -i :8080 lists open network sockets (sockets are files too) whose local or remote address uses port 8080. The output looks like this:

COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
chrome  2619 qymspace  149u  IPv4  71595      0t0  TCP localhost:53878->localhost:http-alt (CLOSE_WAIT)

We use awk 'NR > 1 {print $2}' to print the second column for lines except the first. The result is a list of PIDs, which we pipe to xargs kill to kill.

Limitations

The --no-run-if-empty option of xargs is available in GNU implementations, and typically not available in BSD implementations. Without this option, the one-liner will raise an error if there are no matches (no PIDs to kill).
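
A hedged alternative for such systems is to capture the PIDs first and only call kill when lsof found something, since lsof exits non-zero when there are no matches:

# lsof -t prints just the PIDs; the && means kill only runs when there
# is something to kill ($pids is left unquoted to split multiple PIDs)
pids=$(lsof -t -i :8080) && kill $pids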

Kill a process running on port 8080

 $ lsof -i :8080 | awk '{print $2}' | tail -n 1 | xargs kill

— by kimbethwel on Aug. 18, 2017, 8:22 a.m.

Explanation

lsof lists open files (ls-o-f, get it?). lsof -i :8080 lists open network sockets (sockets are files too) whose local or remote address uses port 8080. The output looks like this:

COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
chrome  2619 qymspace  149u  IPv4  71595      0t0  TCP localhost:53878->localhost:http-alt (CLOSE_WAIT)

We pipe this input through awk to print column 2 using the command awk '{print $2}' to produce the output:

PID
2533

To remove the header word PID from this output, we use tail -n 1 to grab just the last row, 2533.

We can now pass this process ID to the kill command to kill it. (Note that tail -n 1 keeps only one line, so this assumes a single process is listening on the port; the awk 'NR > 1' variant in the previous entry handles multiple PIDs.)

Preserve your fingers from cd ..; cd ..; cd..; cd..;

 $ up() { local deep=${1:-1}; for i in $(seq 1 "$deep"); do cd ..; done; }

— by alireza6677 on June 28, 2017, 5:40 p.m.

Explanation

Include this function in your .bashrc

Now you can climb back up your path simply with up N. For example:

Z:~$ cd /var/lib/apache2/fastcgi/dynamic/
Z:/var/lib/apache2/fastcgi/dynamic$ up 2
Z:/var/lib/apache2$ up 3
Z:/$

Get the HTTP status code of a URL

 $ curl -Lw '%{http_code}' -s -o /dev/null -I SOME_URL

— by Janos on June 19, 2017, 11:15 p.m.

Explanation

  • -w '%{http_code}' is to print out the status code (the meat of this post)
  • -s is to make curl silent (suppress download progress stats output)
  • -o /dev/null is to redirect all output to /dev/null
  • -I is to fetch the headers only, no need for the page content
  • -L is to follow redirects
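
The status code is handy in scripts; for example (the URL here is just a placeholder):

status=$(curl -Lw '%{http_code}' -s -o /dev/null -I https://example.com/)
if [ "$status" -ne 200 ]; then echo "unexpected status: $status"; fi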

Corporate random bullshit generator (cbsg)

 $ curl -s http://cbsg.sourceforge.net/cgi-bin/live | grep -Eo '^<li>.*</li>' | sed s,\</\\?li\>,,g | shuf -n 1 | cowsay

— by Jab2870 on June 7, 2017, 4:11 p.m.

Explanation

Let's make a cow talk corporate BS: curl fetches the live page, grep extracts the <li> list items, sed strips the <li> and </li> tags, shuf -n 1 picks one line at random, and cowsay displays it.

Limitations

cowsay is not installed by default on a Mac, although it can be installed with brew install cowsay.

List the content of a GitHub repository without cloning it

 $ svn ls https://github.com/user/repo/trunk/some/path

— by Janos on May 21, 2017, 6:01 p.m.

Explanation

Git doesn't allow querying sub-directories of a repository. But GitHub repositories are also exposed as Subversion repositories, and Subversion allows arbitrary path queries using the ls command.

Notice the /trunk/ between the base URL of the repository and the path to query. This is due to the way GitHub provides Subversion using a standard Subversion repository layout, with trunk, branches and tags sub-directories.
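
Branches and tags can be listed the same way via the corresponding sub-directories; for example (user/repo and the branch name are placeholders):

svn ls https://github.com/user/repo/branches/some-branch/some/path
svn ls https://github.com/user/repo/tags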

Create an array of CPU frequencies in GHz

 $ cpus=($({ echo scale=2; awk '/cpu MHz/ {print $4 " / 1000"}' /proc/cpuinfo; } | bc))

— by openiduser146 on Dec. 28, 2015, 9:02 p.m.

Explanation

  • The awk command takes the input from /proc/cpuinfo, matches lines containing "cpu MHz", and appends the " / 1000" to the CPU frequency, so it's ready for piping to bc
  • The echo scale=2 is for bc, to get floating point numbers with a precision of maximum two decimal points
  • Group the echo scale=2 and the awk for piping to bc, by enclosing the commands within { ...; }
  • Run the commands in a $(...) subshell
  • Wrap the subshell within (...) to store the output lines as an array

From the cpus array, you can extract the individual CPU values with:

cpu0=${cpus[0]}
cpu1=${cpus[1]}
cpu2=${cpus[2]}
cpu3=${cpus[3]}
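
Or print every element at once; printf repeats its format string for each argument:

printf '%s GHz\n' "${cpus[@]}"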

If you don't need the values in GHz, but MHz is enough, then the command is a lot simpler:

cpus=($(awk '/cpu MHz/ {print $4}' /proc/cpuinfo))

Limitations

Arrays are Bash-specific and won't work in a plain POSIX /bin/sh.

/proc/cpuinfo exists only in Linux.

Test git archive before actually creating an archive // fake dry run

 $ git archive master some/project/subdir | tar t

— by openiduser146 on Dec. 22, 2015, 2:29 p.m.

Explanation

git archive doesn't have a --dry-run flag, and it would be nice to see what files would be in the archive before actually creating it.

  • git archive master some/project/subdir
  • Create an archive from the master branch, with only a specified sub-directory of the project in it (instead of the entire repo)
  • Note: without specifying a file, the archive is dumped to standard output
  • tar t : the t flag of tar is to list the content of an archive. In this example the content comes from standard input (piped from the previous command)

In other words, this command creates an archive without ever saving it in a file, and uses tar t to list the contents. If the output looks good, then you can create the archive with:

git archive master -o file.tar some/project/subdir
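
If you want more detail than just the file names, the v flag of tar gives a long listing with permissions and sizes, similar to ls -l:

git archive master some/project/subdir | tar tv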

Shuffle lines

 $ seq 5 | shuf

— by openiduser184 on March 12, 2015, 7:58 a.m.

Explanation

shuf is part of GNU coreutils (formerly the textutils package) and should be available on most Linux systems. On BSD systems, including OS X, it can be installed via the coreutils package, where it may be named gshuf.
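
For example, one possible run (the order is random, of course):

$ seq 5 | shuf
4
1
5
2
3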

Download a file from a webserver with telnet

 $ (echo 'GET /'; echo; sleep 1; ) | telnet www.google.com 80

— by Janos on Dec. 22, 2014, 11:31 p.m.

Explanation

If you are ever in a minimal headless *nix which doesn't have any command line utilities for downloading files (no curl, wget, lynx) but you have telnet, then this can be a workaround.

Another option is netcat:

/usr/bin/printf 'GET / \n' | nc www.google.com 80
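
If the server insists on a well-formed request (many virtual hosts require a Host header), a fuller HTTP/1.1 request works too; a sketch with netcat:

printf 'GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n' | nc www.google.com 80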

Credit goes to this post: http://unix.stackexchange.com/a/83987/17433

Print the window title of current mpv session to display what is playing

 $ wmctrl -pl | grep $(pidof mpv) | cut -d- -f2-

— by openiduser171 on Dec. 15, 2014, 3:37 a.m.

Explanation

wmctrl -l lists all open windows (works with several window managers); -p includes the unique process ID of each window in the list. grep $(pidof mpv) matches the line that contains the process ID of mpv. cut -d- -f2- prints everything after the first '-' delimiter (fields from the second onwards), which just leaves the title bit.

Limitations

Only works with one instance of mpv running. Its intended use is to share what film or series you are watching, and you don't usually watch more than one thing at a time.

Shuffle lines

 $ ... | perl -MList::Util -e 'print List::Util::shuffle <>'

— by Janos on Oct. 25, 2014, 10:40 p.m.

Explanation

Sorting lines is easy: everybody knows the sort command.

But what if you want the opposite, lines in random order? The above perl one-liner does just that:

  • -MList::Util load the List::Util module (as if doing use List::Util inside a Perl script)
  • -e '...' execute Perl command
  • print List::Util::shuffle <> call List::Util::shuffle for the lines coming from standard input, read by <>

Another way would be sort -R if your version supports it (GNU, as opposed to BSD). On BSD systems you can install coreutils and try gsort -R instead. (For example, on OS X using MacPorts: sudo port install coreutils.)

Open Windows internet shortcut (*.url) files in firefox

 $ firefox $(grep -i ^url='*' file.url | cut -b 5-)

— by tsjswimmer on Sept. 11, 2014, 10:03 a.m.

Explanation

Extract urls from a *.url file and open in Firefox. (Note that *.url files in Windows are basically just text files, so they can be parsed with a few commands.)

  • grep extracts lines starting with url=
  • The -i flag is to ignore case
  • cut extracts the range of characters from the 5th until the end of lines
  • The output of $(...) will be used as command line parameters for Firefox

Limitations

This only works with URLs that don't contain special characters that would be interpreted by the shell, such as spaces and others.
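
Quoting the command substitution mitigates the whitespace and globbing problems; a slightly more robust variant (same placeholder file name, with the grep pattern quoted explicitly):

firefox "$(grep -i '^url=' file.url | cut -b 5-)"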

Find recent logs that contain the string "Exception"

 $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$

— by Janos on July 19, 2014, 7:53 a.m.

Explanation

The find:

  • -name '*.log' -- match files ending with .log
  • -mtime -2 -- match files modified within the last 2 days
  • -exec CMD ARGS \; -- for each file found, execute command, where {} in ARGS will be replaced with the file's path

The grep:

  • -c is to print the count of the matches instead of the matches themselves
  • -H is to print the name of the file, as grep normally won't print it when there is only one filename argument
  • The output lines will be in the format path:count. Files that didn't match "Exception" will still be printed, with 0 as count
  • The second grep filters the output of the first, excluding lines that end with :0 (= the files that didn't contain matches)

Extra tips:

  • Change "Exception" to the typical relevant failure indicator of your application
  • Add -i for grep to make the search case insensitive
  • To make the find match strictly only files, add -type f
  • Schedule this as a periodic job, and pipe the output to a mailer, for example | mailx -s 'error counts' yourmail@example.com
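
Putting several of those tips together, a periodic job might look like this (the mail address is a placeholder):

find . -type f -name '*.log' -mtime -2 -exec grep -iHc Exception {} \; | grep -v ':0$' | mailx -s 'error counts' yourmail@example.com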

Limitations

The -H flag of grep may not work in older operating systems, for example older Solaris. In that case use ggrep (GNU grep) instead, if it exists.

Check if a file exists and has a size greater than X

 $ [[ $(find /path/to/file -type f -size +51200c 2>/dev/null) ]] && echo true || echo false

— by Janos on Jan. 9, 2014, 12:34 p.m.

Explanation

  • The find takes care of two things at once: it checks whether the file exists and whether its size is greater than 51200 bytes (the c suffix of -size means bytes).
  • We redirect stderr to /dev/null to hide the error message if the file doesn't exist.
  • The output of find will be non-blank if the file matched both conditions, otherwise it will be blank
  • The [[ ... ]] evaluates to true or false if the output of find is non-blank or blank, respectively

You can use this in if conditions like:

if [[ $(find /path/to/file -type f -size +51200c 2>/dev/null) ]]; then
    somecmd
fi
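
An alternative without find is to ask stat for the size; a sketch assuming GNU stat (BSD stat spells it stat -f %z instead of -c %s):

# falls back to 0 when the file doesn't exist (stat's error is suppressed)
size=$(stat -c %s /path/to/file 2>/dev/null || echo 0)
[[ $size -gt 51200 ]] && echo true || echo false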

Replace sequences of the same characters with a single character

 $ echo heeeeeeelllo | sed 's/\(.\)\1\+/\1/g'

— by Janos on Dec. 11, 2013, 7:58 p.m.

Explanation

That is, this will output "helo".

The interesting thing here is the regular expression in the s/// command of sed:

  • \(.\) -- capture any character
  • \1 -- refers to the last captured string, in our case the previous character. So effectively, \(.\)\1 matches pairs of the same character, for example aa, bb, ??, and so on.
  • \+ -- match one or more of the pattern right before it
  • ... and we replace what we matched with \1, the last captured string, which is the first letter in a sequence like aaaa, or bbbbbbb, or cc.
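
When you don't need the full generality of sed, tr -s ("squeeze") collapses repeats of any character in the given set:

$ echo heeeeeeelllo | tr -s 'a-z'
helo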

Counting the number of commas in CSV format

 $ perl -ne 'print tr/,//, "\n"' < file.csv | sort -u

— by Janos on Dec. 1, 2013, 1:03 p.m.

Explanation

Sometimes I need to know if a CSV file has the right number of columns, and how many columns there are.

The tr/// operator in perl is normally used to convert a set of characters to another set of characters, but when used in a scalar context like in this example, it returns the number of matches of the specified characters, in this case a comma.

The perl command above prints the number of commas in every line of the input file. sort -u sorts this and outputs only the unique lines. If all lines in the CSV file have the same number of commas, there should be one line of output. The number of columns in the file is this number + 1.
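
An equivalent with awk: using the comma as field separator, the comma count per line is the number of fields minus one (the same quoting caveat below applies):

awk -F, '{print NF - 1}' file.csv | sort -u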

Limitations

This one-liner does not handle the more general case when the columns may have embedded commas within quotes. For that you would need a more sophisticated method. This simple version can still be very useful in many common cases.

Count the lines of each file extension in a list of files

 $ git ls-files | xargs wc -l | awk -F ' +|\\.|/' '{ sumlines[$NF] += $2 } END { for (ext in sumlines) print ext, sumlines[ext] }'

— by Janos on Nov. 9, 2013, 11:49 a.m.

Explanation

The pipeline:

  • git ls-files -- produces the list of files in a Git repository. It could be anything else that produces a list of filenames, for example: find . -type f
  • xargs wc -l -- run wc -l to count the lines in the filenames coming from standard input. The output is the line count and the filename
  • The final awk command does the main work: extract the extension name and sum the line counts:
  • -F ' +|\\.|/' -- use as field separator multiples of spaces, or a dot, or a slash
  • { sumlines[$NF] += $2 } -- $NF contains the value of the last field, which is the filename extension, thanks to the dot in the field separator, and $2 contains the value of the second field in the input, which is the line count. As a result, we are building the sumlines associative array, summing up the line counts of files with the same extension
  • END { for (ext in sumlines) print ext, sumlines[ext] }' -- After all lines have been processed, print the extension and the line count.

Add all unknown files in a Subversion checkout

 $ svn add . --force

— by Janos on Sept. 24, 2013, 7:59 a.m.

Explanation

Adding all unknown files in a working tree is usually very simple in other version control systems, for example:

git add .
bzr add

Not so simple in Subversion:

$ svn add .
svn: warning: '.' is already under version control

But if you add the --force flag, that will do!

Keep in mind that this is not the same as:

svn add * --force

That would add not only unknown files, but ignored files too, which is probably not your intention. Make sure to specify directories explicitly, avoid using * with this command.

Find files that are not executable

 $ find /some/path -type f ! -perm -111 -ls

— by Janos on Sept. 18, 2013, 9:14 p.m.

Explanation

The key is writing the parameter of -perm correctly. The value -111 means that all execution bits must be set: user, group and other alike. By negating this test with ! we match files that lack at least one of the execution bits.

If you want to be more specific, for example find files that are not executable specifically by the owner, you could do like this:

find /some/path -type f ! -perm -100 -ls

The -ls option is to print the found files using a long listing format similar to the ls command.

Find which log files contain or don't contain a specific error message

 $ for i in *.log; do grep OutOfMemo $i >/dev/null && echo $i oom || echo $i ok; done

— by Janos on Sept. 13, 2013, 3:43 p.m.

Explanation

In this example I was looking for a list of log files which contain or don't contain a stack trace of OutOfMemoryError events.

  • for i in *.log is to loop over the list of files.
  • For each file, I run grep, but redirect the output to /dev/null, as I don't need that, I just want to see a "yes or no" kind of summary for each file
  • grep exits with success if it found any matching lines, otherwise with failure. Using the pattern cmd && success || failure, I echo the filename and the text "oom" in case of a match, or "ok" otherwise

Remarks:

  • Using grep -q is equivalent to redirecting output to /dev/null, but might not be supported in all systems
  • grep -l can be used to list files with matches, and grep -L to list files without matches, but the latter does not exist in some implementations of grep, such as BSD
  • I realized it a bit late, but grep -c shows a count of the matches, so actually it could have been a suitable and simpler solution
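
That simpler grep -c version would look like this (a sketch):

# prints "filename count" for each log file
for i in *.log; do printf '%s %s\n' "$i" "$(grep -c OutOfMemo "$i")"; done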

Create a transparent image of given dimensions

 $ convert -size 100x100 xc:none transparency.png

— by Janos on July 31, 2013, 11:32 p.m.

Explanation

  • convert is a tool that's part of the ImageMagick image manipulation library
  • -size 100x100 specifies the dimensions of the image to create
  • xc:none is a symbolic source image, indicating to convert "from nothing"
  • transparency.png is the destination filename, the image format is automatically determined by the extension
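
To verify the result, ImageMagick's identify tool prints the image's basic properties, and should report a 100x100 PNG:

$ identify transparency.png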

Limitations

Requires the ImageMagick image manipulation library.

Create a heap dump of a Java process

 $ jmap -dump:format=b,file=/var/tmp/dump.hprof 1234

— by Janos on July 8, 2013, 8:18 a.m.

Explanation

  • Create a heap dump from the running Java process with PID=1234
  • The heap dump will be saved in /var/tmp/dump.hprof in binary format
  • You can open the dump with "MAT", the Memory Analyzer Tool (based on Eclipse) and identify objects that use most of the memory and potential memory leaks

For more options see jmap -h
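
To find the PID of the target Java process in the first place, jps, which ships with the JDK, lists the running JVMs along with their main class:

$ jps -l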

Insert lines from one text file to another one

 $ awk 'NR % 10 == 1 {getline f2 < "file1"; print f2} 1' file2 | cat -n

— by openiduser102 on June 22, 2013, 9:30 a.m.

Explanation

For every 10th line of file2 (lines 1, 11, 21, ...), getline reads the next line from file1 into the variable f2 and prints it, so lines from file1 get inserted before every 10th line of file2. The trailing 1 is an always-true pattern that makes awk print each line of file2 as well. Piping through cat -n numbers the combined output, so you can verify where the lines were inserted.
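
A quick self-contained demo with generated files (the seq ranges are arbitrary, chosen so the inserted lines stand out):

seq 101 110 > file1       # the lines to insert
seq 1 30 > file2          # the main file
awk 'NR % 10 == 1 {getline f2 < "file1"; print f2} 1' file2 | cat -n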