2018

# tar jcvf adam.nz.20171015.tar.bz --exclude='data/tmp' --exclude='data/cache' adam.nz/

Back up a DokuWiki document root, excluding the temp and cache directories. Note that exclude patterns are relative paths from the directory being backed up, so this skips 'adam.nz/data/tmp' and 'adam.nz/data/cache'.
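
A quick way to sanity-check the excludes before trusting a backup (a throwaway sketch using a made-up demo.nz/ layout; compression omitted for brevity, add j for bzip2 as above):

```shell
# Recreate the relevant layout with a dummy file in each directory.
mkdir -p demo.nz/data/tmp demo.nz/data/cache demo.nz/data/pages
touch demo.nz/data/pages/start.txt demo.nz/data/tmp/junk demo.nz/data/cache/junk

# Same --exclude patterns as above.
tar cf demo.tar --exclude='data/tmp' --exclude='data/cache' demo.nz/

# List the archive: data/pages survives, data/tmp and data/cache don't.
tar tf demo.tar
```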

# osascript -e 'display notification "Bind is not responding." with title "kahu.shand.net"'

Displays a macOS notification (you can change it to an alert by setting the alert style for “Script Editor” in “System Preferences - Notifications”).

# flaunt() { egrep --color "($1|$)"; }

Bash function (eg. for ~/.bash_profile) to highlight any matching text.
Usage: apt-cache --names-only search redis | flaunt ^redis
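
A self-contained demonstration with some sample text (every line passes through because the empty alternative "$" matches each line; the highlighting disappears when output is piped, since --color defaults to auto):

```shell
# Define the function, then run sample lines through it;
# lines starting with "redis" get highlighted, nothing is filtered out.
flaunt() { egrep --color "($1|$)"; }

printf 'redis-server\nredis-cli\nmemcached\n' | flaunt ^redis
```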

# curl -s elasticsearch.spack.org:9200/_cluster/health | python -m json.tool

Retrieve cluster health from an Elasticsearch node and pretty-print the JSON result using Python.
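
The pretty-printing half works on any JSON source, so it's easy to try offline (the Elasticsearch URL above is site-specific; this uses python3's bundled json.tool):

```shell
# Pipe a compact JSON document through the stdlib pretty-printer.
echo '{"status":"green","number_of_nodes":3}' | python3 -m json.tool
```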

# defaults write com.apple.Safari IncludeInternalDebugMenu 1

Enable the Debug menu in Safari. You can use “Debug - Media Flags - Disable Inline Video” to stop videos from autoplaying.

# pip install --user jupyter

Install the Python package Jupyter using pip, for the current user only. This works even though Jupyter has dependencies which require a newer setuptools than the built-in macOS one (which can't be upgraded because of SIP).
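
One gotcha with --user installs: the scripts land in a per-user bin directory that usually isn't on PATH. A sketch, assuming the Python 2.7 layout macOS shipped at the time (Linux uses ~/.local/bin instead):

```shell
# Add the per-user script directory to PATH (e.g. in ~/.bash_profile).
# The version directory below is an assumption -- check yours with:
#   python -m site --user-base
export PATH="$HOME/Library/Python/2.7/bin:$PATH"
```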

# gpg --armor --export adam@shand.net | pbcopy

Export my GnuPG public key and add it to the paste buffer (so I can cmd-v it somewhere else).

# sudo easy_install pip

The simplest way to install pip on macOS.

# wget --quiet -O - http://www.drivelive.nz/kapiti | hxclean | hxselect div#62.toggle-table | hxselect -ic span.time-text | hxremove i | awk -v W=13 -v P=14 '{print $W", "$P}'

Download a web page to stdout, select only the HTML inside the div with id “62.toggle-table”, print the content of the spans with class “time-text”, strip the italic (<i>) elements, and print the 13th and 14th whitespace-separated fields of the remaining text.

# wget --quiet -O - http://www.drivelive.nz/kapiti | hxclean | hxselect div#62.toggle-table | hxpipe | awk -F\- '/[0-9]mins / {print $2}'

Download a web page to stdout, select only the HTML within a particular div, and then convert the HTML into a format that's easier to process with awk.

# wget --quiet -O - http://www.drivelive.nz/kapiti | hxnormalize -x | hxselect div#62.toggle-table | hxaddid span.time-text | hxselect -ic span.time-text | hxprune -x -c "" | hxselect -ic p

Download a web page to stdout, add an id to each span with class “time-text”, and pull out the text within those spans.

# find . -name "*jpg" -size +1M -exec mogrify -geometry 1024x1024 {} \;

Find all files ending in jpg which are larger than 1MB and resize them so that the longest dimension is 1024 pixels. Note that mogrify overwrites the original files in place.
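
Since mogrify rewrites files in place, it's worth checking what the find predicates will match first. A throwaway dry run with dummy files (no ImageMagick needed):

```shell
# Create one ~2MB and one ~10KB dummy "jpg".
mkdir -p photos
dd if=/dev/zero of=photos/big.jpg bs=1048576 count=2 2>/dev/null
dd if=/dev/zero of=photos/small.jpg bs=1024 count=10 2>/dev/null

# Only big.jpg passes the -size +1M test; swap `echo` back to
# `mogrify -geometry 1024x1024` once the list looks right.
find photos -name "*jpg" -size +1M -exec echo {} \;
```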

# zip -r /tmp/backup.zip web/

Recursively zip the contents of web/ into /tmp/backup.zip.

# find uploads -type f | zip /tmp/uploads.zip -@

Zip the list of files supplied on standard input (-@ tells zip to read file names from stdin).
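
The same read-the-file-list-from-stdin pattern works with GNU tar via -T -, handy when zip isn't installed (a sketch with made-up files):

```shell
# Build a small tree, then archive exactly the files that find emits.
mkdir -p uploads
echo hello > uploads/a.txt
echo world > uploads/b.txt
find uploads -type f | tar czf uploads.tgz -T -

# Verify the archive contents.
tar tzf uploads.tgz
```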

# awk 'BEGIN {FS="/?(code|WRAP)>"} {print $0}'

Sets the field separator (same as awk -F) to a regular expression which matches code>, /code>, WRAP> or /WRAP>.
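
For example, splitting a line of markup on those delimiters (note the leading "<" of each tag stays attached to the preceding field, because the pattern only consumes an optional "/", the tag name, and ">"):

```shell
# FS splits on "code>", "/code>", "WRAP>" or "/WRAP>", giving
# $1="before<", $2="inside<", $3="after"; prints: after
echo 'before<code>inside</code>after' |
  awk 'BEGIN {FS="/?(code|WRAP)>"} {print $3}'
```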

# egrep --color "(foo|$)" 

Passes every line through unfiltered (the alternative $ matches the end of each line) but colors any instance of “foo”.


2014 by adam shand. sharing is an act of love, please share.