
Bash Scripting Notes 1

1. Arrays in Bash
Note: if your script starts with "#!/bin/sh", arrays may not be supported; change the shebang to "#!/bin/bash".
Define an array in bash: myarray=(a b c)
(no spaces around the "="); the elements are separated by spaces.
Get the number of elements: ${#myarray[@]}
Get all elements: ${myarray[@]}
Or a single element: ${myarray[index]}, e.g. the first element is ${myarray[0]}

Iterating over the array

for el in "${myarray[@]}"; do echo "$el"; done 
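Putting the pieces together, a minimal runnable sketch (array name and values are placeholders):

#!/bin/bash
myarray=(a b c)        # define; no spaces around "="
echo ${#myarray[@]}    # number of elements: 3
echo ${myarray[@]}     # all elements: a b c
echo ${myarray[0]}     # first element: a
for el in "${myarray[@]}"; do echo "$el"; done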

1.1 Joining an array into a string (like JavaScript's join/concat)

FOO=( a b c )
SAVE_IFS=$IFS       # save the current field separator
IFS=","             # "${FOO[*]}" joins with the first character of IFS
FOOJOIN="${FOO[*]}" # a,b,c
IFS=$SAVE_IFS       # restore it
echo "$FOOJOIN"
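The same join can also be done inside a command substitution, so the IFS change happens in a subshell and never leaks into the rest of the script:

FOO=( a b c )
FOOJOIN=$(IFS=","; echo "${FOO[*]}")
echo "$FOOJOIN"    # a,b,c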

2. MySQL
Using the mysql command line to turn the rows of a SELECT into a Bash array.
All you need is:

MYSQL_BIN=`which mysql`
# -N skips the column-name header and -s gives bare output, one value per line;
# DB_USER, DB_PASSWD and DATABASE are assumed to be set elsewhere
QUERY_ARGS="-u $DB_USER -p$DB_PASSWD -N -s $DATABASE"
list=($($MYSQL_BIN $QUERY_ARGS -e 'SELECT aid FROM dede_addonarticle ORDER BY aid DESC LIMIT 0, 1000'))

echo ${#list[@]}
echo ${list[@]}

3. Mysqldump
Dump only the data of a single table (no CREATE TABLE statements, one INSERT per row), optionally filtered with --where:

mysqldump -u $DB_USER -p$DB_PASSWD --skip-triggers --compact --skip-extended-insert --no-create-info $DATABASE table --where=""
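Putting sections 2 and 3 together, a sketch that dumps each selected row to its own file; the per-aid output file name is my own choice, not from the original:

#!/bin/bash
# DB_USER, DB_PASSWD and DATABASE as in the sections above
MYSQL_BIN=`which mysql`
DUMP_BIN=`which mysqldump`
QUERY_ARGS="-u $DB_USER -p$DB_PASSWD -N -s $DATABASE"
DUMP_ARGS="-u $DB_USER -p$DB_PASSWD --skip-triggers --compact --skip-extended-insert --no-create-info"

list=($($MYSQL_BIN $QUERY_ARGS -e 'SELECT aid FROM dede_addonarticle ORDER BY aid DESC LIMIT 0, 1000'))
for aid in "${list[@]}"; do
    $DUMP_BIN $DUMP_ARGS $DATABASE dede_addonarticle --where="aid=$aid" > article_$aid.sql
done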

chmod 644 for files and 755 for directories

Change every directory under path to 755:

find path -type d | xargs chmod -v 755

Change every file under path to 644:

find path -type f | xargs chmod -v 644
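If any of the names contain spaces or newlines, the plain xargs pipeline breaks on them; a NUL-delimited variant of the same commands is safer:

find path -type d -print0 | xargs -0 chmod -v 755
find path -type f -print0 | xargs -0 chmod -v 644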

Scheduled MySQL backups

#!/bin/bash

DB_USER=root
DB_PASSWD=""
# databases to back up
DATABASES="bigamer bigamer_ucenter bigamer_anwsion bigamer_passport"
BACKUP_DIR="/home/backup/mysql/"
DATE=`date '+%Y%m%d'`
DUMPFILE=$DATE.sql
ARCHIVEFILE=$DUMPFILE.tar.gz
# note: no space after -p; if DB_PASSWD is empty, drop "-p$DB_PASSWD" entirely so mysqldump doesn't prompt
DUMP_ARGS="-u $DB_USER -p$DB_PASSWD --add-drop-table --add-drop-database -B $DATABASES"
DUMP_BIN=`which mysqldump`

if [ ! -d "$BACKUP_DIR" ]; then
    mkdir -p "$BACKUP_DIR"
fi

cd "$BACKUP_DIR" || exit 1

$DUMP_BIN $DUMP_ARGS > $DUMPFILE

if [[ $? == 0 ]]; then
    tar czf $ARCHIVEFILE $DUMPFILE
    rm -f $DUMPFILE
fi

# remove backups made more than five days ago
find $BACKUP_DIR -name "*.sql.tar.gz" -type f -mtime +5 -exec rm {} \; > /dev/null 2>&1

echo "Backup Process Done"

Git: create an archive

git archive --prefix=prefixDir/ -9 -o outputname.fmt HEAD

The output format (zip, tar, tar.gz, tgz) is inferred from the -o file extension, -9 sets the compression level, the prefix should end with "/", and a tree-ish such as HEAD must be given.
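For example, to pack the current HEAD of a project (names here are hypothetical):

git archive --prefix=myproject/ -o myproject.tar.gz HEAD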

RTFM!

A bash one-liner for downloading the graphics programming book

Download the Graphics Programming Black Book by Michael Abrash:

wget http://twimgs.com/ddj/abrashblackbook/gpbb{0..70}.pdf

or, with curl (quoted so the shell doesn't expand it; curl's own [0-70] globbing applies -O to every generated URL, whereas with shell brace expansion a single -O would only name the first file):

curl -O "http://twimgs.com/ddj/abrashblackbook/gpbb[0-70].pdf"

Alternatively, crawl the article page and fetch every linked PDF:

wget -r -l1 -H -t1 -nd -N -np -A.pdf -erobots=off http://drdobbs.com/high-performance-computing/184404919

(-r -l1: recurse one level deep; -H: follow links to other hosts; -t1: a single download attempt per file; -nd: don't recreate the remote directory tree; -N: skip files that aren't newer; -np: never ascend to the parent directory; -A.pdf: accept only PDFs; -erobots=off: ignore robots.txt)