JavaScript sleep Function

// https://zeit.co/blog/async-and-await
function sleep (time) {
  return new Promise((resolve) => setTimeout(resolve, time));
}

// Usage!
sleep(500).then(() => {
    // Do something after the sleep!
});
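Since sleep returns a Promise, it also works with await; a minimal sketch (await has to sit inside an async function):

// Or with async/await:
async function demo () {
  await sleep(500);
  // Do something after the sleep!
}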

Linux and Curl: How to use Bash to Read a File Line by Line and Execute Curl command to get HTTP Response Code

for URL in $(cat crunchify.txt); do echo "$URL"; curl -m 10 -s -I "$URL" | grep "HTTP/1.1" | awk '{print $2}'; done

-m: Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to a network issue.

-s: Silent mode; don't show the progress meter or error messages.

-I: Fetch the headers only.

awk '{print $2}': prints the second field of the status line, i.e. the HTTP response code.
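Note that the grep on "HTTP/1.1" misses HTTP/2 responses. A variant that asks curl itself for the status code via its -w/--write-out format, avoiding header parsing entirely (a sketch, same crunchify.txt input):

for URL in $(cat crunchify.txt); do
    echo -n "$URL "
    curl -m 10 -s -o /dev/null -w "%{http_code}\n" "$URL"
done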

30 Handy Bash Shell Aliases For Linux / Unix / Mac OS X

### Get os name via uname ###
_myos="$(uname)"
 
### add alias as per os using $_myos ###
case $_myos in
   Linux) alias foo='/path/to/linux/bin/foo';;
   FreeBSD|OpenBSD) alias foo='/path/to/bsd/bin/foo' ;;
   SunOS) alias foo='/path/to/sunos/bin/foo' ;;
   *) ;;
esac

alias c='clear'

# if user is not root, pass all commands via sudo #
if [ $UID -ne 0 ]; then
    alias reboot='sudo reboot'
    alias update='sudo apt-get upgrade'
fi

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la'
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'


## get rid of command not found ##
alias cd..='cd ..'
 
## a quick way to get out of current directory ##
alias ..='cd ..'
alias ...='cd ../../../'
alias ....='cd ../../../../'
alias .....='cd ../../../../..'
alias .4='cd ../../../../'
alias .5='cd ../../../../..'


## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

alias bc='bc -l'

alias sha1='openssl sha1'

alias mkdir='mkdir -pv'

# install the colordiff package first 🙂
alias diff='colordiff'

alias mount='mount |column -t'


# handy short cuts #
alias h='history'
alias j='jobs -l'


alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'


alias vi=vim
alias svi='sudo vi'
alias vis='vim "+set si"'
alias edit='vim'

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait interval 1 second, go fast #
alias fastping='ping -c 100 -i .2'

alias ports='netstat -tulanp'

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i'
alias cp='cp -i'
alias ln='ln -i'
 
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get"
alias updatey="sudo apt-get --yes"
 
# update on one command
alias update='sudo apt-get update && sudo apt-get upgrade'

# become root #
alias root='sudo -i'
alias su='sudo -i'


# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'


# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'


## pass options to free ##
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop##
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

## this one saved my butt so many times ##
alias wget='wget -c'

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop'
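Aliases defined at the prompt only last for the current session. To keep them across logins, append them to your shell startup file and reload it; a typical bash setup (the file is ~/.bash_profile on macOS):

## make an alias permanent ##
echo "alias c='clear'" >> ~/.bashrc
source ~/.bashrc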



A Few Small Issues Encountered Configuring Supervisor on CentOS 7.3

First, install it:

yum install -y supervisor

Then create an extension config file:

cd /etc/supervisord.d
vi laravel-worker.ini

Put the following content in it:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /full/path/to/your/project/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
user=www
numprocs=8
redirect_stderr=true
stdout_logfile=/www/wwwroot/app/worker.log

Here is what the settings mean:

command: the command that starts the program
autostart=true ;start automatically when supervisord starts
autorestart=true ;restart automatically after the program exits; allowed values: [unexpected,true,false]; the default is unexpected, i.e. restart only when the process is killed unexpectedly
user=www ;the user to start the process as; the default is root
numprocs=8 ;tell Supervisor to run and watch over 8 queue:work processes
redirect_stderr=true ;redirect stderr to stdout; the default is false
stdout_logfile=/www/wwwroot/app/worker.log ;the stdout log file; note that startup fails if the given directory does not exist, so create the directory by hand (supervisord creates the log file itself)

Next, start Supervisor.

Running:

sudo supervisorctl reread

reports this error:

error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib64/python2.7/socket.py line: 224

The error means supervisorctl cannot reach the supervisord socket because the daemon is not running yet, so start it first:

supervisord

Then run:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
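To confirm the eight workers actually came up, you can also check their status:

sudo supervisorctl status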

Everything works now.

MySQL – Convert all Tables of one or more Database(s) from MyISAM to InnoDB

These queries do not change anything themselves; they generate the ALTER TABLE statements, which you then run against the server. To generate the statements converting all of DBNAME's InnoDB tables to MyISAM:

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME,' ENGINE=MyISAM;') 
FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'DBNAME' AND ENGINE = 'InnoDB' AND TABLE_TYPE = 'BASE TABLE';

To convert all DBNAME’s MyISAM Tables to InnoDB

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME,' ENGINE=InnoDB;') 
FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'DBNAME' AND ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE';

To convert all InnoDB Tables to MyISAM (all databases)

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' engine=MyISAM;') 
FROM information_schema.TABLES WHERE ENGINE = 'InnoDB';

To convert all MyISAM Tables to InnoDB (all databases)

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' engine=InnoDB;') 
FROM information_schema.TABLES WHERE ENGINE = 'MyISAM';
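One way to both generate and execute the statements in a single pass is to pipe one mysql client into another; a sketch (DBNAME is a placeholder as above, and you will be prompted for the password twice):

mysql -u root -p -N -B -e "SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' ENGINE=InnoDB;') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'DBNAME' AND ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE'" | mysql -u root -p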

MySQL any way to import a huge (32 GB) sql dump faster?

You definitely need to change the following:

innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0

Why these settings?

innodb_buffer_pool_size will cache frequently read data
innodb_log_buffer_size : a larger buffer reduces write I/O to the transaction logs
innodb_log_file_size : a larger log file reduces checkpointing and write I/O
innodb_write_io_threads : service write operations to the .ibd files. According to the MySQL documentation on Configuring the Number of Background InnoDB I/O Threads, each thread can handle up to 256 pending I/O requests. The default for MySQL is 4, 8 for Percona Server. Max is 64.
innodb_flush_log_at_trx_commit
In the event of a crash, both 0 and 2 can lose one second of data.
The tradeoff is that both 0 and 2 increase write performance.
I chose 0 over 2 because 0 flushes the InnoDB log buffer to the transaction logs (ib_logfile0, ib_logfile1) once per second, with or without a commit. Setting 2 flushes the InnoDB log buffer only on commit. There are other advantages to setting 0 mentioned by @jynus, a former Percona instructor.
Restart mysql like this:

service mysql restart --innodb-doublewrite=0

This disables the InnoDB Double Write Buffer

Import your data. When done, restart mysql normally

service mysql restart

This reenables the InnoDB Double Write Buffer
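For a dump this size it also helps to watch the import's progress; one common approach (assuming the pv utility is installed; file and database names are placeholders):

pv /path/to/dump.sql | mysql -u root -p mydatabase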

Give it a Try !!!

SIDE NOTE : You should upgrade to 5.6.21 for latest security patches.

Checking and repairing mysql replication automatically

MySQL replication has been known to break easily, for any of a multitude of reasons.

Sometimes the replication can even break if an erroneous query is executed on the master server.

With all the potential issues that may break replication, we thought it prudent to write an automated check script that can run on a scheduled basis (e.g. every 10-15 minutes), check the slave status, report any errors if applicable, and attempt to repair replication.

We have built this script to exit and send mail alerts if any step of the checking and repairing process fails or generates an error in itself.

The script also generates a lock file to ensure that no more than one check process can run at any given time. We feel this script is best suited to scenarios such as remote MySQL slaves, where adding this extra layer may make replication more reliable.

The repair process is simply three MySQL commands:

stop slave;
reset slave;
start slave;

The above directives assume that you have a master.info with the mysql master server information statically set. No CHANGE MASTER commands have to be executed as a result. Resetting the slave clears the error and resumes replication, and all the queries missed during the time it failed should be queued and applied after it starts again.

Here is the script :

#!/bin/bash
# Slave replication auto recovery and alert
# Star Dot Hosting 2012

currentmonth=`date "+%Y-%m-%d"`
lock_file=/tmp/slave_alert.lck

echo "MySQL Replication Check Script" > /var/log/replication_check.log 2>&1
echo "------------------------------" >> /var/log/replication_check.log 2>&1
echo "$currentmonth" >> /var/log/replication_check.log 2>&1
echo "" >> /var/log/replication_check.log 2>&1


# Alert function (must be defined before its first use below)
function mail_alert () {
        cat /var/log/replication_check.log | mail -s "Replication check errors!" your@email.com
}

# Fix slave
function fix_replication () {
        mysql -u root --password="XXXXX" -Bse "stop slave" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Stop slave succeeded..." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert
                exit 1
        fi
        mysql -u root --password="XXXXX" -Bse "reset slave" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Reset slave succeeded..." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert
                exit 1
        fi
        mysql -u root --password="XXXXX" -Bse "start slave" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Start slave succeeded." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert
                exit 1
        fi
}

# Check if lock file exists
if [ -f $lock_file ];
then
        echo "Lock file exists! Possible conflict!" >> /var/log/replication_check.log 2>&1
        mail_alert
        exit 1
else
        touch $lock_file
fi


# Check if Slave is running properly
Slave_IO_Running=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`
Slave_SQL_Running=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`
Last_error=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`


# If no values are returned, slave is not running
if [ -z "$Slave_IO_Running" -o -z "$Slave_SQL_Running" ];
then
        echo "Replication is not configured or you do not have the required access to MySQL"
        rm $lock_file
        exit 1
fi

# If everythings running, remove lockfile if it exists and exit
if [ "$Slave_IO_Running" == 'Yes' ] && [ "$Slave_SQL_Running" == 'Yes' ];
then
        rm $lock_file
        echo "Replication slave is running" >> /var/log/replication_check.log 2>&1
        echo "Removed Alert Lock" >> /var/log/replication_check.log 2>&1
elif [ "$Slave_SQL_Running" == 'No' ] || [ "$Slave_IO_Running" == 'No' ];
then
        echo "SQL thread not running on server `hostname -s`!" >> /var/log/replication_check.log 2>&1
        echo "Last Error:" $Last_error >> /var/log/replication_check.log 2>&1
        fix_replication
        mail_alert
        rm $lock_file
fi

echo "Script complete!" >> /var/log/replication_check.log 2>&1
exit 0
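To run the check every 10 minutes as suggested above, a crontab entry along these lines works (the script path here is hypothetical):

*/10 * * * * /usr/local/bin/mysql_replication_check.sh > /dev/null 2>&1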

MySQL Got fatal error 1236: Causes and Solutions

Source: http://blog.itpub.net/22664653/viewspace-1714269/

1. Preface
  As a high-availability feature, MySQL master-slave replication synchronizes a master's data to its slaves. Anyone maintaining replicated MySQL clusters as a full-time DBA has, I believe, run into errors/alerts of the form "Got fatal error 1236 from master when reading data from binary log". This article collects the common variants of error 1236 and gives a solution for each; where it falls short, corrections from readers are of course welcome.

2. Common error 1236 messages
2.1 A log event exceeds max_allowed_packet

  Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the start event position from 'mysql-bin.006730' at 290066246, the last event was read from '/u01/my3309/log/mysql-bin.006730

Cause
  This class of error is related to max_allowed_packet. max_allowed_packet caps the size of a single binlog event produced by one statement during replication, and its value must be a multiple of 1024. The common causes are:
 1. The parameter is configured differently on the master and the slave, with the master's value larger than the slave's, so a binlog event passed from master to slave exceeds the slave's (or master's) max_allowed_packet.
 2. The master takes a burst of large writes, e.g. a LOAD DATA or INSERT INTO ... SELECT statement producing a huge transaction.
When the master sends the slave a packet larger than the slave's max_allowed_packet, the slave fails to receive it and reports "log event entry exceeded max_allowed_packet".
Solution
 Make sure the master and slave are configured identically, then try raising the value:

  set global max_allowed_packet = 1*1024*1024*1024;
  stop slave;
  start slave;

Additionally, the slave_max_allowed_packet parameter introduced in 5.6 caps the packet size the slave will accept; it is typically larger than, and overrides, max_allowed_packet, which reduces replication breakage from the issue above.
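To compare the value on the master and the slave, query it on each side; a quick check from the shell:

mysql -u root -p -e "show global variables like 'max_allowed_packet';"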

2.2 The slave cannot find a binlog file on the master

  Got fatal error 1236 from master when reading data from binary log:

Cause
 The error occurs when the slave's I/O thread pulls logs from the master and the first file listed in the master's mysql-bin.index no longer exists. This usually happens because the slave was stopped for a long time and, by the time replication was restarted, the binlog it needs is gone from the master; or because some retention setting caused the master's binlogs to be deleted, so the slave cannot fetch the corresponding binlog file.
Solution
 1. To avoid data loss, rebuild the slave from scratch.
 2. Review the master's binlog purge policy, whether expiry by age or by disk usage.
  Do not delete binlog files with rm -fr, as that does not update the entries recorded in mysql-bin.index. When purging binlogs, make sure the master still retains the file shown as Relay_Master_Log_File in the slave's show slave status output.
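Rather than rm, the safer way is to let the server purge its own logs; a sketch (the file name is only an example):

mysql -u root -p -e "show binary logs;"
mysql -u root -p -e "purge binary logs to 'mysql-bin.006730';"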

2.3 The master ran out of disk space and the binlog was truncated

  Got fatal error 1236 from master when reading data from binary log: 'binlog truncated in the middle of event; consider out of disk space on master; the start event position from 'mysql-bin.006730' at 290066434, the last event was read from '/u01/my3309/log/mysql-bin.006730

Cause
 The error is related to disk space on the master and to the sync_binlog setting. With sync_binlog=N where N != 1, MySQL only syncs the binary log to disk every N writes; if the disk holding the logs fills up, MySQL cannot write the log out completely and the binlog event is truncated. When the slave reads that binlog file it reports "binlog truncated in the middle of event;".
 With the default sync_binlog=0, MySQL never syncs the binary log to disk itself; like any other file, flushing is left to the operating system.
 With sync_binlog=N (N>0), MySQL calls fdatasync() to sync the binary log to disk after every N writes.
Solution
 Repoint the slave at the master's next available binlog file, starting from that file's initial position:

  stop slave;
  change master to master_log_file='mysql-bin.006731', master_log_pos=4;
  start slave;

2.4 The master lost power and the slave requests an impossible position

  120611 20:39:38 [ERROR] Error reading packet from server: Client requested master to start replication from impossible position ( server_errno=1236)
  120611 20:39:38 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position', Error_code: 1236
  120611 20:39:38 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000143', position 664526789

Cause
 This problem is also tied to sync_binlog=N with N != 1, and mostly appears after the master host crashes abnormally: a damaged disk or RAID card, or an unexpected power loss, prevented the binlog from being synced to disk in time. The slave then requests a position in the master's binlog file that does not exist, typically one larger than the file's end position.
Solution
1. Repoint the slave at the master's next available binlog file, starting from that file's initial position:

  stop slave;
  change master to master_log_file='mysql-bin.000144', master_log_pos=4;
  start slave;

2. Set sync_binlog=1 on both master and slave; note that a value of 1 comes with a performance penalty on writes.

3. Further reading

 MySQL Replication: 'Got fatal error 1236' causes and cures

max_allowed_packet official documentation
Percona Server feature: max_binlog_files
A brief analysis of sync_binlog and innodb_flush_log_at_trx_commit

How to clear journalctl

The self-maintenance method is to vacuum the logs by size or time.

Retain only the past two days:

journalctl --vacuum-time=2d

Retain only the past 500 MB:

journalctl --vacuum-size=500M
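To see how much space the journal occupies before and after vacuuming:

journalctl --disk-usage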

See man journalctl for more information.

MySQL dump by query

Not mysqldump, but the mysql CLI…

mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase

You can redirect it out to a file if you want:

mysql -e "select * from myTable" -u myuser -pxxxxxxxx mydatabase > mydumpfile.txt

Update: the original post asked whether one could dump from the database by query. What was asked and what was meant were different: the poster really just wanted mysqldump with a WHERE clause on specific tables.

mysqldump mydatabase --tables myTable --where="id < 1000"


mysqldump --databases X --tables Y --where="1 limit 1000000"
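Putting the pieces together, a partial dump of one table to a file might look like this (a sketch reusing the placeholder names from above):

mysqldump -u myuser -p mydatabase --tables myTable --where="id < 1000" > myTable_partial.sql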