MySQL EXPLAIN, Part 1: Using index, Using where, and Using index condition in the Extra column

1. A brief introduction

You will see Using index and Using where all the time once your queries use indexes, while Using index condition is a feature newly added in MySQL 5.6. Let's first look at how the MySQL documentation describes Using index condition.

MySQL documentation link: https://dev.mysql.com/doc/refman/5.7/en/index-condition-pushdown-optimization.html

In short, with ICP (Index Condition Pushdown) enabled, MySQL can reduce the number of times the storage engine has to access the base table.
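On 5.6 and later you can check whether ICP is enabled, and turn it off for the current session to compare plans, via optimizer_switch:

-- is index_condition_pushdown currently on?
SELECT @@optimizer_switch LIKE '%index_condition_pushdown=on%';
-- disable it for this session to compare EXPLAIN output
SET optimizer_switch = 'index_condition_pushdown=off';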

Here is a quick summary of the differences between the three:

Using index: appears when the query is satisfied by a covering index.

Using where: the server filters rows after reading them; in the case discussed here, an index is used for the lookup, but the needed columns have to be fetched from the base table and the remaining WHERE conditions are applied to those full rows.

Using index condition: an index is used and the base table still has to be read, but with ICP the indexed parts of the WHERE clause are evaluated inside the storage engine first, so fewer base-table lookups are needed.

Using index & Using where: an index is used and everything the query needs is available in the index columns, so no lookup back to the base table is required.

From these four descriptions you can see how they differ. Some readers may wonder which is better, "Using index & Using where" or "Using index condition". From the explanations above, the former is clearly better: it never has to go back to the base table, so it should be faster.

Below is an answer I found on Stack Overflow:

Stack Overflow link: https://stackoverflow.com/questions/28759576/mysql-using-index-condition-vs-using-where-using-index

 

2. Testing and verification

(1) Create a userinfo table; the original post showed its column definitions in a screenshot.
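Since the screenshot is not reproduced here, a minimal schema consistent with the tests below might look like this (the column names and the secondary index are my assumptions):

CREATE TABLE userinfo (
    id      INT NOT NULL AUTO_INCREMENT,
    name    VARCHAR(50) NOT NULL,
    age     INT NOT NULL,
    address VARCHAR(100),
    PRIMARY KEY (id),
    KEY idx_name_age (name, age)
) ENGINE=InnoDB;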

 

(2) Query tests

(2.1) Testing Using index

A covering index is used and there is no WHERE condition, so Extra shows only Using index.
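With the assumed schema above, a covering query with no WHERE clause should show something like:

EXPLAIN SELECT name, age FROM userinfo;
-- key: idx_name_age, Extra: Using index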

(2.2) Testing Using index & Using where

A covering index is used, but this time with a WHERE condition, so Using where appears alongside Using index.
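For example (still assuming the hypothetical schema), filtering on a non-leading column of the index scans the covering index but needs a server-side filter:

EXPLAIN SELECT name, age FROM userinfo WHERE age = 20;
-- key: idx_name_age, Extra: Using where; Using index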

(2.3) Testing Using index condition

Since I am running MySQL 5.5, the plan shows Using where; on 5.6 and later the same query should show Using index condition instead. Personally, I think Using index condition is essentially an optimization of this Using where case. If you have a newer version handy, you can write a small demo and test it yourself.
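A query of this shape (my assumption, matching the schema above) illustrates the version difference: SELECT * forces a lookup back to the base table, and the age condition can be pushed down on 5.6+:

EXPLAIN SELECT * FROM userinfo WHERE name LIKE 'T%' AND age > 20;
-- 5.5:  Extra: Using where
-- 5.6+: Extra: Using index condition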

All of the above is my personal understanding; if I have gotten anything wrong, feel free to point it out.

Single slave – multiple master MySQL replication

The best way to achieve that would be a real backup solution… but if you do it the way you describe, define one slave instance per master. That way you stay flexible; for example, if any change is needed you could even move one or more of the slave instances to another machine without any impact on the other slaves/masters…

EDIT – as per comments:

For a description of how to set up multiple instances of MySQL on the same machine, see for example:

http://dev.mysql.com/doc/refman/5.1/en/multiple-servers.html
http://www.ducea.com/2009/01/19/running-multiple-instances-of-mysql-on-the-same-machine/
Multiple MySQL instances on a single machine
This even keeps you flexible enough to run different MySQL versions in parallel (identical per slave/master combination)…
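As a rough sketch of the multi-instance approach, mysqld_multi reads numbered groups from my.cnf; all ports, paths and server-ids below are placeholders:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
# slave instance replicating master A
port      = 3307
socket    = /var/lib/mysql1/mysql.sock
datadir   = /var/lib/mysql1
server-id = 101

[mysqld2]
# slave instance replicating master B
port      = 3308
socket    = /var/lib/mysql2/mysql.sock
datadir   = /var/lib/mysql2
server-id = 102

Start and stop the instances with mysqld_multi start 1,2 and mysqld_multi stop 1,2.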

What MySQL partitioning is really for

Many people say that partitioning a MySQL table improves query performance; for example, partitioning by primary key id can narrow the range a query has to scan.
Others say that partitioning matters more for data management (cleanup). For example, to keep only three months of live data, you can partition on a time column and clear out anything older than three months by simply dropping the partition. So, experts: where does partitioning deliver the bigger win, and how should it be used?

Both claims are right, and they are actually the same thing: partitioning physically separates the data.
Think of it this way:
You write a diary every day in a notebook with unlimited pages. One day you want to reread a particular entry, and you have to flip through the notebook page by page (a sequential scan). As the diary grows, this gets maddeningly slow, so to speed up your searches you finally add a date-based table of contents to the front of the thick notebook (building an index). That works until the diary grows so large that even leafing through the table of contents takes a long time, at which point you split the big notebook into smaller ones by month (horizontal partitioning), each with its own table of contents (a local index). Now your diary is organized scientifically and searching it is easy.
At the same time you discover another benefit: if one day you decide a certain month's younger self was too embarrassing and you want to destroy those entries before anyone sees them, you just burn that one small notebook (DROP PARTITION), instead of laboriously tearing a few dozen pages out of the big one, which is both hard to do (DELETE is inefficient) and disturbs anyone reading the other entries (table locks).
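In SQL, the analogy maps to something like this hypothetical monthly-partitioned table:

CREATE TABLE diary (
    id         BIGINT NOT NULL AUTO_INCREMENT,
    created_at DATETIME NOT NULL,
    content    TEXT,
    PRIMARY KEY (id, created_at)   -- the partition key must be part of every unique key
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p201501 VALUES LESS THAN (TO_DAYS('2015-02-01')),
    PARTITION p201502 VALUES LESS THAN (TO_DAYS('2015-03-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- "burning the small notebook": drop a month of data instantly
ALTER TABLE diary DROP PARTITION p201501;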

Locating why the ibdata1 file keeps growing

InnoDB tables can be stored in two ways:

First, the shared-tablespace layout: the data and indexes of all tables are stored together in one shared tablespace, which therefore grows rapidly and is hard to shrink afterwards.

Second, the file-per-table layout (what RDS currently uses): each table gets its own tablespace holding its data and indexes, and that space is relatively easy to reclaim.

Either way, InnoDB always keeps a system shared tablespace (ibdata1), which mainly stores the data dictionary, undo entries, the insert buffer and the doublewrite buffer.

The usual reasons for this system tablespace to grow are:

a. A transaction left uncommitted for a long time while the database takes heavy updates, inserts and deletes, causing InnoDB to create large amounts of undo to maintain consistent reads. You can check for active transactions with SHOW ENGINE INNODB STATUS\G:

SHOW ENGINE INNODB STATUS\G

---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
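On 5.5 (or 5.1 with the InnoDB plugin) the same information is also available from information_schema, which makes long-running transactions easy to spot:

SELECT trx_id, trx_state, trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_sec,
       trx_mysql_thread_id
FROM   information_schema.INNODB_TRX
ORDER  BY trx_started;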

b. In MySQL 5.1, undo purge shares a thread with the master thread, so if the History list length shown in SHOW ENGINE INNODB STATUS\G keeps growing, purge has probably become the bottleneck.
MySQL 5.5 therefore split purge out into its own thread, and the purge behaviour can be configured:

| innodb_purge_threads    | 0  |
| innodb_max_purge_lag    | 0  |
| innodb_max_purge_size   | 0  |
| innodb_purge_batch_size | 20 |
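On 5.5 the dedicated purge thread is enabled in my.cnf; the value is read at startup and cannot be changed at runtime:

[mysqld]
# 0 = purge runs inside the master thread (pre-5.5 behaviour)
# 1 = one dedicated purge thread; 5.6 allows more
innodb_purge_threads = 1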

How can you inspect what makes up ibdata1?

The open-source community provides a tool, innodb_space (part of the innodb_ruby project), that gives a clean breakdown of ibdata1's composition (it requires Ruby with the bindata gem):

innodb_space -f /tmp/ibdata1 space-page-type-summary

type                count       percent     description

UNDO_LOG            4430725     80.61       Undo log            

ALLOCATED           1035701     18.84       Freshly allocated

INODE               28348       0.52        File segment inode

INDEX               722         0.01        B+Tree index

IBUF_BITMAP         334         0.01        Insert buffer bitmap

XDES                333         0.01        Extent descriptor

IBUF_FREE_LIST      152         0.00        Insert buffer free list

SYS                 3           0.00        System internal

TRX_SYS             1           0.00        Transaction system header

FSP_HDR             1           0.00        File space header

You can see that the bulk of ibdata1 is undo log. Having identified what the file is made of, we can take the following measures:

We advised the user to upgrade from 5.1 to 5.5, where the dedicated purge thread reclaims undo log quickly; because the migration is logical, ibdata1 is rebuilt along the way, which reduces the space used.

In 5.6 you can configure separate undo tablespace files, keeping undo out of ibdata1 altogether.
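A sketch of the relevant 5.6 settings; they must be set when the data directory is first initialized, and the path and count here are only examples:

[mysqld]
innodb_undo_directory   = /data/undo
innodb_undo_tablespaces = 4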

MySQL – Convert all Tables of one or more Database(s) between MyISAM and InnoDB

To convert all DBNAME’s InnoDB Tables to MyISAM

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' ENGINE=MyISAM;') 
FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'DBNAME' AND ENGINE = 'InnoDB' AND TABLE_TYPE = 'BASE TABLE';

To convert all DBNAME’s MyISAM Tables to InnoDB

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' ENGINE=InnoDB;') 
FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'DBNAME' AND ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE';

To convert all InnoDB Tables to MyISAM (all databases)

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' engine=MyISAM;') 
FROM information_schema.TABLES WHERE ENGINE = 'InnoDB';

To convert all MyISAM Tables to InnoDB (all databases)

SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' engine=InnoDB;') 
FROM information_schema.TABLES WHERE ENGINE = 'MyISAM';
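Note these SELECTs only generate the ALTER statements; you still have to execute the output. One way is to pipe it straight back into the client (this sketch assumes credentials are supplied via ~/.my.cnf so mysql does not prompt twice):

mysql -N -B -e "SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' ENGINE=InnoDB;') FROM information_schema.TABLES WHERE ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE'" | mysql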

MySQL: any way to import a huge (32 GB) SQL dump faster?

You definitely need to change the following

innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0

Why these settings?

innodb_buffer_pool_size will cache frequently read data
innodb_log_buffer_size : Larger buffer reduces write I/O to Transaction Logs
innodb_log_file_size : Larger log file reduces checkpointing and write I/O
innodb_write_io_threads : Service Write Operations to .ibd files. According to MySQL Documentation on Configuring the Number of Background InnoDB I/O Threads, each thread can handle up to 256 pending I/O requests. Default for MySQL is 4, 8 for Percona Server. Max is 64.
innodb_flush_log_at_trx_commit
In the event of a crash, both 0 and 2 can lose one second of data.
The tradeoff is that both 0 and 2 increase write performance.
I chose 0 over 2 because 0 flushes the InnoDB Log Buffer to the Transaction Logs (ib_logfile0, ib_logfile1) once per second, with or without a commit, whereas 2 flushes the InnoDB Log Buffer only on commit. There are other advantages to setting 0, mentioned by @jynus, a former Percona instructor.
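These settings belong in the [mysqld] section of my.cnf and require a restart; note that on 5.5 and earlier, changing innodb_log_file_size also requires shutting down cleanly and moving the old ib_logfile* files out of the way before restarting:

[mysqld]
innodb_buffer_pool_size        = 4G
innodb_log_buffer_size         = 256M
innodb_log_file_size           = 1G
innodb_write_io_threads        = 16
innodb_flush_log_at_trx_commit = 0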
Restart mysql like this

service mysql restart --innodb-doublewrite=0

This disables the InnoDB Double Write Buffer

Import your data. When done, restart mysql normally

service mysql restart

This reenables the InnoDB Double Write Buffer

Give it a Try !!!

SIDE NOTE: You should upgrade to 5.6.21 for the latest security patches.

Checking and repairing mysql replication automatically

MySQL replication has been known to break easily, for any of a multitude of potential reasons.

Sometimes replication can even break because an erroneous query was executed on the master server.

With all the potential issues that may break replication, we thought it prudent to write an automated check script that runs on a schedule (i.e. every 10-15 minutes), checks the slave status, reports any errors and attempts to repair replication.

We have built this script to exit and send mail alerts if any step of the checking and repairing process fails or generates an error in itself.

The script also generates a lock file to ensure that no more than one check process can run at any given time. We feel this script is best suited to scenarios with remote MySQL slaves, for example; adding this extra layer may make replication more reliable.

The repair process is simply 3 MySQL commands:

stop slave;
reset slave;
slave start;

The above directives assume that you have a master.info with the MySQL master server information statically set, so no CHANGE MASTER commands have to be executed. Resetting the slave clears the error and resumes replication, and all the queries missed while replication was down should be queued up and applied after it starts again.

Here is the script :

#!/bin/sh
# Slave replication auto recovery and alert
# Star Dot Hosting 2012

currentmonth=`date "+%Y-%m-%d"`
lock_file=/tmp/slave_alert.lck

echo "MySQL Replication Check Script" > /var/log/replication_check.log 2>&1
echo "------------------------------" >> /var/log/replication_check.log 2>&1
echo "$currentmonth" >> /var/log/replication_check.log 2>&1
echo "" >> /var/log/replication_check.log 2>&1


# Alert function (defined before its first use below)
function mail_alert () {
        cat /var/log/replication_check.log | mail -s "Replication check errors!" your@email.com
}

# Check if lock file exists
if [ -f "$lock_file" ];
then
        echo "Lock file exists! Possible conflict!" >> /var/log/replication_check.log 2>&1
        mail_alert
        exit 1
else
        touch "$lock_file"
fi

# Fix slave
function fix_replication () {
        mysql -u root --password="XXXXX" -Bse "stop slave" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Stop slave succeeded..." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert
                exit 1
        fi
        mysql -u root --password="XXXXX" -Bse "reset slave" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Reset slave succeeded..." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert

                exit 1
        fi
        mysql -u root --password="XXXXX" -Bse "slave start" >> /var/log/replication_check.log 2>&1
        if [ "$?" -eq 0 ];
        then
                echo "Slave start succeeded." >> /var/log/replication_check.log 2>&1
        else
                echo "Slave recover function failed" >> /var/log/replication_check.log 2>&1
                mail_alert
                exit 1
        fi
}


# Check if Slave is running properly
Slave_IO_Running=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`
Slave_SQL_Running=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`
Last_error=`mysql -u root --password="XXXXX" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`


# If no values are returned, slave is not running
if [ -z "$Slave_IO_Running" -o -z "$Slave_SQL_Running" ];
then
        echo "Replication is not configured or you do not have the required access to MySQL"
        exit 1
fi

# If everythings running, remove lockfile if it exists and exit
if [ "$Slave_IO_Running" == 'Yes' ] && [ "$Slave_SQL_Running" == 'Yes' ];
then
        rm "$lock_file"
        echo "Replication slave is running" >> /var/log/replication_check.log 2>&1
        echo "Removed Alert Lock" >> /var/log/replication_check.log 2>&1
elif [ "$Slave_SQL_Running" == 'No' ] || [ "$Slave_IO_Running" == 'No' ];
then
        echo "Replication thread not running on server `hostname -s`!" >> /var/log/replication_check.log 2>&1
        echo "Last Error:" $Last_error >> /var/log/replication_check.log 2>&1
        fix_replication
        mail_alert
        rm "$lock_file"
fi

echo "Script complete!" >> /var/log/replication_check.log 2>&1
exit 0
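To run the check every 10 minutes as suggested above, a crontab entry along these lines would work (the script path is an assumption):

*/10 * * * * /usr/local/sbin/replication_check.sh >/dev/null 2>&1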

MySQL Got fatal error 1236: causes and solutions

Source: http://blog.itpub.net/22664653/viewspace-1714269/

I. Preface
MySQL replication, a core high-availability feature, synchronizes data from a master to its slaves. Anyone maintaining replication clusters as a full-time MySQL DBA has, I believe, run into errors/alerts of the form "Got fatal error 1236 from master when reading data from binary log". This article collects the common variants of error 1236 and gives the corresponding fixes; where it falls short, readers are of course welcome to point out corrections.

II. Common error 1236 messages
2.1 A log event exceeds max_allowed_packet

  Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master'; the start event position from 'mysql-bin.006730' at 290066246, the last event was read from '/u01/my3309/log/mysql-bin.006730'

Cause
This class of error is related to max_allowed_packet. max_allowed_packet caps the size of a single binlog event that one statement can generate during replication, and its value must be a multiple of 1024. The common causes are:
1. The parameter is configured differently on master and slave, with the master's value larger than the slave's, so a binlog event sent from the master exceeds the master's or the slave's max_allowed_packet.
2. A heavy write burst on the master, e.g. load data or insert into … select statements, produces a huge transaction.
When the master sends the slave a packet larger than the slave's max_allowed_packet, the slave fails to receive it and reports "log event entry exceeded max_allowed_packet".
Solution
Make sure master and slave use the same configuration, then try raising the value:

  set global max_allowed_packet = 1*1024*1024*1024;
  stop slave;
  start slave;

Additionally, the slave_max_allowed_packet parameter introduced in 5.6 controls the largest packet the slave can receive; it is normally larger than, and overrides, max_allowed_packet, which reduces replication breakage caused by the problem above.
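A quick way to verify that the two sides agree is to run the same check on both master and slave:

SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';  -- 5.6+ only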

2.2 The slave cannot find the binlog file on the master

  Got fatal error 1236 from master when reading data from binary log:

Cause
This error occurs when the slave's I/O thread pulls logs from the master and finds that the first file recorded in the master's mysql_bin.index does not exist. It usually means the slave was stopped for quite a long time and, by the time replication was restarted, the required binlog could no longer be found on the master; or some retention setting caused the master's binlogs to be deleted, so the slave cannot fetch the binlog file it needs.
Solution
1. To avoid data loss, rebuild the slave from scratch.
2. Review the master's binlog purge policy, whether deletion is time-based or driven by disk usage.
Never delete binlog files with rm -fr, because that does not update the entries recorded in mysql_bin.index. When purging binlogs, make sure the master still retains the file shown as Relay_Master_Log_File in the slave's show slave status.
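The safe ways to trim binlogs all go through the server, so mysql_bin.index stays consistent, for example:

PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;
-- or let the server expire them automatically:
SET GLOBAL expire_logs_days = 7;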

2.3 Disk full on the master, log truncated

  Got fatal error 1236 from master when reading data from binary log: 'binlog truncated in the middle of event; consider out of disk space on master'; the start event position from 'mysql-bin.006730' at 290066434, the last event was read from '/u01/my3309/log/mysql-bin.006730'

Cause
This error relates to disk space on the master and the sync_binlog setting. With sync_binlog=N where N is not 1, MySQL only syncs the binary log to disk every N writes; if the disk holding the logs fills up, MySQL cannot write a log event out completely and the binlog event is truncated. When the slave reads that binlog file, it reports "binlog truncated in the middle of event".
With sync_binlog at its default of 0, MySQL never syncs the binary log to disk itself and relies on the operating system to flush it, just like any other file.
With sync_binlog=N (N>0), MySQL uses fdatasync() to sync the binary log to disk after every N writes.
Solution
Re-point the slave at the master's next available binlog file, starting from the beginning of that file:

  stop slave;
  change master to master_log_file='mysql-bin.006731', master_log_pos=4;
  start slave;

2.4 The master loses power and the slave requests an impossible position

  120611 20:39:38 [ERROR] Error reading packet from server: Client requested master to start replication from impossible position (server_errno=1236)
  120611 20:39:38 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position', Error_code: 1236
  120611 20:39:38 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000143', position 664526789

Cause
This problem is also tied to sync_binlog=N with N not equal to 1. It mostly appears after an abnormal host crash, e.g. a failed disk or RAID card, or a sudden power loss, which prevented the binlog from being synced to disk in time. The slave then requests a position that does not exist in the master's binlog file, typically one larger than the file's end position.
Solution
1. Re-point the slave at the master's next available binlog file, starting from the beginning of that file:

  stop slave;
  change master to master_log_file='mysql-bin.000144', master_log_pos=4;
  start slave;

2. Set sync_binlog=1 on both master and slave; note that setting it to 1 costs some write performance.

III. Related reading

MySQL Replication: 'Got fatal error 1236' causes and cures
The official documentation on max_allowed_packet
Percona MySQL's max_binlog_files feature
A short analysis of sync_binlog and innodb_flush_log_at_trx_commit

Remove duplicate rows in MySQL

A really easy way to do this is to add a UNIQUE index on the 3 columns. When you write the ALTER statement, include the IGNORE keyword. Like so:

ALTER IGNORE TABLE jobs
ADD UNIQUE INDEX idx_name (site_id, title, company);

This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this…
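Note that ALTER IGNORE TABLE was deprecated in MySQL 5.6.17 and removed in 5.7, so on newer servers you can delete the duplicates first and then add the unique index normally; this sketch assumes the table has an id primary key:

DELETE j1 FROM jobs j1
JOIN jobs j2
  ON  j1.site_id = j2.site_id
  AND j1.title   = j2.title
  AND j1.company = j2.company
  AND j1.id > j2.id;        -- keep the row with the smallest id

ALTER TABLE jobs ADD UNIQUE INDEX idx_name (site_id, title, company);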

MySQL master-slave replication setup

Enable read-only mode on the master:

FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = 1;

Check the master's status:
show master status;    -- shows the master's current binlog file and position

Back up the master database:

mysqldump -h <master_host> -ujbbwbu -p jbbwb > jbbwb.sql
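Alternatively, mysqldump can record the master's coordinates inside the dump; --single-transaction avoids holding the global read lock for InnoDB tables (host and credentials reuse this example setup):

mysqldump -h 192.168.78.128 -ujbbwbu -p --master-data=2 --single-transaction jbbwb > jbbwb.sql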

Import the backup file into the slave database:

source jbbwb.sql

Set the replication coordinates on the slave:
change master to
    master_host='192.168.78.128',
    master_user='test',
    master_password='MyPass1!',
    master_log_file='mysql-bin.000001',
    master_log_pos=1244;

start slave;

show slave status\G

Turn off read-only mode:
SET GLOBAL read_only = 0;
UNLOCK TABLES;