
MongoDB Master-Slave Replication

Reference: http://www.lanceyan.com/tech/mongodb/mongodb_cluster_1.html
Reference: MongoDB学习札记第六篇之主从复制
Reference: 高可用的MongoDB集群

In the era of big data, traditional relational databases must solve high-concurrency reads and writes, efficient storage of massive data, high scalability, and high availability in order to keep delivering good service. It is precisely because of these problems that NoSQL was born.

NoSQL offers these advantages:

Large data volumes: huge amounts of data can be stored on cheap servers, easily getting past the single-table size limits of traditional MySQL.

High scalability: NoSQL drops the relational features of relational databases, so it scales out horizontally with ease, instead of the old pattern of only being able to scale up vertically.

High performance: NoSQL fetches data through simple key-value access, which is very fast. NoSQL caches are also record-level, a fine-grained cache, so at this level NoSQL performs much better.

Flexible data model: NoSQL does not require fields to be defined in advance for the data you want to store; you can store custom data formats at any time. In a relational database, adding or dropping columns is painful, and on a very large table adding a column is simply a nightmare.

High availability: NoSQL can provide a highly available architecture without hurting performance much. For example, MongoDB can quickly be configured for high availability with mongos and sharding.

In NoSQL databases, most queries are key-value (key, value) lookups. MongoDB sits between relational and non-relational databases, and among non-relational databases it is the one that most resembles a relational database. It supports an object-oriented-style query language that covers almost all the functionality of single-table queries in a relational database, and it also supports indexes on the data. This is very convenient: developers coming from a relational database can query MongoDB in an almost SQL-like way, so the learning cost drops a lot, and with a thin wrapper over the driver API they can barely feel the difference between MongoDB and a relational database. MongoDB also claims that a highly available, scalable distributed cluster can be set up quickly. There are many setup articles online, but we still had to look up and adjust quite a few things, so I am recording the steps from our own deployment for future reference. Let's see how to build this thing step by step.
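As a rough illustration of that SQL-like feel (using a hypothetical users collection, not one from these notes), a typical SQL single-table query maps to the mongo shell like this:

// SQL: SELECT name, age FROM users WHERE age > 25 ORDER BY age DESC LIMIT 10
db.users.find({age: {$gt: 25}}, {name: 1, age: 1}).sort({age: -1}).limit(10)

// and, as mentioned above, an index can back both the filter and the sort
db.users.ensureIndex({age: -1})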

I. Standalone MongoDB instance. This setup is only suitable for simple development use, not for production, because if the single node goes down the whole data service goes down with it.
Although it cannot be used in production, this mode is quick to set up and start, and lets you operate the database with the MongoDB shell commands. The steps for installing a single-node MongoDB on Linux are listed below.

II. Master-slave mode. This is widely used with MySQL: with a two-machine setup, when the master goes down the slave can take over and keep serving, so this mode offers much better availability than a single node.
Here is how to build a MongoDB master-slave pair step by step:
1. Prepare two machines, 43.241.222.110 and 42.96.194.60. 43.241.222.110 will be the master and 42.96.194.60 the slave.
2. Download the MongoDB installation package on each machine. Create the directory /home/data/mongodb on 43.241.222.110 and /home/data/mongodb on 42.96.194.60.
The configuration file on both MongoDB servers is /etc/mongod.cnf:

storage:
    dbPath: "/data/home/data/mongodb"
    engine: wiredTiger
    wiredTiger:
        engineConfig:
            cacheSizeGB: 20
        collectionConfig:
            blockCompressor: snappy

systemLog:
   destination: file
   path: "/var/log/mongodb/mongod.log"
   logAppend: true

processManagement:
    fork: true

net:
    #bindIp: 127.0.0.1,43.241.222.110,42.96.194.60  # binding the public IPs did not work here (LAN addresses seem fine); instead, allow specific IPs to reach port 27017 via iptables
    #bindIp: 0.0.0.0
    port: 27017
    # Enable the HTTP interface (Defaults to port 28017).
    http:
        enabled: false

3. Start the MongoDB master process on 43.241.222.110. Note the trailing "-master" parameter, which marks this as the master node. Start the master:

sudo mongod -f /etc/mongod.cnf -master

4. Start the MongoDB slave process on 42.96.194.60. The key settings are -source 43.241.222.110:27017, which points at the master's IP and port, and the -slave flag, which marks this as the slave:

sudo mongod -f /etc/mongod.cnf -slave -source 43.241.222.110:27017

The log output is as follows — success!

# the log shows the slave replicating data from the master

[replslave] repl: from host:43.241.222.110:27017

With master-slave replication configured, let's test it.

Open a shell connection on the master node:

mongo 127.0.0.1

# create the test database

use test;

Insert a document into the testdb collection:

> db.testdb.insert({"test1":"testval1"})

Query testdb to see whether the insert succeeded:

> db.testdb.find();
{ "_id" : ObjectId("5284e5cb1f4eb215b2ecc463"), "test1" : "testval1" }

Check the data on the slave host:

mongo 127.0.0.1

List the current databases:

> show dbs;
  local   0.203125GB
  test    0.203125GB

use test;
db.testdb.find();
{ "_id" : ObjectId("5284e5cb1f4eb215b2ecc463"), "test1" : "testval1" }

The query shows the data has already been replicated. Looking at the log again confirms that the slave did indeed sync the data from the master.

Check the replication status:

> db.printReplicationInfo();
this is a slave, printing slave replication info.
source:   43.241.222.110:27017
  syncedTo: Sun Nov 17 2013 16:04:02 GMT+0800 (CST) = -54 secs ago (-0.01hrs)

With that, the master-slave MongoDB setup is complete.

Reference: MongoDB学习札记第六篇之主从复制

MongoDB 3.0 replica set upgrade and its pitfalls


Problem analysis

Start the mongo shell and run the following command:

>show dbs

which fails with the following error:

E QUERY Error: listDatabases failed: 
        { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
at Error (<anonymous>)

Check the replica set status:

> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }

Every time you log in to the mongo shell, the following command has to be run:

> rs.slaveOK()
2015-09-24T17:03:16.528+0800 E QUERY    TypeError: Object function () { return "try rs.help()"; } has no method 'slaveOK'    at (shell):1:4
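Note (not from the referenced article): the shell helper is spelled rs.slaveOk(), with a lowercase k; it simply sets the slaveOk flag on the current connection, which can also be done directly:

> rs.slaveOk()
> db.getMongo().setSlaveOk()    // equivalent, and independent of the rs helpers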

Reference: MongoDB 3.0 复制集升级操作以及坑 http://dev.guanghe.tv/2015/05/mongdb3-upgrade-bug.html

// check the status

> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }

Check the runtime configuration:

> use admin
switched to db admin
> db.runCommand("getCmdLineOpts")
{
    "argv" : [
        "mongod",
        "-f",
        "/etc/mongod.cnf",
        "-slave",
        "-source",
        "43.241.222.110:27017"
    ],
    "parsed" : {
        "config" : "/etc/mongod.cnf",
        "net" : {
            "http" : {
                "enabled" : false
            },
            "port" : 27017
        },
        "processManagement" : {
            "fork" : true
        },
        "slave" : true,
        "source" : "43.241.222.110:27017",
        "storage" : {
            "dbPath" : "/data/home/data/mongodb",
            "engine" : "wiredTiger",
            "wiredTiger" : {
                "collectionConfig" : {
                    "blockCompressor" : "snappy"
                },
                "engineConfig" : {
                    "cacheSizeGB" : 20
                }
            }
        },
        "systemLog" : {
            "destination" : "file",
            "logAppend" : true,
            "path" : "/var/log/mongodb/mongod.log"
        }
    },
    "ok" : 1
}


MongoDB Replica Set

Reference: MongoDB学习札记第八篇之Replica Set 实战
Reference: MongoDB 3.0 复制集集群搭建及安全认证实践
Reference: MongoDB 3.0 WT引擎参考配置文件
Reference: MongoDB 副本集的原理、搭建、应用

Prepare three MongoDB servers, 43.241.222.110, 42.96.194.60 and 45.63.50.67, to build the replica set.

If you are not sure how to install MongoDB, see my earlier study notes.

The configuration file on all three MongoDB servers is /etc/mongod.cnf:

storage:
    dbPath: "/data/home/data/RSMongoDB"
    engine: wiredTiger
    wiredTiger:
        engineConfig:
            cacheSizeGB: 20
        collectionConfig:
            blockCompressor: snappy

systemLog:
   destination: file
   path: "/var/log/mongodb/mongod.log"
   logAppend: true

processManagement:
    fork: true

net:
    #bindIp: 127.0.0.1,43.241.222.110,42.96.194.60  # binding the public IPs did not work here (LAN addresses seem fine); instead, allow specific IPs to reach port 27017 via iptables
    #bindIp: 0.0.0.0
    port: 27017
    # Enable the HTTP interface (Defaults to port 28017).
    http:
        enabled: false

Step one:

Run the following on each of the three machines (all of them):

root@ubuntu:/usr/local/mongodb# mongod -f /etc/mongod.cnf --replSet CoamReSet

Note the --replSet parameter here: it specifies the name of the replica set, and every replica set has a unique name.

Once all three machines have started, log in to one of the mongod servers with the mongo client; here I log in to the 43.241.222.110 machine:

root@ubuntu:~# mongo

After logging in, switch to the admin database so that we can configure the replica set. The configuration looks like this:

> use admin
switched to db admin

Define the replica set configuration variable. The _id here ("CoamReSet") must match the value passed to the --replSet parameter above.

> config = {_id:"CoamReSet",members:[
         {_id:0,host:"43.241.222.110:27017","priority":1},
         {_id:1,host:"42.96.194.60:27017","priority":1},
         {_id:2,host:"45.63.50.67:27017","priority":0}]}

Output:

{
    "_id" : "CoamReSet",
    "members" : [
        {
            "_id" : 0,
            "host" : "43.241.222.110:27017"
        },
        {
            "_id" : 1,
            "host" : "42.96.194.60:27017"
        },
        {
            "_id" : 2,
            "host" : "45.63.50.67:27017"
        }
    ]
}

Initialize the replica set configuration:

> rs.initiate(config);

Success is reported:

{ "ok" : 1 }

Note: a newly created replica set requires that the databases contain no data; otherwise initialization fails with an error like this:

{
    "ok" : 0,
    "errmsg" : "'42.96.194.60:27017' has data already, cannot initiate set.",
    "code" : 110
}
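One common way around this error (a sketch, not taken from the original article): keep the data on only one member, initiate the set there with a single-member config, wipe the dbPath on the other machines, and then rs.add() them so they perform an initial sync from the primary.

// on 43.241.222.110, the member whose data should be kept
> rs.initiate({_id: "CoamReSet", members: [{_id: 0, host: "43.241.222.110:27017"}]})
// after emptying /data/home/data/RSMongoDB on the other two machines and restarting mongod there
> rs.add("42.96.194.60:27017")
> rs.add("45.63.50.67:27017")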

So: first define the config object, then initialize it with rs.initiate(config). Once these two steps are done, the replica set configuration is initialized. In this CoamReSet replica set we defined three hosts (note that the _id given in the configuration must be the same as the value of the --replSet parameter used when starting mongod).

After a little while, MongoDB elects the Primary and the Secondary nodes for us. From the mongo client we can then check the replica set state with rs.status():

CoamReSet:SECONDARY> rs.status()
{
    "set" : "CoamReSet",
    "date" : ISODate("2015-10-01T09:45:34.924Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "43.241.222.110:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 167,
            "optime" : Timestamp(1443692712, 1),
            "optimeDate" : ISODate("2015-10-01T09:45:12Z"),
            "electionTime" : Timestamp(1443692715, 1),
            "electionDate" : ISODate("2015-10-01T09:45:15Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "42.96.194.60:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 21,
            "optime" : Timestamp(1443692712, 1),
            "optimeDate" : ISODate("2015-10-01T09:45:12Z"),
            "lastHeartbeat" : ISODate("2015-10-01T09:45:33.166Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T09:45:33.564Z"),
            "pingMs" : 16,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "45.63.50.67:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 21,
            "optime" : Timestamp(1443692712, 1),
            "optimeDate" : ISODate("2015-10-01T09:45:12Z"),
            "lastHeartbeat" : ISODate("2015-10-01T09:45:32.949Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T09:45:33.973Z"),
            "pingMs" : 183,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

Here, name is the host; health indicates whether the host is healthy (0/1); state tells whether it is the primary (1), a secondary (2), or an unreachable node.
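A small mongo shell sketch for pulling just these fields out of rs.status():

// print name / state / health for every member
rs.status().members.forEach(function (m) {
    print(m.name + "  " + m.stateStr + "  health=" + m.health);
})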

If the information above is displayed correctly, the whole replica set cluster is up. Now let's verify whether it really backs up data automatically, recovers from failure automatically, and elects a new Primary automatically.
The experiment goes like this:

Insert data on the Primary node (43.241.222.110), then query it on the two Secondary nodes 42.96.194.60 and 45.63.50.67 to verify that it replicates correctly.

First, insert data on the Primary node (43.241.222.110):

CoamReSet:PRIMARY> use test
switched to db test
CoamReSet:PRIMARY> show collections
CoamReSet:PRIMARY> db.guids.insert({"name":"replica set","author":"webinglin"})
WriteResult({ "nInserted" : 1 })
CoamReSet:PRIMARY> exit
bye

Query the data on the 45.63.50.67 Secondary node to verify that it was replicated correctly:

root@ubuntu:~# mongo --host 45.63.50.67
MongoDB shell version: 3.0.3
connecting to: 45.63.50.67:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
CoamReSet:SECONDARY> show dbs
2015-06-09T17:13:49.138-0700 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
CoamReSet:SECONDARY> use test
switched to db test
CoamReSet:SECONDARY> db.guids.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
CoamReSet:SECONDARY> rs.slaveOk()
CoamReSet:SECONDARY> rs.slaveOk()
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:SECONDARY> show collections()
2015-06-09T17:14:24.219-0700 E QUERY    Error: don't know how to show [collections()]
    at Error (<anonymous>)
    at shellHelper.show (src/mongo/shell/utils.js:733:11)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/utils.js:733
CoamReSet:SECONDARY> show collections
guids
system.indexes
CoamReSet:SECONDARY> exit
bye

Query the data on the 42.96.194.60 Secondary node to verify that it was replicated correctly:

root@ubuntu:~# mongo --host 42.96.194.60
MongoDB shell version: 3.0.3
connecting to: 42.96.194.60:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
CoamReSet:SECONDARY> rs.slaveOk()
CoamReSet:SECONDARY> show dbs
local  1.078GB
test   0.078GB
CoamReSet:SECONDARY> use test
switched to db test
CoamReSet:SECONDARY> show collections
guids
system.indexes
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:SECONDARY> exit
bye

At this point the whole exercise shows that our cluster deployment succeeded: data is replicated correctly.
Next we verify another scenario: after the Primary terminates abnormally (43.241.222.110), will the other two Secondary nodes automatically elect a new Primary?
For this experiment we stop the mongod service on the 43.241.222.110 machine, then connect to either 42.96.194.60 or 45.63.50.67 and check the cluster state with rs.status().

On the Primary node (43.241.222.110), check that mongod is running with ps -e | grep mongod, then kill the mongod process with killall mongod or kill -15 <pid>:

root@ubuntu:~# ps -e | grep mongod
 3279 pts/0    00:00:19 mongod
root@ubuntu:~# killall mongod

Log in to the secondary 42.96.194.60 and check the server state:

root@ubuntu:~# mongo --host 42.96.194.60
MongoDB shell version: 3.0.3
connecting to: 42.96.194.60:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
CoamReSet:SECONDARY> rs.status()
{
    "set" : "CoamReSet",
    "date" : ISODate("2015-10-01T10:09:57.981Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "43.241.222.110:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:09:56.226Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:07:54.850Z"),
            "pingMs" : 18,
            "lastHeartbeatMessage" : "Failed attempt to connect to 43.241.222.110:27017; couldn't connect to server 43.241.222.110:27017 (43.241.222.110), connection attempt failed",            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "42.96.194.60:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1577,
            "optime" : Timestamp(1443693679, 1),
            "optimeDate" : ISODate("2015-10-01T10:01:19Z"),
            "electionTime" : Timestamp(1443694079, 1),
            "electionDate" : ISODate("2015-10-01T10:07:59Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "45.63.50.67:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1481,
            "optime" : Timestamp(1443693679, 1),
            "optimeDate" : ISODate("2015-10-01T10:01:19Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:09:56.891Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:09:57.259Z"),
            "pingMs" : 440,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
CoamReSet:SECONDARY> exit
bye

From the output above we can see that once the original Primary is stopped (43.241.222.110 shut down), the MongoDB replica set elects a new Primary (the 42.96.194.60 machine).

To verify that the newly elected Primary behaves normally, we test data replication once more: connect to the new 42.96.194.60 primary, delete some of the existing data, and then check on the 45.63.50.67 SECONDARY whether the data was deleted there as well.

Connect to the 42.96.194.60 primary and delete the data:

root@ubuntu:~# mongo --host 42.96.194.60
CoamReSet:PRIMARY> use test
switched to db test
CoamReSet:PRIMARY> show collections
guids
CoamReSet:PRIMARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("560d046f9548a3e5bb8b03c0"), "name" : "zhang", "author" : "yafei..." }
CoamReSet:PRIMARY> db.guids.remove({name:"zhang"})
WriteResult({ "nRemoved" : 1 })
CoamReSet:PRIMARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:PRIMARY> exit
bye

Then check on the 45.63.50.67 SECONDARY whether the data was deleted there as well:

root@ubuntu:~# mongo --host 45.63.50.67
CoamReSet:SECONDARY> rs.slaveOk()
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:SECONDARY> exit
bye

The test shows that the newly elected Primary also works normally, which completes the testing of our MongoDB replica set cluster.

Dynamically adding and removing nodes

Before starting this experiment, restart mongod on the 43.241.222.110 machine and connect to it with the mongo client to verify that its data has also been synced.
After logging in to 43.241.222.110 we find that the data has indeed been synced, and the 43.241.222.110 node has become a Secondary:

root@ubuntu:~# mongo
CoamReSet:SECONDARY> rs.slaveOk()
CoamReSet:SECONDARY> show dbs
local   0.000GB
syData  0.000GB
test    0.000GB
CoamReSet:SECONDARY> use test
switched to db test
CoamReSet:SECONDARY> show collections
guids
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:SECONDARY> rs.status()
{
    "set" : "CoamReSet",
    "date" : ISODate("2015-10-01T10:21:53.366Z"),
    "myState" : 2,
    "syncingTo" : "42.96.194.60:27017",
    "members" : [
        {
            "_id" : 0,
            "name" : "43.241.222.110:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 68,
            "optime" : Timestamp(1443694583, 1),
            "optimeDate" : ISODate("2015-10-01T10:16:23Z"),
            "syncingTo" : "42.96.194.60:27017",
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "42.96.194.60:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 66,
            "optime" : Timestamp(1443694583, 1),
            "optimeDate" : ISODate("2015-10-01T10:16:23Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:21:53.026Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:21:52.588Z"),
            "pingMs" : 16,
            "electionTime" : Timestamp(1443694079, 1),
            "electionDate" : ISODate("2015-10-01T10:07:59Z"),
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "45.63.50.67:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 66,
            "optime" : Timestamp(1443694583, 1),
            "optimeDate" : ISODate("2015-10-01T10:16:23Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:21:51.818Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:21:53.006Z"),
            "pingMs" : 311,
            "syncingTo" : "42.96.194.60:27017",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
CoamReSet:SECONDARY> exit
bye

Log in to the 42.96.194.60 Primary node and remove a member from the replica set with rs.remove(); here we again remove 43.241.222.110. After removing it, we also insert new data on the 42.96.194.60 primary.

CoamReSet:PRIMARY> rs.remove("43.241.222.110:27017")
{ "ok" : 1 }
CoamReSet:PRIMARY> rs.status()
{
    "set" : "CoamReSet",
    "date" : ISODate("2015-10-01T10:24:35.061Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "42.96.194.60:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2455,
            "optime" : Timestamp(1443695065, 1),
            "optimeDate" : ISODate("2015-10-01T10:24:25Z"),
            "electionTime" : Timestamp(1443694079, 1),
            "electionDate" : ISODate("2015-10-01T10:07:59Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "45.63.50.67:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2358,
            "optime" : Timestamp(1443695065, 1),
            "optimeDate" : ISODate("2015-10-01T10:24:25Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:24:33.256Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:24:34.102Z"),
            "pingMs" : 334,
            "configVersion" : 2
        }
    ],
    "ok" : 1
}
CoamReSet:PRIMARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
CoamReSet:PRIMARY> db.guids.insert({"name":"remove one node dync"})
WriteResult({ "nInserted" : 1 })
CoamReSet:PRIMARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("560d0a1d7d9deef6a0890b1f"), "name" : "remove one node dync" }
CoamReSet:PRIMARY> exit
bye

After removing the 43.241.222.110 node we inserted new data on the Primary. Now, without stopping mongod on 43.241.222.110, connect to its mongod with the mongo client and inspect the data:

root@ubuntu:~# mongo --host 43.241.222.110
> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
> exit
bye

The result shows that the document {name:"remove one node dync"} newly inserted on the primary (42.96.194.60) was not replicated to 43.241.222.110, which had been removed from the replica set.

To make the result more conclusive, let's check whether 45.63.50.67 did sync the data:

root@ubuntu:~# mongo --host 45.63.50.67
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
CoamReSet:SECONDARY> exit
bye

The data shows that 45.63.50.67 replicated the document {"name":"remove one node dync"} newly added on the 42.96.194.60 primary, which proves that dynamically removing a node from the replica set works. So how do we dynamically add a node to the replica set?

The principle is the same, but the call becomes rs.add("43.241.222.110:27017"):

CoamReSet:PRIMARY> rs.add("43.241.222.110:27017");
{ "ok" : 1 }
CoamReSet:PRIMARY> rs.status()
{
    "set" : "CoamReSet",
    "date" : ISODate("2015-10-01T10:32:45.284Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "42.96.194.60:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2945,
            "optime" : Timestamp(1443695560, 1),
            "optimeDate" : ISODate("2015-10-01T10:32:40Z"),
            "electionTime" : Timestamp(1443694079, 1),
            "electionDate" : ISODate("2015-10-01T10:07:59Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "45.63.50.67:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2849,
            "optime" : Timestamp(1443695560, 1),
            "optimeDate" : ISODate("2015-10-01T10:32:40Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:32:43.660Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:32:44.079Z"),
            "pingMs" : 452,
            "syncingTo" : "42.96.194.60:27017",
            "configVersion" : 2
        },
        {
            "_id" : 3,
            "name" : "43.241.222.110:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2,
            "optime" : Timestamp(1443695560, 1),
            "optimeDate" : ISODate("2015-10-01T10:32:40Z"),
            "lastHeartbeat" : ISODate("2015-10-01T10:32:44.739Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-01T10:32:43.300Z"),
            "pingMs" : 16,
            "syncingTo" : "42.96.194.60:27017",
            "configVersion" : 3
        }
    ],
    "ok" : 1
}
CoamReSet:PRIMARY> exit
bye

The rs.status() output shows that the 43.241.222.110 node has successfully joined the replica set again. After rejoining, the data that was inserted on the primary while it was out of the set should in theory be synced over: it was not replicated right after the removal, but now that the node is back in the replica set it should be. Here is the result:

root@ubuntu:~# mongo
CoamReSet:SECONDARY> db.guids.find()
{ "_id" : ObjectId("560d03099548a3e5bb8b03bf"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("560d0a1d7d9deef6a0890b1f"), "name" : "remove one node dync" }
CoamReSet:SECONDARY> exit
bye

The result shows that dynamic addition also works correctly: dynamically adding the 43.241.222.110 node back into the replica set keeps the data in sync.


Note

rs.add("host:port") and rs.remove("host:port") must be executed on the Primary node.

The add method can also take a document, which lets you set more options for the Secondary being added, such as priority: 0, or priority: 0, hidden: true, or priority: 0, hidden: true, arbiterOnly: true. The full member document looks like this (see the sketch after the field list):

{
  _id: <int>,
  host: <string>,
  arbiterOnly: <boolean>,
  buildIndexes: <boolean>,
  hidden: <boolean>,
  priority: <number>,
  tags: <document>,
  slaveDelay: <int>,
  votes: <number>
}
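For example (a sketch; the arbiter's port below is made up), adding a member with a document instead of a plain "host:port" string looks like this:

// add 43.241.222.110 back as a hidden member that can never become primary
CoamReSet:PRIMARY> rs.add({_id: 3, host: "43.241.222.110:27017", priority: 0, hidden: true})

// an arbiter can also be added with the dedicated helper
CoamReSet:PRIMARY> rs.addArb("45.63.50.67:27018")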

For access control on the replica set, see the security part of the master-slave replication setup: likewise, generate a keyFile with openssl and then point mongod at it with the keyFile setting when starting the servers.
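A minimal sketch of that keyFile setup (the file path is an assumption, not from these notes); the same key file must be copied to every member:

# generate a shared key and restrict its permissions
openssl rand -base64 756 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile

# then reference it from /etc/mongod.cnf on each node:
# security:
#     keyFile: "/etc/mongodb-keyfile"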


You can also add the following to the configuration on all three hosts of the replica set:

replication:
    # oplog size (MB)
    oplogSizeMB: 20
    # replica set name
    replSetName: CoamReSet

Replica set priorities

We wanted the 43.241.222.110 node to be the Primary by default, but since no priority was set in the initial configuration, after that server was restarted 43.241.222.110 automatically became a secondary while 42.96.194.60 became the primary, which broke the application's connections.
So, on the current primary (42.96.194.60), we manually switch the primary back to 43.241.222.110 from the shell:

CoamReSet:PRIMARY> rs.status()
{
        "set" : "CoamReSet",
        "date" : ISODate("2015-10-09T11:28:28.585Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "42.96.194.60:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 614570,
                        "optime" : Timestamp(1444378212, 1),
                        "optimeDate" : ISODate("2015-10-09T08:10:12Z"),
                        "electionTime" : Timestamp(1444378334, 1),
                        "electionDate" : ISODate("2015-10-09T08:12:14Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "45.63.50.67:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-09T11:28:27.754Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-06T08:52:51.239Z"),
                        "pingMs" : 406,
                        "lastHeartbeatMessage" : "Failed attempt to connect to 45.63.50.67:27017; couldn't connect to server 45.63.50.67:27017 (45.63.50.67), connection attempt failed",
                        "configVersion" : -1
                },
                {
                        "_id" : 3,
                        "name" : "43.241.222.110:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 90994,
                        "optime" : Timestamp(1444378212, 1),
                        "optimeDate" : ISODate("2015-10-09T08:10:12Z"),
                        "lastHeartbeat" : ISODate("2015-10-09T11:28:27.532Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-09T11:28:27.299Z"),
                        "pingMs" : 15,
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

CoamReSet:PRIMARY> config = rs.conf()
{
        "_id" : "CoamReSet",
        "version" : 3,
        "members" : [
                {
                        "_id" : 1,
                        "host" : "42.96.194.60:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "45.63.50.67:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "43.241.222.110:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

Set the priority of the first member (42.96.194.60) to 1, the second member (45.63.50.67) to 0, and the third member (43.241.222.110) to 2:

CoamReSet:PRIMARY> config.members[0].priority = 1
1
CoamReSet:PRIMARY> config.members[1].priority = 0
0
CoamReSet:PRIMARY> config.members[2].priority = 2
2
CoamReSet:PRIMARY> rs.reconfig(config)
{ "ok" : 1 }
CoamReSet:PRIMARY> rs.status()
{
        "set" : "CoamReSet",
        "date" : ISODate("2015-10-09T12:03:37.679Z"),
        "myState" : 2,
        "syncingTo" : "43.241.222.110:27017",
        "members" : [
                {
                        "_id" : 1,
                        "name" : "42.96.194.60:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 616679,
                        "optime" : Timestamp(1444392119, 1),
                        "optimeDate" : ISODate("2015-10-09T12:01:59Z"),
                        "syncingTo" : "43.241.222.110:27017",
                        "configVersion" : 5,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "45.63.50.67:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-09T12:03:35.875Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-06T08:52:51.239Z"),
                        "pingMs" : 406,
                        "lastHeartbeatMessage" : "Failed attempt to connect to 45.63.50.67:27017; couldn't connect to server 45.63.50.67:27017 (45.63.50.67), connection attempt failed",
                        "configVersion" : -1
                },
                {
                        "_id" : 3,
                        "name" : "43.241.222.110:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 93103,
                        "optime" : Timestamp(1444392119, 1),
                        "optimeDate" : ISODate("2015-10-09T12:01:59Z"),
                        "lastHeartbeat" : ISODate("2015-10-09T12:03:35.817Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-09T12:03:36.764Z"),
                        "pingMs" : 17,
                        "electionTime" : Timestamp(1444390463, 1),
                        "electionDate" : ISODate("2015-10-09T11:34:23Z"),
                        "configVersion" : 5
                }
        ],
        "ok" : 1
}

We can see that 43.241.222.110 has successfully switched back to being the primary.
However, after shutting down the primary 43.241.222.110, the two remaining nodes did not elect a new primary, which effectively left the replica set without a writable node (most likely because 45.63.50.67 was still unreachable, so only one of the three voting members remained up and there was no majority to hold an election; its priority of 0 also prevents 45.63.50.67 from ever becoming primary).
MongoDB replica set priority settings:
http://docs.mongodb.org/manual/tutorial/force-member-to-be-primary/
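The referenced tutorial also describes forcing a failover without editing priorities; roughly (a sketch, and it assumes a majority of members is reachable):

// on a member that should not become primary for a while
CoamReSet:SECONDARY> rs.freeze(120)

// on the current primary: step down and stay ineligible for 120 seconds
CoamReSet:PRIMARY> rs.stepDown(120)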


MongoDB Administration Notes

MongoDB: renaming fields

Batch renames from the command line

Rename a top-level field:

CoamReSet:PRIMARY> db.TruckRoadsPath.update({},{$rename:{"Lng":"lng"}},{multi:true})
WriteResult({ "nMatched" : 2702582, "nUpserted" : 0, "nModified" : 2702582 })

Rename a nested field:

CoamReSet:PRIMARY> db.TruckRoadsPath.update({},{$rename:{"PathInfo.Speed":"PathInfo.speed"}},{multi:true})

This fails with the following error, which says documents whose PathInfo is null cannot be traversed. The update reported a failure, but checking the data showed that everything had in fact been renamed correctly:

WriteResult({
        "nMatched" : 0,
        "nUpserted" : 0,
        "nModified" : 0,
        "writeError" : {
                "code" : 16837,
                "errmsg" : "cannot use the part (PathInfo of PathInfo.Speed) to traverse the element ({PathInfo: null})"
        }
})

The correct form, which filters out documents where PathInfo is null, is:

CoamReSet:PRIMARY> db.TruckRoadsPath.update({"PathInfo":{$ne:null}},{$rename:{"PathInfo.State":"PathInfo.state"}},{multi:true})
WriteResult({ "nMatched" : 2702158, "nUpserted" : 0, "nModified" : 2702156 })

Reference: How do you query this in Mongo? (is not null)


References:
如何在MongoDB 修改字段名称,重命名字段
Mongo docs - $rename

RebarNote

Managing an Erlang project with Rebar

GitHub -

Developing and debugging ejabberd with rebar in IntelliJ IDEA 15

The ejabberd Erlang project already ships with a rebar file in its root directory.

1. On Windows, install OTP 18 under G:\Program Files\erl7.2.1
2. Add G:\Program Files\erl7.2.1 to the PATH environment variable
3. In IntelliJ IDEA, add OTP18.** as the SDK of the ejabberd project
4. On Windows, install rebar using Cygwin

$ git clone git://github.com/rebar/rebar.git
$ cd rebar
$ ./bootstrap
Recompile: src/getopt
...
Recompile: src/rebar_utils
==> rebar (compile)
Congratulations! You now have a self-contained script called "rebar" in
your current working directory. Place this script anywhere in your path
and you can use rebar to build OTP-compliant apps.

This automatically installs rebar under C:\Users\yafei\rebar

5. Configure IntelliJ IDEA to use rebar:

  1. File -> Other Settings -> Default Settings -> Erlang External Tools: set the rebar path to C:\Users\yafei\rebar\rebar.cmd

  2. File -> Settings -> Build, Execution, Deployment -> Erlang Compiler: check "Compile project with rebar".

6. Menu -> Run -> Edit Configurations -> "+" -> Erlang Rebar. In the Command field enter the tasks delete-deps clean get-deps compile and give the run configuration a name, e.g. coamerlangtask.

7. Click Run in the top-right corner of IntelliJ IDEA to start the build.


Troubleshooting

rebar unable to get dependency from github

Running just a clean compile command fails with many missing-dependency errors (*), because get-deps has to run before compile; so change the rebar command to delete-deps clean get-deps compile (see the shell invocation below).
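For reference, the same task sequence run directly from a shell (assuming rebar is on the PATH):

$ rebar delete-deps clean get-deps compile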

Reference: rebar unable to get dependency from github


References
Rebar commands
Using Intellij IDEA to write and debug Erlang code
IntelliJ IDEA official documentation
windows下使用IntelliJ IDEA的erlang编译环境搭建(含rebar工具)

MnesiaNote

Erlang Mnesia Database

Erlang Mnesia Doc

I have been learning Erlang recently. Erlang ships with a built-in distributed database called mnesia: it maps naturally onto Erlang tuples and records, has good distribution properties, and can automatically replicate data across several Erlang nodes, somewhat like a ZooKeeper node cluster.
Mnesia also supports database transactions, so it is used very frequently in Erlang applications.

To create a local mnesia database that supports inserts and queries, the following steps are needed:
1. Start an Erlang node:

#erl -mnesia dir '"d://tmp//mnesia"' -sname mynode
erl -mnesia dir '"C:\Users\yafei\DevWork\Erlang\dev\mnesia\data"' -name win@27.16.194.65 -setcookie 456789
sudo erl -mnesia dir '"/data/home/data/mnesia/"' -name ali@42.96.194.60 -setcookie 456789

Note the double quotes around the path.
The directory "/data/home/data/mnesia/" above is where mnesia stores its schema data. The directory can also be omitted, in which case mnesia keeps the schema in memory after it starts, and the table definitions created in the database are gone the next time the node restarts.
2. Next, initialize an empty mnesia database locally in the erl shell:

mnesia:create_schema([node()]).

The argument is a list of nodes, so if needed you can put several Erlang nodes in that list to create the mnesia database on all of them at the same time and form a Mnesia cluster.
3. Start the mnesia database.
Once the mnesia schema has been initialized, the database can be started:

mnesia:start().

At this point an empty mnesia database has been created and started. We can call mnesia:info(). to inspect the current state of the database; the result looks like this:

(ali@localhost)2> mnesia:info().
===> System info in version "4.13.2", debug level = none <===
opt_disc. Directory "/data/home/data/mnesia" is used.
use fallback at restart = true
running db nodes   = []
stopped db nodes   = [ali@localhost]
ok
(ali@localhost)3> mnesia:start().
ok
(ali@localhost)4> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
schema         : with 1        records occupying 408      words of mem
===> System info in version "4.13.2", debug level = none <===
opt_disc. Directory "/data/home/data/mnesia" is used.
use fallback at restart = false
running db nodes   = [ali@localhost]
stopped db nodes   = []
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema]
disc_only_copies   = []
[{ali@localhost,disc_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok

The line [{ali@localhost,disc_copies}] = [schema] shows that the database currently contains only the schema table.


Mnesia cluster

Start two nodes, one on the Aliyun host (ali) and one on the local machine (win):

sudo erl -mnesia dir '"/data/home/data/mnesia/"' -name ali@42.96.194.60 -setcookie 456789
erl -mnesia dir '"C:\Users\yafei\DevWork\Erlang\dev\mnesia\data"' -name win@27.16.194.65 -setcookie 456789

Make sure both nodes are started and that mnesia is not yet running on either, then:

mnesia:delete_schema(['win@27.16.194.65', 'ali@42.96.194.60']).
mnesia:create_schema(['win@27.16.194.65', 'ali@42.96.194.60']).

Then start mnesia on both nodes and look at the details:

(win@27.16.194.65)9> mnesia:start().
(win@27.16.194.65)9> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
schema         : with 1        records occupying 414      words of mem
===> System info in version "4.13.2", debug level = none <===
opt_disc. Directory [99,58,47,85,115,101,114,115,47,121,97,102,101,105,47,68,
101,118,87,111,114,107,47,69,114,108,97,110,103,47,100,
101,118,47,109,110,101,115,105,97,47,85,115,101,114,115,
121,97,102,101,105,68,101,118,87,111,114,107,69,114,108,
97,110,103,127,101,118,109,110,101,115,105,97,127,97,116,
97] is used.
use fallback at restart = false
running db nodes   = ['ali@42.96.194.60','win@27.16.194.65']
stopped db nodes   = []
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema]
disc_only_copies   = []
[{'ali@42.96.194.60',disc_copies},{'win@27.16.194.65',disc_copies}] = [schema]
3 transactions committed, 0 aborted, 1 restarted, 2 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok

The line running db nodes = ['ali@42.96.194.60','win@27.16.194.65'] shows that the two nodes have successfully formed a cluster.

Data tables can now be created from either node, as in the sketch below.
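For instance, a minimal sketch run from the erl shell that creates a table replicated to both nodes and then writes and reads a record (the person record and its fields are made up for illustration):

%% define the record in the shell (in a real project it would live in a .hrl file)
rd(person, {id, name, age}).

%% create the table with a disc copy on both cluster nodes
mnesia:create_table(person,
    [{attributes, record_info(fields, person)},
     {disc_copies, ['ali@42.96.194.60', 'win@27.16.194.65']}]).

%% write and read inside transactions, from either node
mnesia:transaction(fun() -> mnesia:write(#person{id = 1, name = "yafei", age = 30}) end).
mnesia:transaction(fun() -> mnesia:read(person, 1) end).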


SQL-style queries with the Erlang mnesia database

Mnesia is a distributed database management system, well suited to telecom and other Erlang applications that need continuous operation and soft real-time behavior. It is receiving more and more attention and use, but material on it is still scarce; much of what exists is just the official user guide.
The following focuses on how Mnesia can express SQL-style queries: select / insert / update / where / order by / join / limit / delete and so on.

See the example sql.erl

Note: to query with the qlc module, the file must declare -include_lib("stdlib/include/qlc.hrl"). at the top; otherwise compilation produces the warning "Warning: qlc:q/1 called, but "qlc.hrl" not included".
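As a rough sketch of what such a query looks like inside a module (reusing the hypothetical person record from the cluster sketch above; the module name is made up), here is a select with a where and an order by:

-module(sql_demo).
-export([adults_sorted/0]).

%% required for qlc:q/1, as noted above
-include_lib("stdlib/include/qlc.hrl").

-record(person, {id, name, age}).

%% roughly: SELECT * FROM person WHERE age >= 18 ORDER BY age
adults_sorted() ->
    Q = qlc:q([P || P <- mnesia:table(person), P#person.age >= 18]),
    Sorted = qlc:sort(Q, [{order, fun(A, B) -> A#person.age =< B#person.age end}]),
    {atomic, Rows} = mnesia:transaction(fun() -> qlc:e(Sorted) end),
    Rows.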


Connecting to the ejabberd mnesia database with a separate script

Cookie = 'YOUR_EJABBERD_COOKIE'. % mine was found in /var/lib/ejabberd/.erlang.cookie
EjabberdNode = 'ejabberd@127.0.0.1'. % or whatever you set as EJABBERD_NODE
erlang:set_cookie(EjabberdNode, Cookie).
net_adm:ping(EjabberdNode).
rpc:call(EjabberdNode, mnesia, system_info, [tables]).

Reference: How can I connect to ejabberd's Mnesia database with a separate script?


Auto-incrementing IDs for mnesia table records

1. Use the function:

mnesia:dirty_update_counter(unique_id, mine_result, 1).

2. Arguments: unique_id is the counter table, mine_result is the key (the table whose IDs are being counted), and 1 is the increment per call.
3. Usage:
1. Create a mnesia table with the record structure -record(unique_id, {item, uid}); this table stores the largest ID of the mine_result table.
2. Every time a new record is added to the mine_result table, obtain the new id value with:

mnesia:dirty_update_counter(unique_id,  mine_result , 1),

This takes the largest Id from the unique_id table, adds 1, writes the new value back to unique_id, and returns it.
Note: when you create the unique_id table, insert an initial row into it: the key and starting value, { mine_result, 0 }.
In an experiment, when unique_id contained no such row, concurrently inserting 100 records into mine_result was unstable;
after adding that one row, it stayed stable even with concurrency raised to 30000.
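Putting the pieces together, a minimal sketch of this counter pattern (the mine_result record fields and the module/function names are made up; only the unique_id layout and the dirty_update_counter call come from the text above):

-module(mine_ids).
-export([init/0, next_id/0, insert/1]).

-record(unique_id, {item, uid}).      %% counter table: one row per counted table
-record(mine_result, {id, data}).     %% hypothetical result table

%% create both tables and seed the counter row {mine_result, 0}
init() ->
    mnesia:create_table(unique_id,
        [{attributes, record_info(fields, unique_id)}, {disc_copies, [node()]}]),
    mnesia:create_table(mine_result,
        [{attributes, record_info(fields, mine_result)}, {disc_copies, [node()]}]),
    mnesia:dirty_write(#unique_id{item = mine_result, uid = 0}).

%% bump the stored max id for mine_result by 1 and return the new value
next_id() ->
    mnesia:dirty_update_counter(unique_id, mine_result, 1).

%% insert a new mine_result record under a freshly allocated id
insert(Data) ->
    mnesia:dirty_write(#mine_result{id = next_id(), data = Data}).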


References
wudaijun's blog - Erlang mnesia
创设Mnesia数据库
Erlang的Mnesia——为高伸缩性应用准备的数据库管理系统
[Erlang-0008][OTP] 高效指南 – 表和数据库(ets mnesia)
Erlang基础Mnesia 之应用场景
erlang Mnesia - 尚待研究

mnesia数据库使用体验
mnesia数据库优化