How many MB is 8 GB of memory?

Q: Could someone tell me how many MB an 8 GB memory card equals? (3 answers)
散淡之人8: Counting in binary, 1 GB = 1024 MB, so 8 GB = 8192 MB. Hardware manufacturers count differently: 1 GB = 1000 MB, so 8 GB = 8000 MB. In theory the card holds 8 × 1024 MB, but manufacturers use 8 × 1000, and the card itself reserves some space, so the usable capacity is a bit over 7 GB.
贾靖柔: Manufacturers and computers measure capacity differently: manufacturers use 1000 KB = 1 MB, while computers use 1024 KB = 1 MB. By the manufacturer's decimal convention, 8 GB = 8,000,000,000 bytes; since a computer counts 1 GB = 1024 MB, 1 MB = 1024 KB, and 1 KB = 1024 bytes,
the card's actual capacity is 8,000,000,000 / 1024³ ≈ 7.45 GB.
So an 8 GB card shows roughly 7.45 GB of real capacity on a computer.
And since 1 GB = 1024 MB, 7.45 GB ≈ 7.45 × 1024 ≈ 7629 MB.
一切皆有可能_2358: In theory it's 8 × 1024, but manufacturers generally use 8 × 1000, and the card itself takes up some space, so the actual usable capacity is a bit over 7 GB.
Q: For memory cards that are both U3, is there much difference between 90 MB/s and 95 MB/s?
A: If the rated figures are genuine, you basically can't feel the difference, whether the card goes into a phone, a camera, or a camcorder.
Q: Mine goes in a camcorder. The 95 MB/s card costs a third more — is it worth it?
A: If the figures are genuine, no.
Q: Help me compare these two as well: one is rated C10, the other 80x.
A: Can't tell from those labels; online, what's claimed often isn't what you get.
A: Does your camcorder take a microSD card? A full-size card is the better buy.
Q: Sometimes I move the card to my phone, so I bought the small one.
A: The speeds are close; the difference is minor.
Anything Windows can do that Linux can't is something that never needed doing!
MongoDB Notes
MongoChef, a Windows client for MongoDB
mongochef-x64.msi — a MongoDB administration tool
It can connect to both replica sets and sharded clusters, and it doesn't crash.
It needs MongoDB installed on Windows, since it calls mongo.exe.
Roles come in two kinds: 1. roles covering every database on the instance (5 of them); 2. roles scoped to a single database (3 of them).
db.createUser({user:"lyhabcd",pwd:"123456",roles:[{role:"dbOwner",db:"testdb"}]})
// db:"admin" makes the role apply to all databases; db:"dbname" scopes it to a single database
User access control
All of an instance's users are stored in the system.users collection of the admin database.
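As a sketch of both scopes (the user names, passwords, and testdb database here are illustrative): one user confined to a single database, one holding an instance-wide role stored in admin:

use testdb
db.createUser({user:"appuser", pwd:"123456", roles:[{role:"readWrite", db:"testdb"}]})        // single-database role
use admin
db.createUser({user:"monitor", pwd:"123456", roles:[{role:"readAnyDatabase", db:"admin"}]})   // instance-wide role
db.getSiblingDB("admin").system.users.find()   // both users end up recorded here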
// without logging in, no permission:
rsshard0:RECOVERING> show dbs
2015-12-15T16:35:10.504+0800 E QUERY
Error: listDatabases failed:{
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13
at Error (<anonymous>)
at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
at shellHelper.show (src/mongo/shell/utils.js:630:33)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rsshard0:RECOVERING> show dbs
2015-12-15T16:35:13.263+0800 E QUERY
Error: listDatabases failed:{
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13
at Error (<anonymous>)
at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
at shellHelper.show (src/mongo/shell/utils.js:630:33)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rsshard0:RECOVERING> exit
// edit the config file
vi /data/replset0/config/rs0.conf
journal=true
auth = true
// enable authentication
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true
// add a user holding the root role
rsshard0:PRIMARY> use admin
rsshard0:PRIMARY> db.createUser({user:"lyhabc",pwd:"123456",roles:[{role:"root",db:"admin"}]})
Successfully added user: {
"user" : "lyhabc",
"roles" : [
"role" : "root",
"db" : "admin"
/usr/local/mongodb/bin/mongod --config
/data/replset0/config/rs0.conf
mongo --port 4000 --authenticationDatabase admin
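Or authenticate from inside an already-open shell (a sketch using the user created above):

use admin
db.auth("lyhabc", "123456")   // returns 1 on success, 0 on failure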
# cat /data/replset0/log/rs0.log
2015-12-15T17:07:12.388+0800 I COMMAND
[conn38] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rsshard0", pv: 1, v: 2, from: "192.168.14.198:4000", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:142 locks:{} 19ms
2015-12-15T17:07:13.595+0800 I NETWORK
[conn37] end connection 192.168.14.221:43932 (1 connection now open)
2015-12-15T17:07:13.596+0800 I NETWORK
[initandlisten] connection accepted from 192.168.14.221:44114 #39 (2 connections now open)
2015-12-15T17:07:14.393+0800 I NETWORK
[conn38] end connection 192.168.14.198:35566 (1 connection now open)
2015-12-15T17:07:14.394+0800 I NETWORK
[initandlisten] connection accepted from 192.168.14.198:35568 #40 (2 connections now open)
2015-12-15T17:07:15.277+0800 I NETWORK
[initandlisten] connection accepted from 127.0.0.1:46271 #41 (3 connections now open)
2015-12-15T17:07:15.283+0800 I ACCESS
[conn41] SCRAM-SHA-1 authentication failed for lyhabc on admin from client 127.0.0.1 ; UserNotFound Could not find user lyhabc@admin
2015-12-15T17:07:15.291+0800 I NETWORK
[conn41] end connection 127.0.0.1:46271 (2 connections now open)
Enabling the web console
vi /data/replset0/config/rs0.conf
journal=true
rest = true
// expose additional monitoring pages in the web console
httpinterface=true
// enable the web console (served on the mongod port + 1000)
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true
db.serverStatus()
"resident" : 86,
// total physical memory currently in use, in MB
// if it exceeds system RAM, the machine has too little memory
"supported" : true,
// whether the platform reports extended memory info
"mapped" : 368,
// memory used to map the data files, in MB
// if it exceeds system RAM, swap will be needed
// the mapped space can be larger than physical memory
"extra_info" : {
"note" : "fields vary by platform",
"heap_usage_bytes" : ,
"page_faults" : 63
// number of page faults
// page faults climb when memory runs short
"activeClients" : {
"total" : 12,
// clients with operations currently active against the instance
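Individual fields can be read straight off the returned document instead of scanning the whole dump (a sketch; the field paths match the 3.0 mmapv1 output above):

var s = db.serverStatus()
print("resident MB: " + s.mem.resident)            // physical memory in use
print("mapped MB:   " + s.mem.mapped)              // data files mapped into memory
print("page faults: " + s.extra_info.page_faults)  // rises when memory runs short
print("connections: " + s.connections.current)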
MongoDB monitoring tools
mongostat --port 4000
mongotop --port 4000 --locks
mongotop --port 4000
rsshard0:SECONDARY> db.serverStatus()
"host" : "steven:4000",
"version" : "3.0.7",
"process" : "mongod",
"pid" : NumberLong(1796),
"uptime" : 63231,
"uptimeMillis" : NumberLong(),
"uptimeEstimate" : 3033,
"localTime" : ISODate("T03:26:39.707Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 2226,
"rollovers" : 0
"backgroundFlushing" : {
"flushes" : 57,
"total_ms" : 61,
"average_ms" : 1.4912,
"last_ms" : 0,
"last_finished" : ISODate("T03:26:33.960Z")
"connections" : {
"current" : 1,
"available" : 818,
"totalCreated" : NumberLong(15)
"cursors" : {
"note" : "deprecated, use server status metrics",
"clientCursors_size" : 0,
"totalOpen" : 0,
"pinned" : 0,
"totalNoTimeout" : 0,
"timedOut" : 0
"commits" : 28,
"journaledMB" : 0,
"writeToDataFilesMB" : 0,
"compression" : 0,
"commitsInWriteLock" : 0,
"earlyCommits" : 0,
"timeMs" : {
"dt" : 3010,
"prepLogBuffer" : 0,
"writeToJournal" : 0,
"writeToDataFiles" : 0,
"remapPrivateView" : 0,
"commits" : 33,
"commitsInWriteLock" : 0
"extra_info" : {
"note" : "fields vary by platform",
"heap_usage_bytes" : ,
"page_faults" : 63
// number of page faults
// page faults climb when memory runs short
"globalLock" : {
"totalTime" : NumberLong(""),
"currentQueue" : {
"total" : 0,
// if this stays large, there is a concurrency problem: locks are held too long
"readers" : 0,
"writers" : 0
"activeClients" : {
"total" : 12,
// clients with operations currently active against the instance
"readers" : 0,
"writers" : 0
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(27371),
"w" : NumberLong(21),
"R" : NumberLong(1),
"W" : NumberLong(5)
"acquireWaitCount" : {
"r" : NumberLong(1)
"timeAcquiringMicros" : {
"r" : NumberLong(135387)
"MMAPV1Journal" : {
"acquireCount" : {
"r" : NumberLong(13668),
"w" : NumberLong(45),
"R" : NumberLong(31796)
"acquireWaitCount" : {
"w" : NumberLong(4),
"R" : NumberLong(5)
"timeAcquiringMicros" : {
"w" : NumberLong(892),
"R" : NumberLong(1278323)
"Database" : {
"acquireCount" : {
"r" : NumberLong(13665),
"R" : NumberLong(7),
"W" : NumberLong(21)
"acquireWaitCount" : {
"W" : NumberLong(1)
"timeAcquiringMicros" : {
"W" : NumberLong(21272)
"Collection" : {
"acquireCount" : {
"R" : NumberLong(13490)
"Metadata" : {
"acquireCount" : {
"R" : NumberLong(1)
"oplog" : {
"acquireCount" : {
"R" : NumberLong(900)
"network" : {
"bytesIn" : NumberLong(7646),
"bytesOut" : NumberLong(266396),
"numRequests" : NumberLong(113)
"opcounters" : {
"insert" : 0,
"query" : 7,
"update" : 0,
"delete" : 0,
"getmore" : 0,
"command" : 107
"opcountersRepl" : {
"insert" : 0,
"query" : 0,
"update" : 0,
"delete" : 0,
"getmore" : 0,
"command" : 0
"repl" : {
"setName" : "rsshard0",
"setVersion" : 2,
"ismaster" : false,
"secondary" : true,
"hosts" : [
"192.168.1.155:4000",
"192.168.14.221:4000",
"192.168.14.198:4000"
"me" : "192.168.1.155:4000",
"storageEngine" : {
"name" : "mmapv1"
"writeBacksQueued" : false,
"bits" : 64,
//跑在64位系统上
"resident" : 86,
// total physical memory currently in use
"virtual" : 1477,
// total virtual memory mapped by the mongod process
"supported" : true,
// whether the platform reports extended memory info
"mapped" : 368,
// memory used to map the data files
"mappedWithJournal" : 736
// memory used to map the journal
"metrics" : {
"commands" : {
"count" : {
"failed" : NumberLong(0),
"total" : NumberLong(6)
"dbStats" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
"getLog" : {
"failed" : NumberLong(0),
"total" : NumberLong(4)
"getnonce" : {
"failed" : NumberLong(0),
"total" : NumberLong(11)
"isMaster" : {
"failed" : NumberLong(0),
"total" : NumberLong(16)
"listCollections" : {
"failed" : NumberLong(0),
"total" : NumberLong(2)
"listDatabases" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
"listIndexes" : {
"failed" : NumberLong(0),
"total" : NumberLong(2)
"ping" : {
"failed" : NumberLong(0),
"total" : NumberLong(18)
"replSetGetStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(15)
"replSetStepDown" : {
"failed" : NumberLong(1),
"total" : NumberLong(1)
"serverStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(21)
"failed" : NumberLong(0),
"total" : NumberLong(5)
"whatsmyuri" : {
"failed" : NumberLong(0),
"total" : NumberLong(4)
"cursor" : {
"timedOut" : NumberLong(0),
"open" : {
"noTimeout" : NumberLong(0),
"pinned" : NumberLong(0),
"total" : NumberLong(0)
"document" : {
"deleted" : NumberLong(0),
"inserted" : NumberLong(0),
"returned" : NumberLong(5),
"updated" : NumberLong(0)
"getLastError" : {
"wtime" : {
"num" : 0,
"totalMillis" : 0
"wtimeouts" : NumberLong(0)
"operation" : {
"fastmod" : NumberLong(0),
"idhack" : NumberLong(0),
"scanAndOrder" : NumberLong(0),
"writeConflicts" : NumberLong(0)
"queryExecutor" : {
"scanned" : NumberLong(2),
"scannedObjects" : NumberLong(5)
"record" : {
"moves" : NumberLong(0)
"repl" : {
"apply" : {
"batches" : {
"num" : 0,
"totalMillis" : 0
"ops" : NumberLong(0)
"buffer" : {
"count" : NumberLong(0),
"maxSizeBytes" : ,
"sizeBytes" : NumberLong(0)
"network" : {
"bytes" : NumberLong(0),
"getmores" : {
"num" : 0,
"totalMillis" : 0
"ops" : NumberLong(0),
"readersCreated" : NumberLong(1)
"preload" : {
"docs" : {
"num" : 0,
"totalMillis" : 0
"indexes" : {
"num" : 0,
"totalMillis" : 0
"storage" : {
"freelist" : {
"search" : {
"bucketExhausted" : NumberLong(0),
"requests" : NumberLong(0),
"scanned" : NumberLong(0)
"deletedDocuments" : NumberLong(0),
"passes" : NumberLong(56)
rsshard0:SECONDARY> use aaaa
switched to db aaaa
rsshard0:SECONDARY> db.stats()
"db" : "aaaa",
"collections" : 4,
"objects" : 7,
"avgObjSize" : 149.72,
"dataSize" : 1048,
"storageSize" : 1069056,
"numExtents" : 4,
"indexes" : 1,
"indexSize" : 8176,
"fileSize" : ,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
"dataFileVersion" : {
"major" : 4,
"minor" : 22
tail /data/replset1/log/rs1.log
netstat -lnp |grep mongo
-- export a plain-text backup
mongoexport --port 4000 --db aaaa
--collection testaaa
--out /tmp/aaaa.csv
sz /tmp/aaaa.csv
-- export a binary (BSON) backup
mongodump --port 4000 --db aaaa --out /tmp/aaaa.bak
cd /tmp/aaaa.bak/
-- tar up the backup
tar zcvf aaaa.tar.gz aaaa
sz aaaa.tar.gz
-- -C prints hex bytes alongside their corresponding characters
hexdump -C testaaa.bson
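For completeness — restoring such a dump goes through mongorestore (a sketch; the paths follow the mongodump command above):

mongorestore --port 4000 --db aaaa /tmp/aaaa.bak/aaaa
-- loads the dumped BSON files back into the aaaa database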
Resynchronizing a replica-set member (MongoDB has an internal initial sync process that keeps re-initializing synchronization): http://www.linuxidc.com/Linux/981.htm?utm_source=tuicool&utm_medium=referral
Shut the mongod process down safely, either with db.shutdownServer() from a mongo shell or with the mongod --shutdown flag on Linux:
use admin;
db.shutdownServer() ;
mongod --shutdown
_id and ObjectId in MongoDB: http://blog.csdn.net/magneto7/article/details/?utm_source=tuicool&utm_medium=referral
Copying a collection: db.runCommand({cloneCollection:"db.collection", from:"198.61.104.31:27017"});
Copying a database: db.copyDatabase("sourceDb","targetDb","198.61.104.31:27017");
Flushing to disk — writing out whatever memory has not yet reached disk while locking the database against updates (reads still work) — uses runCommand:
db.runCommand({fsync:1,async:true})
async: whether to run asynchronously
lock:1 locks the database
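The lock variant is usually driven through the shell helpers — the common pattern around file-level backups (a sketch):

db.fsyncLock()     // flush dirty data to disk and block further writes
// ... copy the data files at the filesystem level ...
db.fsyncUnlock()   // release the lock and resume writes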
Query Translator
http://www.querymongo.com/
{ "_id" : ObjectId("3e"), "id" : 2 }{ "_id" : ObjectId("be4a0c"), "id" : 1 }{ "_id" : 121, "age" : 22, "Attribute" : 33 }第一次插入数据时不需要先创建collection,插入数据会自动建立每次插入数据如果没有指定_id字段,系统会默认创建一个主键_id,ObjectId类型 更好支持分布式存储& ObjectId类型为12字节 4字节时间戳 3字节机器唯一标识 2字节进程id 3字节随机计数器每个集合都必须有一个_id字段,不管是自动生成还是指定的,而且不能重复插入语句db.users.insert({id:1},{class:1})更新语句db.people.update({country:"JP"},{$set:{country:"DDDDDDD"}},{multi:true})删除语句db.people.remove({country:"DDDDDDD"}) //不删除索引db.people.drop()&& //删除数据和索引db.people.dropIndexes()&&& //删除所有索引db.people.dropIndex()&&& //删除特定索引
db.system.indexes.find()
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 2,   // the implicit _id index plus db.people.ensureIndex({name:1},{unique:true}) — two indexes so far
    "numIndexesAfter" : 3,
    "ok" : 1
}
Addressing fields inside a MongoDB document:
the field holds an array: field.arrayIndex
the field holds an embedded document: field.someKey
the field holds an array of embedded documents: field.arrayIndex.someKey
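A sketch with a throwaway document exercising all three forms:

db.people.insert({name:"bob", tags:["red","blue"], addr:{city:"Tokyo"}, orders:[{sku:"a1"}]})
db.people.find({"tags.0": "red"})        // array element by index
db.people.find({"addr.city": "Tokyo"})   // key inside an embedded document
db.people.find({"orders.0.sku": "a1"})   // key inside an embedded document in an array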
MongoDB's two 100 ms intervals:
1. every 100 ms a checkpoint is made and the journal file is written
2. queries taking over 100 ms are recorded in the slow-query log
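The second threshold is just the profiler's default and can be adjusted per database (a sketch):

db.setProfilingLevel(1, 100)                      // level 1: record operations slower than 100 ms
db.system.profile.find().sort({ts:-1}).limit(1)   // the most recent recorded slow operation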
MongoDB's log
cat /data/mongodb/logs/mongo.log
One directory per database (directoryPerDB):
2015-10-30T05:59:12.386+0800 I JOURNAL
[initandlisten] journal dir=/data/mongodb/data/journal
2015-10-30T05:59:12.386+0800 I JOURNAL
[initandlisten] recover : no journal files present, no recovery needed
2015-10-30T05:59:12.518+0800 I JOURNAL
[durability] Durability thread started
2015-10-30T05:59:12.518+0800 I JOURNAL
[journal writer] Journal writer thread started
2015-10-30T05:59:12.521+0800 I CONTROL
[initandlisten] MongoDB starting : pid=4479 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-30T05:59:12.521+0800 I CONTROL
[initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] db version v3.0.7
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] git version: 6ce7cbe8c6b899552daddaa2e9bd
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] allocator: tcmalloc
2015-10-30T05:59:12.522+0800 I CONTROL
[initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-30T05:59:12.536+0800 I INDEX
[initandlisten] allocating new ns file /data/mongodb/data/local/local.ns, filling with zeroes...
2015-10-30T05:59:12.858+0800 I STORAGE
[FileAllocator] allocating new datafile /data/mongodb/data/local/local.0, filling with zeroes...
// files are initialized by filling with zeroes
2015-10-30T05:59:12.858+0800 I STORAGE
[FileAllocator] creating directory /data/mongodb/data/local/_tmp
2015-10-30T05:59:12.866+0800 I STORAGE
[FileAllocator] done allocating datafile /data/mongodb/data/local/local.0, size: 64MB,
took 0.001 secs
2015-10-30T05:59:12.876+0800 I NETWORK
[initandlisten] waiting for connections on port 27017
2015-10-30T05:59:14.325+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:40766 #1 (1 connection now open)
2015-10-30T05:59:14.328+0800 I NETWORK
[conn1] end connection 192.168.1.106:40766 (0 connections now open)
2015-10-30T05:59:24.339+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:40769 #2 (1 connection now open)
// accepting a connection from 192.168.1.106
2015-10-30T06:00:20.348+0800 I CONTROL
[signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2015-10-30T06:00:20.348+0800 I CONTROL
[signalProcessingThread] now exiting
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] shutdown: going to close listening sockets...
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] closing listening socket: 6
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] closing listening socket: 7
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
// communication over a unix domain socket
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] shutdown: going to flush diaglog...
2015-10-30T06:00:20.348+0800 I NETWORK
[signalProcessingThread] shutdown: going to close sockets...
2015-10-30T06:00:20.348+0800 I STORAGE
[signalProcessingThread] shutdown: waiting for fs preallocator...
2015-10-30T06:00:20.348+0800 I STORAGE
[signalProcessingThread] shutdown: final commit...
2015-10-30T06:00:20.349+0800 I JOURNAL
[signalProcessingThread] journalCleanup...
2015-10-30T06:00:20.349+0800 I JOURNAL
[signalProcessingThread] removeJournalFiles
2015-10-30T06:00:20.349+0800 I NETWORK
[conn2] end connection 192.168.1.106:40769 (0 connections now open)
2015-10-30T06:00:20.356+0800 I JOURNAL
[signalProcessingThread] Terminating durability thread ...
2015-10-30T06:00:20.453+0800 I JOURNAL
[journal writer] Journal writer thread stopped
2015-10-30T06:00:20.454+0800 I JOURNAL
[durability] Durability thread stopped
2015-10-30T06:00:20.455+0800 I STORAGE
[signalProcessingThread] shutdown: closing all files...
2015-10-30T06:00:20.457+0800 I STORAGE
[signalProcessingThread] closeAllFiles() finished
2015-10-30T06:00:20.457+0800 I STORAGE
[signalProcessingThread] shutdown: removing fs lock...
2015-10-30T06:00:20.457+0800 I CONTROL
[signalProcessingThread] dbexit:
2015-10-30T06:01:20.259+0800 I CONTROL
***** SERVER RESTARTED *****
2015-10-30T06:01:20.290+0800 I JOURNAL
[initandlisten] journal dir=/data/mongodb/data/journal
2015-10-30T06:01:20.291+0800 I JOURNAL
[initandlisten] recover : no journal files present, no recovery needed
2015-10-30T06:01:20.439+0800 I JOURNAL
[initandlisten] preallocateIsFaster=true 2.36
2015-10-30T06:01:20.544+0800 I JOURNAL
[durability] Durability thread started
2015-10-30T06:01:20.546+0800 I JOURNAL
[journal writer] Journal writer thread started
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] MongoDB starting : pid=4557 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] db version v3.0.7
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] git version: 6ce7cbe8c6b899552daddaa2e9bd
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] allocator: tcmalloc
2015-10-30T06:01:20.547+0800 I CONTROL
[initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-30T06:01:20.582+0800 I NETWORK
[initandlisten] waiting for connections on port 27017
2015-10-30T06:01:28.390+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:40798 #1 (1 connection now open)
2015-10-30T06:01:28.398+0800 I NETWORK
[conn1] end connection 192.168.1.106:40798 (0 connections now open)
2015-10-30T06:01:38.394+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:40800 #2 (1 connection now open)
2015-10-30T07:01:39.383+0800 I NETWORK
[conn2] end connection 192.168.1.106:40800 (0 connections now open)
2015-10-30T07:01:39.384+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:42327 #3 (1 connection now open)
2015-10-30T07:32:40.910+0800 I NETWORK
[conn3] end connection 192.168.1.106:42327 (0 connections now open)
2015-10-30T07:32:40.910+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:43130 #4 (2 connections now open)
2015-10-30T08:32:43.957+0800 I NETWORK
[conn4] end connection 192.168.1.106:43130 (0 connections now open)
2015-10-30T08:32:43.957+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:46481 #5 (2 connections now open)
2015-10-31T04:27:00.240+0800 I CONTROL
***** SERVER RESTARTED *****
// the server was shut down uncleanly and must recover
// (Fengsheng kicked out the machine's power cord)
2015-10-31T04:27:00.703+0800 W -
[initandlisten] Detected unclean shutdown - /data/mongodb/data/mongod.lock is not empty.
// an unclean shutdown was detected
2015-10-31T04:27:00.812+0800 I JOURNAL
[initandlisten] journal dir=/data/mongodb/data/journal
2015-10-31T04:27:00.812+0800 I JOURNAL
[initandlisten] recover begin
// mongodb starts recovery, noting the lsn
2015-10-31T04:27:01.048+0800 I JOURNAL
[initandlisten] recover lsn: 6254831
2015-10-31T04:27:01.048+0800 I JOURNAL
[initandlisten] recover /data/mongodb/data/journal/j._0
2015-10-31T04:27:01.089+0800 I JOURNAL
[initandlisten] recover skipping application of section seq:0 & lsn:6254831
2015-10-31T04:27:01.631+0800 I JOURNAL
[initandlisten] recover cleaning up
2015-10-31T04:27:01.632+0800 I JOURNAL
[initandlisten] removeJournalFiles
2015-10-31T04:27:01.680+0800 I JOURNAL
[initandlisten] recover done
2015-10-31T04:27:03.006+0800 I JOURNAL
[initandlisten] preallocateIsFaster=true 25.68
2015-10-31T04:27:04.076+0800 I JOURNAL
[initandlisten] preallocateIsFaster=true 19.9
2015-10-31T04:27:06.896+0800 I JOURNAL
[initandlisten] preallocateIsFaster=true 35.5
2015-10-31T04:27:06.896+0800 I JOURNAL
[initandlisten] preallocateIsFaster check took 5.215 secs
2015-10-31T04:27:06.896+0800 I JOURNAL
[initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.0
2015-10-31T04:27:09.005+0800 I -
[initandlisten]
File Preallocator Progress: / 30%
2015-10-31T04:27:12.236+0800 I -
[initandlisten]
File Preallocator Progress: / 41%
2015-10-31T04:27:15.006+0800 I -
[initandlisten]
File Preallocator Progress: / 66%
2015-10-31T04:27:18.146+0800 I -
[initandlisten]
File Preallocator Progress: / 76%
2015-10-31T04:27:21.130+0800 I -
[initandlisten]
File Preallocator Progress: / 84%
2015-10-31T04:27:24.477+0800 I -
[initandlisten]
File Preallocator Progress: / 94%
2015-10-31T04:28:08.132+0800 I JOURNAL
[initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.1
2015-10-31T04:28:11.904+0800 I -
[initandlisten]
File Preallocator Progress: / 58%
2015-10-31T04:28:14.260+0800 I -
[initandlisten]
File Preallocator Progress: / 64%
2015-10-31T04:28:17.335+0800 I -
[initandlisten]
File Preallocator Progress: / 74%
2015-10-31T04:28:20.440+0800 I -
[initandlisten]
File Preallocator Progress: / 80%
2015-10-31T04:28:23.274+0800 I -
[initandlisten]
File Preallocator Progress: / 85%
2015-10-31T04:28:26.638+0800 I -
[initandlisten]
File Preallocator Progress: / 94%
2015-10-31T04:29:01.643+0800 I JOURNAL
[initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.2
2015-10-31T04:29:04.032+0800 I -
[initandlisten]
File Preallocator Progress: / 41%
2015-10-31T04:29:09.015+0800 I -
[initandlisten]
File Preallocator Progress: / 52%
2015-10-31T04:29:12.181+0800 I -
[initandlisten]
File Preallocator Progress: / 77%
2015-10-31T04:29:15.125+0800 I -
[initandlisten]
File Preallocator Progress: / 89%
2015-10-31T04:29:34.755+0800 I JOURNAL
[durability] Durability thread started
2015-10-31T04:29:34.755+0800 I JOURNAL
[journal writer] Journal writer thread started
2015-10-31T04:29:35.029+0800 I CONTROL
[initandlisten] MongoDB starting : pid=1672 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] **
We suggest setting it to 'never'
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] db version v3.0.7
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] git version: 6ce7cbe8c6b899552daddaa2e9bd
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] allocator: tcmalloc
2015-10-31T04:29:35.031+0800 I CONTROL
[initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-31T04:29:36.869+0800 I NETWORK
[initandlisten] waiting for connections on port 27017
2015-10-31T04:39:39.671+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:3134 #1 (1 connection now open)
2015-10-31T04:39:40.042+0800 I COMMAND
[conn1] command admin.$cmd command: isMaster { isMaster: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:178 locks:{} 229ms
2015-10-31T04:39:40.379+0800 I NETWORK
[conn1] end connection 192.168.1.106:3134 (0 connections now open)
2015-10-31T04:40:10.117+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:3137 #2 (1 connection now open)
2015-10-31T04:40:13.357+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:3138 #3 (2 connections now open)
2015-10-31T04:40:13.805+0800 I COMMAND
[conn3] command local.$cmd command: usersInfo { usersInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:49 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 304ms
2015-10-31T04:49:30.223+0800 I NETWORK
[conn2] end connection 192.168.1.106:3137 (1 connection now open)
2015-10-31T04:49:30.223+0800 I NETWORK
[conn3] end connection 192.168.1.106:3138 (0 connections now open)
2015-10-31T04:56:27.271+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:4335 #4 (1 connection now open)
2015-10-31T04:56:29.449+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:4336 #5 (2 connections now open)
2015-10-31T04:58:17.514+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:4356 #6 (3 connections now open)
2015-10-31T05:02:55.219+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:4902 #7 (4 connections now open)
2015-10-31T05:03:57.954+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:4907 #8 (5 connections now open)
2015-10-31T05:10:25.905+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:5064 #9 (6 connections now open)
2015-10-31T05:16:00.026+0800 I NETWORK
[conn7] end connection 192.168.1.106:4902 (5 connections now open)
2015-10-31T05:16:00.101+0800 I NETWORK
[conn8] end connection 192.168.1.106:4907 (4 connections now open)
2015-10-31T05:16:00.163+0800 I NETWORK
[conn9] end connection 192.168.1.106:5064 (3 connections now open)
2015-10-31T05:26:28.837+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:5654 #10 (4 connections now open)
2015-10-31T05:26:28.837+0800 I NETWORK
[conn4] end connection 192.168.1.106:4335 (2 connections now open)
2015-10-31T05:26:30.969+0800 I NETWORK
[conn5] end connection 192.168.1.106:4336 (2 connections now open)
2015-10-31T05:26:30.973+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:5655 #11 (3 connections now open)
2015-10-31T05:56:30.336+0800 I NETWORK
[conn10] end connection 192.168.1.106:5654 (2 connections now open)
2015-10-31T05:56:30.337+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:6153 #12 (3 connections now open)
2015-10-31T05:56:32.457+0800 I NETWORK
[conn11] end connection 192.168.1.106:5655 (2 connections now open)
2015-10-31T05:56:32.458+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:6154 #13 (4 connections now open)
2015-10-31T06:26:31.837+0800 I NETWORK
[conn12] end connection 192.168.1.106:6153 (2 connections now open)
2015-10-31T06:26:31.838+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:6514 #14 (3 connections now open)
2015-10-31T06:26:33.961+0800 I NETWORK
[conn13] end connection 192.168.1.106:6154 (2 connections now open)
2015-10-31T06:26:33.962+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:6515 #15 (4 connections now open)
2015-10-31T06:27:09.518+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:6563 #16 (4 connections now open)
2015-10-31T06:29:57.407+0800 I INDEX
[conn16] allocating new ns file /data/mongodb/data/testlyh/testlyh.ns, filling with zeroes...
2015-10-31T06:29:57.846+0800 I STORAGE
[FileAllocator] allocating new datafile /data/mongodb/data/testlyh/testlyh.0, filling with zeroes...
2015-10-31T06:29:57.847+0800 I STORAGE
[FileAllocator] creating directory /data/mongodb/data/testlyh/_tmp
2015-10-31T06:29:57.871+0800 I STORAGE
[FileAllocator] done allocating datafile /data/mongodb/data/testlyh/testlyh.0, size: 64MB,
took 0.003 secs
2015-10-31T06:29:57.890+0800 I COMMAND
[conn16] command testlyh.$cmd command: create { create: "temporary" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, MMAPV1Journal: { acquireCount: { w: 6 } }, Database: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 483ms
2015-10-31T06:29:57.894+0800 I COMMAND
[conn16] CMD: drop testlyh.temporary
2015-10-31T06:45:06.955+0800 I NETWORK
[conn16] end connection 192.168.1.106:6563 (3 connections now open)
2015-10-31T06:56:33.323+0800 I NETWORK
[conn14] end connection 192.168.1.106:6514 (2 connections now open)
2015-10-31T06:56:33.324+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:7692 #17 (3 connections now open)
2015-10-31T06:56:35.461+0800 I NETWORK
[conn15] end connection 192.168.1.106:6515 (2 connections now open)
2015-10-31T06:56:35.462+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:7693 #18 (4 connections now open)
2015-10-31T07:13:30.230+0800 I NETWORK
[initandlisten] connection accepted from 127.0.0.1:51696 #19 (4 connections now open)
2015-10-31T07:21:06.715+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:8237 #20 (5 connections now open)
2015-10-31T07:21:32.193+0800 I INDEX
[conn6] build index on: local.people properties: { v: 1, unique: true, key: { name: 1.0 }, name: "name_1", ns: "local.people" }
// building an index
2015-10-31T07:21:32.193+0800 I INDEX
building index using bulk method
// the index is built with the bulk-insert method
2015-10-31T07:21:32.194+0800 I INDEX
[conn6] build index done.
scanned 36 total records. 0 secs
2015-10-31T07:26:34.826+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:8328 #21 (6 connections now open)
2015-10-31T07:26:34.827+0800 I NETWORK
[conn17] end connection 192.168.1.106:7692 (4 connections now open)
2015-10-31T07:26:36.962+0800 I NETWORK
[conn18] end connection 192.168.1.106:7693 (4 connections now open)
2015-10-31T07:26:36.963+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:8329 #22 (6 connections now open)
2015-10-31T07:51:08.214+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9202 #23 (6 connections now open)
2015-10-31T07:51:08.214+0800 I NETWORK
[conn20] end connection 192.168.1.106:8237 (4 connections now open)
2015-10-31T07:56:36.327+0800 I NETWORK
[conn21] end connection 192.168.1.106:8328 (4 connections now open)
2015-10-31T07:56:36.328+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9310 #24 (6 connections now open)
2015-10-31T07:56:38.450+0800 I NETWORK
[conn22] end connection 192.168.1.106:8329 (4 connections now open)
2015-10-31T07:56:38.452+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9313 #25 (5 connections now open)
2015-10-31T08:03:56.823+0800 I NETWORK
[conn25] end connection 192.168.1.106:9313 (4 connections now open)
2015-10-31T08:03:58.309+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9470 #26 (5 connections now open)
2015-10-31T08:03:58.309+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9471 #27 (6 connections now open)
2015-10-31T08:03:58.313+0800 I NETWORK
[conn26] end connection 192.168.1.106:9470 (5 connections now open)
2015-10-31T08:03:58.314+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9469 #28 (6 connections now open)
2015-10-31T08:03:58.315+0800 I NETWORK
[conn27] end connection 192.168.1.106:9471 (5 connections now open)
2015-10-31T08:03:58.317+0800 I NETWORK
[conn28] end connection 192.168.1.106:9469 (4 connections now open)
2015-10-31T08:04:04.852+0800 I NETWORK
[conn19] end connection 127.0.0.1:51696 (3 connections now open)
2015-10-31T08:04:05.944+0800 I NETWORK
[conn23] end connection 192.168.1.106:9202 (2 connections now open)
2015-10-31T08:04:06.215+0800 I NETWORK
[conn24] end connection 192.168.1.106:9310 (1 connection now open)
2015-10-31T08:04:09.233+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9531 #29 (2 connections now open)
2015-10-31T08:04:09.233+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9530 #30 (3 connections now open)
2015-10-31T08:04:09.233+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:9532 #31 (4 connections now open)
2015-10-31T08:34:18.767+0800 I NETWORK
[conn29] end connection 192.168.1.106:9531 (3 connections now open)
2015-10-31T08:34:18.767+0800 I NETWORK
[conn30] end connection 192.168.1.106:9530 (3 connections now open)
2015-10-31T08:34:18.769+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10157 #32 (3 connections now open)
2015-10-31T08:34:18.769+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10158 #33 (4 connections now open)
2015-10-31T08:34:18.771+0800 I NETWORK
[conn31] end connection 192.168.1.106:9532 (3 connections now open)
2015-10-31T08:34:18.774+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10159 #34 (4 connections now open)
2015-10-31T08:36:23.662+0800 I NETWORK
[conn33] end connection 192.168.1.106:10158 (3 connections now open)
2015-10-31T08:36:23.933+0800 I NETWORK
[conn6] end connection 192.168.1.106:4356 (2 connections now open)
2015-10-31T08:36:24.840+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10238 #35 (3 connections now open)
2015-10-31T08:36:24.840+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10239 #36 (4 connections now open)
2015-10-31T08:36:24.844+0800 I NETWORK
[conn36] end connection 192.168.1.106:10239 (3 connections now open)
2015-10-31T08:36:24.845+0800 I NETWORK
[conn35] end connection 192.168.1.106:10238 (2 connections now open)
2015-10-31T08:36:28.000+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10279 #37 (3 connections now open)
2015-10-31T08:36:28.004+0800 I NETWORK
[conn37] end connection 192.168.1.106:10279 (2 connections now open)
2015-10-31T08:36:32.751+0800 I NETWORK
[conn32] end connection 192.168.1.106:10157 (1 connection now open)
2015-10-31T08:36:32.756+0800 I NETWORK
[conn34] end connection 192.168.1.106:10159 (0 connections now open)
2015-10-31T08:36:35.835+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10339 #38 (1 connection now open)
2015-10-31T08:36:35.837+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10341 #39 (2 connections now open)
2015-10-31T08:36:35.837+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:10340 #40 (3 connections now open)
2015-10-31T09:06:45.368+0800 I NETWORK
[conn39] end connection 192.168.1.106:10341 (2 connections now open)
2015-10-31T09:06:45.370+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:12600 #41 (3 connections now open)
2015-10-31T09:06:45.371+0800 I NETWORK
[conn40] end connection 192.168.1.106:10340 (2 connections now open)
2015-10-31T09:06:45.371+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:12601 #42 (4 connections now open)
2015-10-31T09:06:45.380+0800 I NETWORK
[conn38] end connection 192.168.1.106:10339 (2 connections now open)
2015-10-31T09:06:45.381+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:12602 #43 (4 connections now open)
2015-10-31T09:23:54.705+0800 I NETWORK
[initandlisten] connection accepted from 127.0.0.1:51697 #44 (4 connections now open)
2015-10-31T09:25:07.727+0800 I INDEX
[conn44] allocating new ns file /data/mongodb/data/test/test.ns, filling with zeroes...
2015-10-31T09:25:08.375+0800 I STORAGE
[FileAllocator] allocating new datafile /data/mongodb/data/test/test.0, filling with zeroes...
2015-10-31T09:25:08.375+0800 I STORAGE
[FileAllocator] creating directory /data/mongodb/data/test/_tmp
2015-10-31T09:25:08.378+0800 I STORAGE
[FileAllocator] done allocating datafile /data/mongodb/data/test/test.0, size: 64MB,
took 0.001 secs
2015-10-31T09:25:08.386+0800 I WRITE
[conn44] insert test.users query: { _id: ObjectId('3e'), id: 1.0 } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 659ms
2015-10-31T09:25:08.386+0800 I COMMAND
[conn44] command test.$cmd command: insert { insert: "users", documents: [ { _id: ObjectId('3e'), id: 1.0 } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 660ms
2015-10-31T09:26:09.405+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:13220 #45 (5 connections now open)
2015-10-31T09:36:46.873+0800 I NETWORK
[conn41] end connection 192.168.1.106:12600 (4 connections now open)
2015-10-31T09:36:46.874+0800 I NETWORK
[conn42] end connection 192.168.1.106:12601 (3 connections now open)
2015-10-31T09:36:46.875+0800 I NETWORK
[conn43] end connection 192.168.1.106:12602 (2 connections now open)
2015-10-31T09:36:46.875+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:13498 #46 (3 connections now open)
2015-10-31T09:36:46.876+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:13499 #47 (4 connections now open)
2015-10-31T09:36:46.876+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:13500 #48 (5 connections now open)
2015-10-31T09:43:52.490+0800 I INDEX
[conn45] build index on: local.people properties: { v: 1, key: { country: 1.0 }, name: "country_1", ns: "local.people" }
2015-10-31T09:43:52.490+0800 I INDEX
building index using bulk method
2015-10-31T09:43:52.491+0800 I INDEX
[conn45] build index done.
scanned 36 total records. 0 secs
2015-10-31T09:51:32.977+0800 I INDEX
[conn45] build index on: local.people properties: { v: 1, key: { country: 1.0, name: 1.0 }, name: "country_1_name_1", ns: "local.people" }
// building a compound index
2015-10-31T09:51:32.977+0800 I INDEX
building index using bulk method
2015-10-31T09:51:32.977+0800 I INDEX
[conn45] build index done.
scanned 36 total records. 0 secs
2015-10-31T09:59:49.802+0800 I NETWORK
[conn44] end connection 127.0.0.1:51697 (4 connections now open)
2015-10-31T10:06:48.357+0800 I NETWORK
[conn47] end connection 192.168.1.106:13499 (3 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:14438 #49 (5 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:14439 #50 (5 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK
[conn48] end connection 192.168.1.106:13500 (4 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK
[conn46] end connection 192.168.1.106:13498 (4 connections now open)
2015-10-31T10:06:48.359+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:14440 #51 (5 connections now open)
2015-10-31T10:12:15.409+0800 I INDEX
[conn45] build index on: local.users properties: { v: 1, key: { Attribute: 1.0 }, name: "Attribute_1", ns: "local.users" }
2015-10-31T10:12:15.409+0800 I INDEX
building index using bulk method
2015-10-31T10:12:15.409+0800 I INDEX
[conn45] build index done.
scanned 35 total records. 0 secs
2015-10-31T10:28:27.422+0800 I COMMAND
[conn45] CMD: dropIndexes local.people
// dropping indexes
2015-11-25T15:25:23.248+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:23227 #76 (4 connections now open)
2015-11-25T15:25:23.247+0800 I NETWORK
[conn73] end connection 192.168.1.106:21648 (2 connections now open)
2015-11-25T15:25:36.226+0800 I NETWORK
[conn75] end connection 192.168.1.106:21659 (2 connections now open)
2015-11-25T15:25:36.227+0800 I NETWORK
[conn74] end connection 192.168.1.106:21658 (1 connection now open)
2015-11-25T15:25:36.227+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:23236 #77 (2 connections now open)
2015-11-25T15:25:36.227+0800 I NETWORK
[initandlisten] connection accepted from 192.168.1.106:23237 #78 (3 connections now open)
Replica set setup steps
# run on each of the three machines
mkdir -p /data/db_rs/data/rs_0
mkdir -p /data/db_rs/data/rs_1
mkdir -p /data/db_rs/data/rs_2
mkdir -p /data/db_rs/logs
touch /data/db_rs/logs/rs_0.log
touch /data/db_rs/logs/rs_1.log
touch /data/db_rs/logs/rs_2.log
mkdir -p /data/db_rs/configs_rs
vi /data/db_rs/configs_rs/rs.0.conf
vi /data/db_rs/configs_rs/rs.1.conf
vi /data/db_rs/configs_rs/rs.2.conf
dbpath=/data/db_rs/data/rs_0
logpath=/data/db_rs/logs/rs_0.log
pidfilepath=/usr/local/mongodb/mongo.pid
port=27017
logappend=true
journal=true
oplogSize=2048
smallfiles=true
#auth = true
replSet=dbset
------------------------------------------------
dbpath=/data/db_rs/data/rs_1
logpath=/data/db_rs/logs/rs_1.log
pidfilepath=/usr/local/mongodb/mongo.pid
port=27017
logappend=true
journal=true
oplogSize=2048
smallfiles=true
#auth = true
replSet=dbset
--------------------------------------------------
dbpath=/data/db_rs/data/rs_2
logpath=/data/db_rs/logs/rs_2.log
pidfilepath=/usr/local/mongodb/mongo.pid
port=27017
logappend=true
journal=true
oplogSize=2048
smallfiles=true
#auth = true
replSet=dbset
--------------------------------------------------
#三台机器都启动mongodb
/usr/local/mongodb/bin/mongod --config
/data/db_rs/configs_rs/rs.0.conf
/usr/local/mongodb/bin/mongod --config
/data/db_rs/configs_rs/rs.1.conf
/usr/local/mongodb/bin/mongod --config
/data/db_rs/configs_rs/rs.2.conf
# pick one of the three machines to become the primary, then log in to mongodb there
# switch to the admin database
# define the replica-set config variable; _id:"dbset" must match replSet in the config files above.
config = { _id:"dbset", members:[
{_id:0,host:"192.168.1.155:27017"},
{_id:1,host:"192.168.14.221:27017"},
{_id:2,host:"192.168.14.198:27017"}]
# initialize the replica-set configuration
The mongodb instance where the initiate command runs becomes the replica set's primary node.
rs.initiate(config);
# mongodb reads and writes on the primary by default; secondaries refuse reads until explicitly allowed:
repset:SECONDARY> db.getMongo().setSlaveOk();
# the data can now be seen replicated onto the secondary:
repset:SECONDARY> db.testdb.find();
# check the state of the cluster's members
rs.status();
"set" : "dbset",
"date" : ISODate("T03:02:49.054Z"),
"myState" : 1,
"members" : [
"_id" : 0,
"name" : "192.168.1.155:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 232543,
"optime" : Timestamp(, 497),
// compare oplog.rs timestamps
"optimeDate" : ISODate("T10:22:52Z"),
"lastHeartbeat" : ISODate("T03:02:47.763Z"),
"lastHeartbeatRecv" : ISODate("T03:02:48.864Z"),
"pingMs" : 0,
"configVersion" : 1
"_id" : 1,
"name" : "192.168.14.221:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 234774,
"optime" : Timestamp(, 497),
// compare oplog.rs timestamps
"optimeDate" : ISODate("T10:22:52Z"),
"electionTime" : Timestamp(, 1),
"electionDate" : ISODate("T11:07:25Z"),
"configVersion" : 1,
"self" : true
"_id" : 2,
"name" : "192.168.14.198:27017",
"health" : 1,
"state" : 2,
// state 7 would indicate an arbiter
"stateStr" : "SECONDARY",
"uptime" : 230122,
"optime" : Timestamp(, 497),
// compare oplog.rs timestamps
"optimeDate" : ISODate("T10:22:52Z"),
"lastHeartbeat" : ISODate("T03:02:47.046Z"),
"lastHeartbeatRecv" : ISODate("T03:02:48.738Z"),
"pingMs" : 1,
"configVersion" : 1
// view the configuration
cfg = rs.conf()
cfg.members[0].priority = 2
cfg.members[1].priority = 1
rs.reconfig(cfg)
cfg = rs.conf()
cfg.members[0].votes = 0
rs.reconfig(cfg)
Strip members[0] of its vote in elections.
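The same cfg-editing pattern reaches the other member options, for instance hiding a member from clients (a sketch; a hidden member must also carry priority 0):

cfg = rs.conf()
cfg.members[2].priority = 0
cfg.members[2].hidden = true
rs.reconfig(cfg)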
dbset:PRIMARY> db.system.replset.find();  // rs.conf() simply reads its information from db.system.replset
{
    "_id": "dbset",
    "version": 1,
    "members": [
        { "_id": 0, "host": "192.168.1.155:27017", "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": 1, "tags": { }, "slaveDelay": 0, "votes": 1 },
        { "_id": 1, "host": "192.168.14.221:27017", "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": 1, "tags": { }, "slaveDelay": 0, "votes": 1 },
        { "_id": 2, "host": "192.168.14.198:27017", "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": 1, "tags": { }, "slaveDelay": 0, "votes": 1 }
    ],
    "settings": {
        "chainingAllowed": true,
        "heartbeatTimeoutSecs": 10,   // heartbeats time out after 10 seconds
        "getLastErrorModes": { },
        "getLastErrorDefaults": { "w": 1, "wtimeout": 0 }
    }
}
db.oplog.rs.find();  // every node in the replica set has local.oplog.rs
{ "ts" : Timestamp(, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(, 1), "h" : NumberLong("-9153005"), "v" : 2, "op" : "c", "ns" : "foobar.$cmd", "o" : { "create" : "persons" } }
{ "ts" : Timestamp(, 2), "h" : NumberLong("-8786835"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("5adc"), "num" : 0 } }
{ "ts" : Timestamp(, 3), "h" : NumberLong("6204652"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("5add"), "num" : 1 } }
{ "ts" : Timestamp(, 4), "h" : NumberLong("-9787062"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("5ade"), "num" : 2 } }
(insert entries continue in the same pattern up to "num" : 17)
Only the local database holds oplog.rs. Its default size is 50 MB on 32-bit systems and 5% of free disk space on 64-bit systems; pass --oplogSize at startup to set it explicitly. The local database also contains me, minvalid, startup_log, system.indexes, and system.replset.
Make a secondary readable:
db.getMongo().setSlaveOk();
getLastError configuration (the settings document shown by db.system.replset.find() above):
w: -1  the driver applies no write concern and ignores all network or socket errors
w: 0   the driver applies no write concern and surfaces only network or socket errors
w: 1   the driver applies a write concern against the primary only; the default for replica sets and single instances
w: >1  the write concern spans n replica-set members; the command returns to the client only after those members acknowledge
wtimeout: how long the write concern may wait before returning; left unset, an uncertain failure can leave the application's write blocked forever
Odd node counts: officially an odd number of members is recommended, with at most 12 members per replica set and at most 7 of them voting. The 12-member cap exists because there is no point replicating one data set that many times — too many copies only add network load and slow the cluster. The 7-voter cap exists because with too many voters the internal election mechanism can fail to choose a primary within a minute. Moderation in all things.
Related articles:
http://www.lanceyan.com/tech/mongodb/mongodb_repset1.html
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.lanceyan.com/tech/mongodb_repset2.html
http://blog.nosqlfan.com/html/4139.html  (the Bully algorithm)
Heartbeats: the cluster must keep communicating to know which nodes are alive and which are down. Every mongodb node pings the other replica-set members every two seconds; a node that does not answer within 10 seconds is marked unreachable. Each node maintains an internal state map recording every member's current role, oplog timestamp, and other key facts. A primary additionally checks whether it can still reach the majority of the cluster; if not, it demotes itself to a read-only secondary.
Sync: replica-set synchronization splits into initial sync and ongoing ("keep") replication. Initial sync copies all data from the source node, which can take a long time if the primary holds much data; after that, ongoing replication applies changes incrementally. Initial sync is triggered in two cases, not only the first time:
a secondary joining for the first time, obviously; and
a secondary falling behind by more than the oplog's size, which forces a full copy.
What is the oplog's size? The oplog records data operations; a secondary copies the oplog and replays those operations locally. But the oplog is itself a mongodb collection, stored in local.oplog.rs, and it is a capped collection — fixed in size, with new entries overwriting old ones once full. So take care to set an oplogSize suited to cross-IDC replication, or production will keep falling back to full copies. oplogSize is set with --oplogSize; on 64-bit Linux and Windows it defaults to 5% of free disk space.
Sync need not come from the primary. Suppose a cluster of three nodes: node 1 is the primary in IDC1, nodes 2 and 3 sit in IDC2. Initially nodes 2 and 3 sync from node 1; afterwards they follow the nearest-node principle and replicate within their own IDC, as long as one node still copies from node 1 in IDC1. A few more rules:
a secondary never replicates from delayed or hidden members;
two members can sync only if their buildIndexes settings match, whether true or false (buildIndexes controls whether the node's data serves queries; it defaults to true);
if a sync operation gets no response for 30 seconds, the node picks another member to sync from.
Adding and removing nodes: two background daemons handle chunk splitting and balancing between shards. When removing a shard, the balancer migrates all chunks from that shard to the others. After migrating all data and updating the metadata, you can safely remove the shard — meaning you must wait for the migration to finish, or data will be lost. This ties into shard-key choice: the auto-increment id in the article was only for the demo and is a poor key, since it creates a data hotspot; something like ObjectId is a better choice. Related article: http://www.lanceyan.com/tech/arch/mongodb_shard1.html
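A minimal sketch of draining a shard, assuming a shard named rsshard2 as in the cluster built later in these notes; removeShard is polled by simply rerunning it:

use admin
db.runCommand({removeShard: "rsshard2"})   // first call starts the balancer draining the shard's chunks
db.runCommand({removeShard: "rsshard2"})   // later calls report how many chunks still remain
// wait until the returned state is "completed" before treating the shard as gone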
MongoDB's transaction mechanics and data safety
Common patterns behind data loss
Two parameters matter most:
w: 0 | 1 | n | majority | tag
wtimeout: millis (milliseconds)
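In the shell, these two travel with each write (a sketch; the collection name is illustrative):

db.books.insert(
    {id: 1, name: "ttbook"},
    {writeConcern: {w: "majority", wtimeout: 5000}}   // wait for a majority of members, but at most 5 seconds
)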
Having no joins is what permits unrestrained horizontal scaling.
All workloads run on SSDs in RAID 0.
The oplog is replayed in parallel.
1. Create the directories and files
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config0
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config0.log
touch /data/db_config/config/cfgserver0.conf
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config1
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config1.log
touch /data/db_config/config/cfgserver1.conf
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config2
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config2.log
touch /data/db_config/config/cfgserver2.conf
2. Write the replica-set config files
vi /data/replset0/config/rs0.conf
journal=true
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true
vi /data/replset1/config/rs1.conf
journal=true
replSet=rsshard1
dbpath = /data/replset1/data/rs1
shardsvr = true
oplogSize = 100
pidfilepath =/usr/local/mongodb/mongodb1.pid
logpath = /data/replset1/log/rs1.log
logappend = true
profile = 1
slowms = 5
fork = true
vi /data/replset2/config/rs2.conf
journal=true
replSet=rsshard2
dbpath = /data/replset2/data/rs2
shardsvr = true
oplogSize = 100
pidfilepath =/usr/local/mongodb/mongodb2.pid
logpath = /data/replset2/log/rs2.log
logappend = true
profile = 1
slowms = 5
fork = true
# the same three config files (rs0.conf / rs1.conf / rs2.conf) are written identically on each of the three machines
3. Start the replica sets
# start mongod on all three machines
/usr/local/mongodb/bin/mongod --config
/data/replset0/config/rs0.conf
/usr/local/mongodb/bin/mongod --config
/data/replset1/config/rs1.conf
/usr/local/mongodb/bin/mongod --config
/data/replset2/config/rs2.conf
4. Configure the replica sets
mongo --port 4000
config = { _id:"rsshard0", members:[
{_id:0,host:"192.168.1.155:4000"},
{_id:1,host:"192.168.14.221:4000"},
{_id:2,host:"192.168.14.198:4000"}]
rs.initiate(config);
mongo --port 4001
config = { _id:"rsshard1", members:[
{_id:0,host:"192.168.1.155:4001"},
{_id:1,host:"192.168.14.221:4001"},
{_id:2,host:"192.168.14.198:4001"}]
rs.initiate(config);
mongo --port 4002
config = { _id:"rsshard2", members:[
{_id:0,host:"192.168.1.155:4002"},
{_id:1,host:"192.168.14.221:4002"},
{_id:2,host:"192.168.14.198:4002"}]
rs.initiate(config);
# one block per replica set: each favors a different member as primary
cfg = rs.conf()
cfg.members[0].priority = 2
cfg.members[1].priority = 1
cfg.members[2].priority = 1
rs.reconfig(cfg)
cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 2
cfg.members[2].priority = 1
rs.reconfig(cfg)
cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 1
cfg.members[2].priority = 2
rs.reconfig(cfg)
5. Configure the config servers
vi /data/db_config/config/cfgserver0.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config0
directoryperdb = true
configsvr = true
port = 5000
logpath =/data/db_config/log/config0.log
logappend = true
fork = true
vi /data/db_config/config/cfgserver1.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config1
directoryperdb = true
configsvr = true
port = 5000
logpath =/data/db_config/log/config1.log
logappend = true
fork = true
vi /data/db_config/config/cfgserver2.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config2
directoryperdb = true
configsvr = true
port = 5000
logpath =/data/db_config/log/config2.log
logappend = true
fork = true
/usr/local/mongodb/bin/mongod --config
/data/db_config/config/cfgserver0.conf
/usr/local/mongodb/bin/mongod --config
/data/db_config/config/cfgserver1.conf
/usr/local/mongodb/bin/mongod --config
/data/db_config/config/cfgserver2.conf
6. Configure the mongos routers (run on all three machines)
mkdir -p /data/mongos/log/
touch /data/mongos/log/mongos.log
touch /data/mongos/mongos.conf
vi /data/mongos/mongos.conf
#configdb = 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000
configdb = 192.168.1.155:5000
// in the end only a single config server could be used
port = 6000
chunkSize = 1
logpath =/data/mongos/log/mongos.log
logappend = true
fork = true
/usr/local/mongodb/bin/mongos --config /data/mongos/mongos.conf
7. Add the shards
mongo 192.168.1.155:6000
// connect to the first mongos
# add each shard; arbiter nodes cannot be added
sh.addShard("rsshard0/192.168.1.155:.14.221:.14.198:4000")
sh.addShard("rsshard1/192.168.1.155:.14.221:.14.198:4001")
sh.addShard("rsshard2/192.168.1.155:.14.221:.14.198:4002")
sh.status();
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("565eac6d8e75f6a7d3e6e65e")
"_id" : "rsshard0",
"host" : "rsshard0/192.168.1.155:.14.198:.14.221:4000" }
"_id" : "rsshard1",
"host" : "rsshard1/192.168.1.155:.14.198:.14.221:4001" }
"_id" : "rsshard2",
"host" : "rsshard2/192.168.1.155:.14.198:.14.221:4002" }
Currently enabled:
Currently running:
Failed balancer rounds in last 5 attempts:
Migration Results for the last 24 hours:
No recent migrations
databases:
"_id" : "admin",
"partitioned" : false,
"primary" : "config" }
# declare which database and collection to shard
mongos> use admin
mongos> db.runCommand({enablesharding:"testdb"})
mongos> db.runCommand( { shardcollection : "testdb.books", key : { id : 1 } } )
use testdb
mongos> for (var i = 1; i <= 20000; i++){db.books.save({id:i,name:"ttbook",sex:"male",age:27,value:"test"})}
# check the sharding statistics
db.books.stats()
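Beyond stats(), the shell helper getShardDistribution breaks the collection down per shard (a sketch, run through mongos):

use testdb
db.books.getShardDistribution()   // documents, data size, and chunk counts on each shard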
