MongoDB Cluster Deployment in Practice

mac · 2024-05-15

I. Replica Set Deployment

Hosts: 134.32.213.129/130/131

Port: 27003

1. Host preparation

cd /home/lifh/mongodb/mongodb4
tar -zxf ../soft/mongodb-linux-x86_64-rhel62-4.0.12.tgz
mv mongodb-linux-x86_64-rhel62-4.0.12/* .
mkdir conf data keyfile log run
mkdir data/config data/data_node

Create the instance configuration file (vi conf/mongo_node.conf):

net:
  port: 27003
  bindIp: 0.0.0.0

storage:
  engine: wiredTiger
  dbPath: /home/lifh/mongodb/mongodb4/data/data_node
  journal:
    enabled: true

systemLog:
  destination: file
  logAppend: true
  path: /home/lifh/mongodb/mongodb4/log/mongod_node.log

operationProfiling:
  slowOpThresholdMs: 10000

replication:
  oplogSizeMB: 10240
  replSetName: replica19

processManagement:
  fork: true
  pidFilePath: /home/lifh/mongodb/mongodb4/run/mongodb.pid

security:
  authorization: "enabled"
  clusterAuthMode: keyFile
  keyFile: /home/lifh/mongodb/mongodb4/keyfile/mongo.key
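The config above points at keyfile/mongo.key, but its creation is not shown. A minimal sketch of the usual way to generate it: the file must contain 6-1024 base64 characters and must not be group- or world-readable, and the identical file has to be present on all three hosts (the packaging step below takes care of distributing it). The 756-byte length is just a common choice, not a requirement:

openssl rand -base64 756 > keyfile/mongo.key   # any 6-1024 base64 characters work
chmod 400 keyfile/mongo.key                    # mongod rejects group/world-readable keyfiles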

tar -zcf mongodb_4_green.tar.gz mongodb/mongodb4
scp mongodb_4_green.tar.gz lifh@134.32.213.130/131
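The scp line above is shorthand for copying to both peers. Spelled out as a loop (the destination directory /home/lifh is an assumption; adjust to the actual layout):

for h in 134.32.213.130 134.32.213.131; do
  scp mongodb_4_green.tar.gz lifh@$h:/home/lifh/           # destination path assumed
  ssh lifh@$h 'cd /home/lifh && tar -zxf mongodb_4_green.tar.gz'
done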

2. Start the replica set

Start the replica set instance on each of 134.32.213.129/130/131:

bin/mongod -f conf/mongo_node.conf
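Before moving on, it is worth confirming each instance actually came up; a quick check (not part of the original steps):

ps -ef | grep '[m]ongod'                      # the process is running
tail log/mongod_node.log                      # look for "waiting for connections on port 27003"
bin/mongo --host 127.0.0.1 --port 27003 --eval 'db.runCommand({ping: 1})'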

3. Configure the replica set and the admin account

bin/mongo --host 127.0.0.1 --port 27003

config = {_id: 'replica19', members: [
    {_id: 0, host: '134.32.213.129:27003'},
    {_id: 1, host: '134.32.213.130:27003'},
    {_id: 2, host: '134.32.213.131:27003'}
]}
rs.initiate(config)
{ "ok" : 1 }

After initiation the prompt changes from replica19:SECONDARY> to replica19:PRIMARY> once this node wins the election. Although authorization is enabled in the config, the first user can still be created on this connection thanks to MongoDB's localhost exception:

use admin
db.createUser({user: "sys_admin", pwd: "xxxxxx", roles: [{role: "root", db: "admin"}]})
Successfully added user

db.auth('sys_admin', 'xxxxxx')
rs.status()
quit()

4. Check cluster status and configuration

bin/mongo 127.0.0.1:27003/admin -u sys_admin -p xxxxxx
rs.status()
rs.printSlaveReplicationInfo()    // shows how far each secondary lags the primary
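Applications should connect with a replica-set URI rather than a single host, so failover stays transparent; a sketch (the database name mydb is a placeholder):

mongodb://sys_admin:xxxxxx@134.32.213.129:27003,134.32.213.130:27003,134.32.213.131:27003/mydb?replicaSet=replica19&authSource=admin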

II. Sharded Cluster Deployment

1. Host preparation

Hosts: 134.32.213.129/130/131

Ports:

mongos: 27006
config server: 27007
shard1: 27017
shard2: 27018
shard3: 27019

cd /home/lifh/mongodb/shard_mongo
tar -zxf ../soft/mongodb-linux-x86_64-rhel62-4.0.12.tgz
mv mongodb-linux-x86_64-rhel62-4.0.12/* .
mkdir conf data keyfile log run
mkdir data/data_config data/data_shard
mkdir data/data_shard/shard1 data/data_shard/shard2 data/data_shard/shard3

Create the shard instance configuration files conf/mongo_shard[1-3].conf. The shard2 and shard3 files are derived from shard1 by adjusting the "shard1" keyword and the port (see the sed sketch after the config listings below). The shard1 file:

net:
  port: 27017
  bindIp: 0.0.0.0

storage:
  engine: wiredTiger
  dbPath: /home/lifh/mongodb/shard_mongo/data/data_shard/shard1
  journal:
    enabled: true

systemLog:
  destination: file
  logAppend: true
  path: /home/lifh/mongodb/shard_mongo/log/mongod_shard1.log

operationProfiling:
  slowOpThresholdMs: 10000

sharding:
  clusterRole: shardsvr

replication:
  oplogSizeMB: 40960
  replSetName: shard1

processManagement:
  fork: true
  pidFilePath: /home/lifh/mongodb/shard_mongo/run/mongodb_shard1.pid

#security:
#  authorization: "enabled"
#  clusterAuthMode: keyFile
#  keyFile: /home/lifh/mongodb/shard_mongo/keyfile/mongoshard.key

Create the config server's configuration file conf/mongo_config.conf:

net:
  port: 27007
  bindIp: 0.0.0.0

storage:
  engine: wiredTiger
  dbPath: /home/lifh/mongodb/shard_mongo/data/data_config
  journal:
    enabled: true

systemLog:
  destination: file
  logAppend: true
  path: /home/lifh/mongodb/shard_mongo/log/mongod_config.log

operationProfiling:
  slowOpThresholdMs: 10000

sharding:
  clusterRole: configsvr

replication:
  oplogSizeMB: 40960
  replSetName: configserver

processManagement:
  fork: true
  pidFilePath: /home/lifh/mongodb/shard_mongo/run/mongodb_config.pid

#security:
#  authorization: "enabled"
#  clusterAuthMode: keyFile
#  keyFile: /home/lifh/mongodb/shard_mongo/keyfile/mongoshard.key
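As noted in step 1, the shard2 and shard3 files differ from shard1 only in the name and the port; rather than editing them by hand, they can be generated (a sketch, assuming the port mapping above):

sed 's/shard1/shard2/g; s/27017/27018/g' conf/mongo_shard1.conf > conf/mongo_shard2.conf
sed 's/shard1/shard3/g; s/27017/27019/g' conf/mongo_shard1.conf > conf/mongo_shard3.conf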

Then package the shard_mongo directory and copy it to 134.32.213.130 and 134.32.213.131.

2. Start the sharded cluster

bin/mongod -f conf/mongo_shard1.conf
bin/mongod -f conf/mongo_shard2.conf
bin/mongod -f conf/mongo_shard3.conf
bin/mongod -f conf/mongo_config.conf

Run these four commands on every host to start the config server and shard nodes.

3. Initialize the replica sets

Note: connect via 127.0.0.1, otherwise the error "command replSetInitiate requires authentication" is raised.

bin/mongo --host 127.0.0.1 --port 27017
config = {_id: 'shard1', members: [{_id: 0, host: '134.32.213.129:27017'}, {_id: 1, host: '134.32.213.130:27017'}, {_id: 2, host: '134.32.213.131:27017'}]}
rs.initiate(config)

bin/mongo --host 127.0.0.1 --port 27018
config = {_id: 'shard2', members: [{_id: 0, host: '134.32.213.129:27018'}, {_id: 1, host: '134.32.213.130:27018'}, {_id: 2, host: '134.32.213.131:27018'}]}
rs.initiate(config)

bin/mongo --host 127.0.0.1 --port 27019
config = {_id: 'shard3', members: [{_id: 0, host: '134.32.213.129:27019'}, {_id: 1, host: '134.32.213.130:27019'}, {_id: 2, host: '134.32.213.131:27019'}]}
rs.initiate(config)

bin/mongo --host 127.0.0.1 --port 27007
config = {_id: 'configserver', members: [{_id: 0, host: '134.32.213.129:27007'}, {_id: 1, host: '134.32.213.130:27007'}, {_id: 2, host: '134.32.213.131:27007'}]}
rs.initiate(config)
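Before wiring up the router it helps to confirm every replica set has elected a primary; a quick sketch (an added check, using 4.0-era mongo shell syntax):

for port in 27017 27018 27019 27007; do
  bin/mongo --host 127.0.0.1 --port $port --quiet \
    --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'
done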

4. Configure the router node

conf/mongos_route.conf:

net:
  port: 27006
  bindIp: 0.0.0.0

systemLog:
  destination: file
  logAppend: true
  path: /home/lifh/mongodb/shard_mongo/log/mongos_route.log

sharding:
  configDB: configserver/134.32.213.129:27007,134.32.213.130:27007,134.32.213.131:27007

processManagement:
  fork: true
  pidFilePath: /home/lifh/mongodb/shard_mongo/run/mongos_route.pid
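If the commented-out security blocks in the shard and config server files are ever enabled, the mongos must present the same keyfile. A sketch of the extra section it would need (note that mongos accepts keyFile and clusterAuthMode but not authorization, which is a mongod-only option):

security:
  keyFile: /home/lifh/mongodb/shard_mongo/keyfile/mongoshard.key
  clusterAuthMode: keyFile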

5. Start the router instance, register the shards, and create the admin user

bin/mongos -f conf/mongos_route.conf

(fork: true in the config already daemonizes mongos, so no trailing & is needed.)

bin/mongo --host 127.0.0.1 --port 27006
sh.addShard("shard1/134.32.213.129:27017,134.32.213.130:27017,134.32.213.131:27017")
sh.addShard("shard2/134.32.213.129:27018,134.32.213.130:27018,134.32.213.131:27018")
sh.addShard("shard3/134.32.213.129:27019,134.32.213.130:27019,134.32.213.131:27019")

use admin
db.createUser({user: "sys_admin", pwd: "xxxxxxx", roles: [{role: "root", db: "admin"}]})

db.auth('sys_admin','xxxxxxx')

6. Check sharded cluster status and configuration

bin/mongo --host 127.0.0.1 --port 27006
bin/mongo 127.0.0.1:27006/admin -u sys_admin -p xxxxxxx
sh.status()
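At this point the cluster holds no sharded data; nothing is distributed until a database and collection are explicitly enabled. A minimal example (testdb.orders and the hashed _id shard key are placeholders, not from the original write-up):

sh.enableSharding("testdb")
sh.shardCollection("testdb.orders", {_id: "hashed"})
sh.status()   // the new collection now appears under its database entry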
