Because MongoDB enforces relatively few constraints, a database can easily accumulate a lot of duplicate data, especially data collected by crawlers. Once the collection reaches a certain size, guaranteeing uniqueness by running an update (upsert) on every insert starts to feel slow; it may well be faster to insert everything first and then remove the duplicates in one pass with an aggregation pipeline. I have not benchmarked the two approaches, though, so interested readers can run their own comparison.
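For reference, the per-insert upsert approach mentioned above would look roughly like this in Python. This is only a minimal sketch: it assumes the `username`/`area` pair identifies a record, and it reuses the `test.users` collection from the example below.

```python
from pymongo import MongoClient

col = MongoClient('mongodb://localhost:27017')['test']['users']

def insert_unique(doc):
    # Upsert: insert the document only if no record with the same
    # username/area pair exists yet, so duplicates never get in.
    col.update_one(
        {'username': doc['username'], 'area': doc['area']},
        {'$setOnInsert': doc},
        upsert=True,
    )

insert_unique({'username': '王五', 'area': '上海'})
```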
The database contains duplicate data, as shown below:
/* 1 */ { "_id" : ObjectId("5dbaf3939642fb9adcad453c"), "username" : "王五", "area" : "上海" } /* 2 */ { "_id" : ObjectId("5dbaf3a19642fb9adcad4549"), "username" : "赵四", "area" : "北京" } /* 3 */ { "_id" : ObjectId("5dbaf3b89642fb9adcad4556"), "username" : "马六", "area" : "北京" } /* 4 */ { "_id" : ObjectId("5dbaf3b89642fb9adcad4556"), "username" : "马六", "area" : "北京" }MongoDB Bash 命令:
MongoDB shell command:

```javascript
db.getCollection('users').aggregate([
    {
        $group: {
            _id: {username: '$username', area: '$area'},
            count: {$sum: 1},
            dups: {$addToSet: '$_id'}
        }
    },
    {
        $match: {count: {$gt: 1}}
    }
], {allowDiskUse: true}).forEach(function(doc){
    doc.dups.shift();  // drop the first _id so one copy of each document survives
    db['users'].remove({_id: {$in: doc.dups}});
})
```

The `$group` stage groups documents by the fields that define a duplicate (`username` and `area`) and collects each group's `_id` values with `$addToSet`; `$match` then keeps only the groups containing more than one document. For each such group, `shift()` removes the first `_id` from the array so one copy survives, and `remove` deletes the rest (in newer shells, the deprecated `remove` can be replaced by `deleteMany`). `allowDiskUse: true` lets the grouping stage spill to disk on large collections.
The equivalent Python 3 code (reference: http://forum.foxera.com/mongodb/topic/967/mongodb%E5%A6%82%E4%BD%95%E5%B0%86%E9%87%8D%E5%A4%8D%E7%9A%84%E6%95%B0%E6%8D%AE%E5%88%A0%E9%99%A4):

```python
from pymongo import MongoClient, DeleteOne

MONGODB_URL = 'mongodb://localhost:27017/users'
col = MongoClient(MONGODB_URL)['test']['users']

pipeline = [
    {
        '$group': {
            '_id': {'username': '$username', 'area': '$area'},
            'count': {'$sum': 1},
            'dups': {'$addToSet': '$_id'}
        }
    },
    {
        '$match': {
            'count': {'$gt': 1}
        }
    }
]

# For every duplicate group, skip the first _id and collect the rest for deletion.
map_id = map(lambda doc: doc['dups'][1:], col.aggregate(pipeline=pipeline, allowDiskUse=True))
list_id = [item for sublist in map_id for item in sublist]

# Delete all redundant documents in a single bulk operation.
print(col.bulk_write(list(map(lambda _id: DeleteOne({'_id': _id}), list_id))).bulk_api_result)
```
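As a quick sanity check (my addition, not part of the referenced post), re-running the same pipeline after the bulk delete should return no groups at all:

```python
# An empty result means every (username, area) pair now appears exactly once.
assert list(col.aggregate(pipeline=pipeline, allowDiskUse=True)) == []
```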