
Choosing a MongoDB driver for Tornado: pymongo, motor, asyncmongo, or something else?

Background

I ran Apache's ab against two of our company's virtual machines and found that pymongo was the fastest, asyncmongo came second, and the motor library came last. Awkward.

Machine configuration

Test tool

ab (ApacheBench)

Test case

I'm only posting the asyncmongo test case here; the other two have a similar code structure but are tightly coupled with other business logic, so I'm not including them. Rough description of the test case: the client sends a JSON POST request, and Tornado fetches data from MongoDB by player_id. These are read-only requests, so locking is not an issue.
Some sensitive information has been removed.

#!/usr/bin/env python
# encoding: utf-8

import logging

import asyncmongo
import tornado.escape
import tornado.web
from tornado import web
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer

class RankHandler(tornado.web.RequestHandler):
    def __init__(self, application, request, **kwargs):
        super(RankHandler, self).__init__(application, request, **kwargs)
        self.set_header('Content-Type', 'application/json')

    @property
    def db(self):
        return self.application.db

    @tornado.web.asynchronous
    def post(self):
        r = {}
        ## decode msg body
        try:
            d = tornado.escape.json_decode(self.request.body)
        except ValueError as e:
            logging.error('decode track data error. e=%s' % e)
            r['status_code'] = 500
            r['status_txt'] = 'decode json error'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        event = d.get('event')
        if not event:
            logging.error('track args missing arg event.')
            r['status_code'] = 500
            r['status_txt'] = 'missing_arg_event'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        event_data = d.get('data')
        if event_data and not isinstance(event_data, dict):
            logging.error('track args bad arg data.')
            r['status_code'] = 500
            r['status_txt'] = 'bad_arg_data'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        if event == "u_add":
            pass  # handler omitted (business logic removed from this excerpt)
        elif event == "u_group":
            pass  # handler omitted
        elif event == "u_update":
            pass  # handler omitted
        elif event == "u_get_point":
            # read-only lookup by player_id; asyncmongo calls _on_response when the query finishes
            self.db.ranking_list.find_one({"_id": event_data["player_id"]}, callback=self._on_response)

    def _on_response(self, response, error):
        r = {}
        if error:
            raise tornado.web.HTTPError(500)
        result = {"data": {"_id": response['_id'], "rank_point": response["rank_point"]}}
        r.update(result)
        if not r.get('status_code', None):
            r['status_code'] = 200
            r['status_txt'] = 'OK'

        self.write(tornado.escape.json_encode(r))
        self.finish()
        return



class Application(web.Application):
    def __init__(self):
        """
        """
        handlers = [
            (r"/api/xxx", RankHandler),
        ]

        settings = dict(
            debug=True,
            autoescape=None,
        )

        super(Application, self).__init__(handlers, **settings)
        self.db = asyncmongo.Client(
            pool_id='mydb',
            host='0.0.0.0', port=27017, dbname='xxx',
            maxcached=10, maxconnections=20000,
        )



def main():
    http_server = HTTPServer(Application(), xheaders=True)

    http_server.bind(8880, '127.0.0.1')
    http_server.start()

    IOLoop.instance().start()

if __name__ == "__main__":
    main()
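
For comparison, here is a minimal sketch of how the same u_get_point lookup might look with motor's coroutine API. This is my own assumption of an equivalent handler, not the code that was actually benchmarked; it assumes Tornado 3.1+ (which finishes the request when the returned coroutine completes) and a motor version whose MotorClient connects lazily.

import motor
import tornado.escape
import tornado.gen
import tornado.web

class MotorRankHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def post(self):
        # same JSON contract as above: {"event": "u_get_point", "data": {"player_id": ...}}
        d = tornado.escape.json_decode(self.request.body)
        doc = yield self.settings['db'].ranking_list.find_one(
            {"_id": d["data"]["player_id"]})
        self.write(tornado.escape.json_encode({
            "status_code": 200,
            "status_txt": "OK",
            "data": {"_id": doc["_id"], "rank_point": doc["rank_point"]},
        }))

# Created once at startup and passed to the Application as a setting, e.g.:
# db = motor.MotorClient('127.0.0.1', 27017)['xxx']
# web.Application(handlers, db=db, **settings)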

Test results

➜  test git:(master) ✗ ab -n10000 -c3000 -p data-get-user-rank_point.xml -T'application/json' 'http://192.168.0.201:8880/api/xxx'
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.201 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        TornadoServer/3.1.1
Server Hostname:        192.168.0.201
Server Port:            8880

Document Path:          /api/xxx
Document Length:        80 bytes

Concurrency Level:      3000
Time taken for tests:   23.551 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2170000 bytes
Total body sent:        1990000
HTML transferred:       800000 bytes
Requests per second:    424.61 [#/sec] (mean)
Time per request:       7065.317 [ms] (mean)
Time per request:       2.355 [ms] (mean, across all concurrent requests)
Transfer rate:          89.98 [Kbytes/sec] received
                        82.52 kb/s sent
                        172.50 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1 1806 2222.9   1061   10825
Processing:   265 1130 2042.6    539   20975
Waiting:      255 1040 2018.9    515   20972
Total:        282 2936 2824.0   2930   20976

Percentage of the requests served within a certain time (ms)
  50%   2930
  66%   3402
  75%   3526
  80%   3592
  90%   6670
  95%   6823
  98%   9961
  99%  15001
 100%  20976 (longest request)
(q2_rank)➜  test git:(master) ✗ ab -n10000 -c3000 -p data-get-user-rank_point.xml -T'application/json' 'http://192.168.0.201:8880/api/xxx'
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.201 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        TornadoServer/3.1.1
Server Hostname:        192.168.0.201
Server Port:            8880

Document Path:          /api/xxx
Document Length:        80 bytes

Concurrency Level:      3000
Time taken for tests:   24.629 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2170000 bytes
Total body sent:        1990000
HTML transferred:       800000 bytes
Requests per second:    406.02 [#/sec] (mean)
Time per request:       7388.749 [ms] (mean)
Time per request:       2.463 [ms] (mean, across all concurrent requests)
Transfer rate:          86.04 [Kbytes/sec] received
                        78.90 kb/s sent
                        164.95 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  412 794.5     17    6205
Processing:  1024 6286 1793.2   7088   10475
Waiting:      836 6256 1843.9   7083   10468
Total:       1032 6698 1894.1   7199   14014

Percentage of the requests served within a certain time (ms)
  50%   7199
  66%   7700
  75%   7825
  80%   7875
  90%   8244
  95%   9161
  98%  10366
  99%  10763
 100%  14014 (longest request)

Analysis of results

In theory the asynchronous MongoDB drivers should be able to handle more concurrent connections, but in the actual tests the difference in concurrency was small. Reasons I can think of:
- The test machines are too weak
- The data volume in the test is too small; with larger response documents or slower queries, motor's numbers should, in theory, look better
- Looking at the mongod log, the asynchronous drivers open a new connection to MongoDB for every request; could the frequent connection setup and teardown be the performance cost? pymongo, by contrast, uses a single connection from start to finish (see the pool-tuning sketch below)
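
If connection churn really is the cost, one thing worth trying is letting asyncmongo cache more idle connections so they get reused instead of being rebuilt. A sketch under the assumption that the pool honours mincached/maxcached as its pool.py suggests; the numbers are arbitrary:

import asyncmongo

db = asyncmongo.Client(
    pool_id='mydb',
    host='0.0.0.0', port=27017, dbname='xxx',
    mincached=10,        # open some connections up front
    maxcached=200,       # keep up to 200 idle connections around for reuse
    maxconnections=500,  # hard cap; asyncmongo errors out above this
)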

Finally


http://www.cnblogs.com/restran/p/4937673.html
Here is another benchmark, though in that one motor performs better. I think this test could simplify the logic and compare only the database operations; a sketch of that idea follows.
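
A rough sketch of that idea: time N bare find_one calls with pymongo (blocking) and with motor (driven by IOLoop.run_sync), with no HTTP layer in between. The host, database name and PLAYER_ID are placeholders, and note this only measures per-operation latency, not concurrency.

import time

import motor
import pymongo
from tornado import gen
from tornado.ioloop import IOLoop

N = 10000
PLAYER_ID = "some_player_id"  # placeholder

# blocking driver
sync_db = pymongo.MongoClient('127.0.0.1', 27017)['xxx']
start = time.time()
for _ in range(N):
    sync_db.ranking_list.find_one({"_id": PLAYER_ID})
print('pymongo: %.2fs' % (time.time() - start))

# async driver, run to completion on the IOLoop
async_db = motor.MotorClient('127.0.0.1', 27017)['xxx']

@gen.coroutine
def motor_bench():
    for _ in range(N):
        yield async_db.ranking_list.find_one({"_id": PLAYER_ID})

start = time.time()
IOLoop.current().run_sync(motor_bench)
print('motor:   %.2fs' % (time.time() - start))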


My feeling is that the overhead is in connection setup.
pymongo also has a connection pool; your test probably only ever opened a single connection. Take a look at the max_pool_size parameter (a sketch follows below).
asyncmongo creates a connection pool at startup; see asyncmongo / asyncmongo / pool.py (I haven't read it very closely).
Note that asyncmongo raises an error once the maximum number of connections is exceeded, while motor, as I recall, blocks instead; I've fallen into that pit before.
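
A small sketch of the max_pool_size suggestion (the parameter name is from pymongo 2.x; in pymongo 3.x it became maxPoolSize, and the host and _id below are placeholders):

import pymongo

# MongoClient keeps a pool of sockets; under concurrent use it will open up
# to max_pool_size of them and reuse them across requests.
client = pymongo.MongoClient('127.0.0.1', 27017, max_pool_size=20)
doc = client['xxx'].ranking_list.find_one({"_id": "some_player_id"})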

If your MongoDB holds enough data, each request reads different documents, and each document is a bit larger, the performance difference should show up.
Just my own thoughts; it would be worth doing more tests.
