```python
import requests
import time
from threading import Thread
from multiprocessing import Process
```
(2) Define a CPU-bound computation function
```python
def count(x, y):
    # make the program perform 500,000 iterations
    c = 0
    while c < 500000:
        c += 1
        x += x
        y += y
```
(3) Define IO-bound file read/write functions
```python
def write(name=0):
    # name: prevents concurrent writes to the same file;
    # the total bytes written to disk stay the same at any concurrency level
    f = open("test-{}.txt".format(name), "w")
    for x in range(5000000):
        f.write("testwrite\n")
    f.close()

def read(name=0):
    f = open("test-{}.txt".format(name), "r")
    lines = f.readlines()
    f.close()
```
(4) Define a network request function
```python
_head = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36'}
url = "http://www.tieba.com"

def http_request():
    try:
        webPage = requests.get(url, headers=_head)
        html = webPage.text
        return {"context": html}
    except Exception as e:
        return {"error": e}
```
Single-thread test
(5) Measure the time to run the CPU-bound, IO-bound, and network-request-bound operations sequentially
```python
# CPU-bound operation
t = time.time()
for x in range(10):
    count(1, 1)
print("Line cpu", time.time() - t)

# IO-bound operation
t = time.time()
for x in range(10):
    write()
    read()
print("Line IO", time.time() - t)

# network-request-bound operation
t = time.time()
for x in range(10):
    http_request()
print("Line Http Request", time.time() - t)
```
Output
CPU-bound
95.6059999466
91.57099986076355
92.52800011634827
99.96799993515015
IO-bound
24.25
21.76699995994568
21.769999980926514
22.060999870300293
Network-request-bound
4.519999980926514
8.563999891281128
4.371000051498413
14.671000003814697
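Each of the runs above repeats the same `t = time.time()` / `print(..., time.time() - t)` pattern. A small helper can factor that out; the sketch below is my addition (the `timer` name is hypothetical, not from the original benchmark) and uses `time.perf_counter`, which offers higher resolution than `time.time` for measuring elapsed intervals:

```python
import time
from contextlib import contextmanager

# Hypothetical timing helper, not part of the original benchmark code.
@contextmanager
def timer(label):
    start = time.perf_counter()  # monotonic, high-resolution clock
    try:
        yield
    finally:
        print(label, time.perf_counter() - start)

# usage: wraps any block the way the manual time.time() pattern does
with timer("sleep demo"):
    time.sleep(0.1)
```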
Multithreading test
(6) Measure the time for ten threads to run the CPU-bound operation concurrently
```python
# build the threads with a list comprehension
threads = [Thread(target=count, args=(1, 1)) for _ in range(10)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(time.time() - start)
```
Output
99.9240000248
101.26400017738342
102.32200002670288
(7) Measure the time for ten threads to run the IO-bound operation concurrently
```python
def io(name):
    write(name)
    read(name)

start = time.time()
threads = [Thread(target=io, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(time.time() - start)
```
Output
84.7796590328
108.204546928
(8) Measure the time for ten threads to run the network-bound operation concurrently
```python
threads = [Thread(target=http_request) for _ in range(10)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
# measure against start (the original printed time.time() - t,
# but t is a Thread object here, not a timestamp)
print("Thread Http Request", time.time() - start)
```
Output
0.7419998645782471
0.3839998245239258
0.3900001049041748
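The manual start/join bookkeeping above can also be delegated to `concurrent.futures.ThreadPoolExecutor`. The sketch below is my addition, not part of the original benchmark; it uses a stand-in `http_request` that sleeps instead of hitting the network, so it is self-contained — swap in the `requests`-based version above to reproduce the measurement:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the requests-based http_request defined earlier,
# so this sketch runs without network access.
def http_request():
    time.sleep(0.1)  # simulate network latency
    return {"context": "ok"}

start = time.time()
# the executor starts, schedules, and joins the worker threads for us
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(http_request) for _ in range(10)]
    results = [f.result() for f in futures]
print("Pool Http Request", time.time() - start)
```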
Multiprocess test
(9) Measure the time for ten processes to run the CPU-bound operation concurrently
```python
p_list = [Process(target=count, args=(1, 1)) for _ in range(10)]
start = time.time()
for p in p_list:
    p.start()
for p in p_list:
    p.join()
print("Multiprocess cpu", time.time() - start)
```
Output
54.342000007629395
53.437999963760376
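Instead of managing `Process` objects by hand, `multiprocessing.Pool` can distribute the same calls across a fixed set of worker processes. This sketch is my addition; note the loop count is scaled down from 500,000 to 5,000 so it runs quickly — the pool-based structure, not the workload, is the point:

```python
import time
from multiprocessing import Pool

# Scaled-down count (5,000 iterations instead of 500,000) for illustration.
def count(x, y):
    c = 0
    while c < 5000:
        c += 1
        x += x
        y += y

def run_pool(n_tasks=10, workers=4):
    # starmap unpacks each (x, y) tuple into a count(x, y) call,
    # spreading the calls across the worker processes
    with Pool(processes=workers) as pool:
        return pool.starmap(count, [(1, 1)] * n_tasks)

if __name__ == "__main__":
    start = time.time()
    run_pool()
    print("Pool cpu", time.time() - start)
```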
(10) Measure the time for ten processes to run the IO-bound operation concurrently
```python
p_list = [Process(target=io, args=(i,)) for i in range(10)]
start = time.time()
for p in p_list:
    p.start()
for p in p_list:
    p.join()
print("Multiprocess IO", time.time() - start)
```
```python
def write(name=0):
    # name: prevents concurrent writes to the same file;
    # the total bytes written to disk stay the same at any concurrency level
    f = open("test-{}.txt".format(name), "w")
    for x in range(5000000):
        f.write("testwrite\n")
    f.close()
```
The write function runs 5,000,000 loop iterations, which makes it plainly CPU-bound as well, so let's rework it:
```python
# build one 5000-line chunk up front, then write it 1000 times, so the
# total bytes on disk match the original 5,000,000 single-line writes
data = "".join(["testwrite\n" for i in range(5000)])

def write(name=0):
    f = open("test-{}.txt".format(name), "w")
    for x in range(1000):
        f.write(data)
    f.close()
```
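As a quick sanity check (my addition, not in the original post), the reworked function still writes the same number of bytes per file: 1000 writes of a 5000-line chunk equal the original 5,000,000 single-line writes.

```python
line = "testwrite\n"
chunk = "".join([line for _ in range(5000)])  # the precomputed data block

# 1000 chunk writes vs 5,000,000 single-line writes: identical byte totals
assert len(chunk) * 1000 == len(line) * 5000000
print("bytes per file:", len(chunk) * 1000)  # → bytes per file: 50000000
```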