While analyzing lowmemorykiller in the previous post, I ran into a special call: epoll, one of the I/O multiplexing mechanisms. I had never dug into it very deeply, so this post takes a closer look. The concepts are covered all over the web, so the focus here is on learning by writing code — no amount of theory beats using something once. The first part of this post is adapted from material online; after covering the basics, we will use the Android NDK to write a socket program that compares the performance of select and epoll.
Overview

I/O multiplexing is a mechanism for monitoring multiple descriptors: as soon as one of them becomes ready (usually readable or writable), the program is notified so it can perform the corresponding read or write. select, poll, and epoll are all I/O multiplexing mechanisms. Note that all three are still synchronous I/O: once a read/write event is ready, the program itself must perform the (blocking) read or write. With asynchronous I/O, by contrast, the program does not do the read or write itself; the async I/O implementation takes care of copying the data from kernel space to user space. Let us look at select and epoll in turn:
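This readiness model — wait until a descriptor is ready, then do the read yourself — is shared by all three calls. As a minimal sketch (a pipe stands in for a socket, and `wait_readable` / `demo_pipe_roundtrip` are hypothetical helper names, not part of any API), poll() shows the pattern:

```cpp
#include <poll.h>
#include <unistd.h>

// Hypothetical helper: returns 1 if fd becomes readable within timeout_ms, else 0.
int wait_readable(int fd, int timeout_ms) {
    struct pollfd pfd = { fd, POLLIN, 0 };
    int n = poll(&pfd, 1, timeout_ms);
    return (n == 1 && (pfd.revents & POLLIN)) ? 1 : 0;
}

// Returns 1 when the whole readiness round trip behaves as described above.
int demo_pipe_roundtrip() {
    int fds[2];
    if (pipe(fds) != 0) return 0;
    int ok = 1;
    ok &= (wait_readable(fds[0], 0) == 0);        // nothing written yet: not ready
    ok &= (write(fds[1], "x", 1) == 1);
    ok &= (wait_readable(fds[0], 100) == 1);      // data pending: readable
    char c = 0;
    ok &= (read(fds[0], &c, 1) == 1 && c == 'x'); // the read itself is still our job
    close(fds[0]);
    close(fds[1]);
    return ok;
}
```

The key point is in the last step: the multiplexing call only reports readiness; the synchronous read still happens in our code, which is why select/poll/epoll all count as synchronous I/O.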
select

How it works

At its core, select calls the poll function of the TCP file layer in a loop: if none of the watched descriptors has the desired data, it voluntarily yields the CPU once (to avoid monopolizing it) and checks again, until some connection has a message of interest. In other words, select essentially keeps invoking poll until the data it is waiting for arrives.
Advantages

- select is more portable; some Unix systems do not support poll().
- select offers finer timeout granularity: microseconds, versus milliseconds for poll.
Disadvantages

- Every call to select copies the fd set from user space into the kernel, which becomes expensive when there are many fds;
- Every call also makes the kernel scan all of the fds passed in, which likewise becomes expensive with many fds;
- The number of file descriptors select supports is small: 1024 by default, and on the Linux kernel shipped with Android the default is 128.
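The costs in this list are visible in a minimal select() sketch (a pipe stands in for a socket; `demo_select_pipe` is an illustrative fragment, separate from the demo app below): the fd set is rebuilt and copied into the kernel on every call, the timeout is a microsecond-granularity struct timeval, and no watched fd may reach FD_SETSIZE:

```cpp
#include <sys/select.h>
#include <unistd.h>

// Returns the number of ready descriptors select() reports for a freshly
// written pipe; 1 on the expected path, -1 on setup failure.
int demo_select_pipe() {
    int fds[2];
    if (pipe(fds) != 0) return -1;
    if (write(fds[1], "x", 1) != 1) return -1;

    fd_set readfds;
    FD_ZERO(&readfds);            // the set must be rebuilt before every call
    FD_SET(fds[0], &readfds);     // fds[0] must be below FD_SETSIZE (1024 in glibc)
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 500000;          // timeout with microsecond granularity

    // The whole set is copied into the kernel and scanned on every call.
    int ready = select(fds[0] + 1, &readfds, NULL, NULL, &tv);
    int hit = (ready == 1 && FD_ISSET(fds[0], &readfds)) ? 1 : 0;
    close(fds[0]);
    close(fds[1]);
    return hit;
}
```

With one fd the per-call copy and scan are negligible; the listed disadvantages bite only when this loop runs over hundreds or thousands of descriptors.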
epoll

How it works

epoll likewise reports only the descriptors that are ready. Moreover, when epoll_wait() returns, it gives you not the descriptors themselves but a count of ready ones: you simply read that many entries out of the event array you handed to epoll. This spares the per-call copy of the entire descriptor set that select pays on every system call.
Advantages

- A single process can watch a huge number of socket descriptors. Unlike select, epoll imposes no fixed limit on the number of fds; the ceiling is the maximum number of open files, which is generally far above 2048. On a machine with 1 GB of RAM it is around 100,000; the exact value can be read with cat /proc/sys/fs/file-max, and it scales largely with system memory.
- I/O efficiency does not fall off linearly as the fd count grows: epoll only does work for "active" sockets, because the kernel implementation hangs a callback on each fd, and only fds with activity actually invoke their callback; idle sockets cost nothing. In this sense epoll implements a "pseudo" AIO, with the push coming from the OS kernel. In benchmarks where essentially every socket is active — a fast LAN, say — epoll is no more efficient than select/poll, and heavy use of epoll_ctl can even make it slightly slower. But once idle connections are used to simulate a WAN environment, epoll's efficiency is far beyond select/poll.
- Cheaper kernel-to-user event delivery: select, poll, and epoll all need the kernel to hand fd events back to user space, so avoiding unnecessary copying matters. epoll is often described as sharing a memory region with the kernel via mmap; on current kernels, epoll_wait() in fact copies only the ready events into the user-supplied array, which is still far less data than select's full fd set on every call.
Disadvantages

- epoll requires a 2.6 or newer kernel.
- Poorer portability: select works on Linux, Windows, and Apple platforms, whereas epoll is Linux-only.
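The workflow behind these trade-offs can be sketched minimally (again on a pipe; `demo_epoll_pipe` is an illustrative fragment, separate from the demo app below): the descriptor is registered once with epoll_ctl(), and epoll_wait() hands back only the ready events:

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <string.h>

// Returns the number of events epoll_wait() reports for a freshly written
// pipe; 1 on the expected path, -1 on setup failure.
int demo_epoll_pipe() {
    int fds[2];
    if (pipe(fds) != 0) return -1;

    int epfd = epoll_create1(0);      // Linux 2.6.27+; epoll_create() on older 2.6
    if (epfd < 0) return -1;

    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN;              // level-triggered by default; OR in EPOLLET for edge-triggered
    ev.data.fd = fds[0];
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev) != 0) return -1;  // register once, not per call

    if (write(fds[1], "x", 1) != 1) return -1;
    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, 1000);  // only ready fds come back

    close(epfd);
    close(fds[0]);
    close(fds[1]);
    return n;                         // event count; events[0].data.fd identifies the ready fd
}
```

Unlike select, there is no per-call copy of the watched set and no scan over idle descriptors: registration happens once, and the kernel reports only what is ready.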
Comparing select and epoll performance

Below we compare select and epoll using a socket program written with the Android NDK. Before building it, you need the logging code from 《(原创)在JNI(Native)层调用Android的Log系统》 ("Calling Android's log system from the JNI (native) layer"): download its four files and adapt them for use here. The code follows, with the key parts annotated in the source:
MainActivity.java
```java
package com.blog4jimmy.hp.selectvsepoll;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity {
    private Button testSelectBtn;
    private Button testEpollBtn;
    private Button startSelectServer;
    private Button startEpollServer;
    private boolean selectIsStart;
    private boolean epollIsStart;
    private TextView selectTV;
    private TextView epollTV;

    // Used to load the 'native-lib' library on application startup.
    static {
        System.loadLibrary("native-lib");
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        testSelectBtn = (Button) findViewById(R.id.test_select);
        testEpollBtn = (Button) findViewById(R.id.test_epoll);
        startSelectServer = (Button) findViewById(R.id.start_select);
        startEpollServer = (Button) findViewById(R.id.start_epoll);
        selectTV = (TextView) findViewById(R.id.select_result);
        epollTV = (TextView) findViewById(R.id.epoll_result);

        testSelectBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Toast.makeText(MainActivity.this,
                        "Select test executing, please wait a moment!",
                        Toast.LENGTH_LONG).show();
                int result = testSelectClient();
                String resultStr = "Select test spend " + Integer.toString(result) + " seconds!";
                selectTV.setText(resultStr);
            }
        });

        testEpollBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Toast.makeText(MainActivity.this,
                        "Epoll test executing, please wait a moment!",
                        Toast.LENGTH_LONG).show();
                int result = testEpollClient();
                String resultStr = "Epoll test spend " + Integer.toString(result) + " seconds!";
                epollTV.setText(resultStr);
            }
        });

        startSelectServer.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (!selectIsStart)
                    selectIsStart = startSelectServer();
                if (selectIsStart) {
                    startSelectServer.setBackgroundColor(getResources().getColor(R.color.colorGreen));
                } else {
                    startSelectServer.setBackgroundColor(getResources().getColor(R.color.colorRed));
                }
            }
        });

        startEpollServer.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (!epollIsStart)
                    epollIsStart = startEpollServer();
                if (epollIsStart) {
                    startEpollServer.setBackgroundColor(getResources().getColor(R.color.colorGreen));
                } else {
                    startEpollServer.setBackgroundColor(getResources().getColor(R.color.colorRed));
                }
            }
        });

        nativeInit();
    }

    /**
     * Native methods implemented by the 'native-lib' library,
     * which is packaged with this application.
     */
    public native boolean nativeInit();
    public native boolean startSelectServer();
    public native boolean startEpollServer();
    public native int testSelectClient();
    public native int testEpollClient();
}
```
native-lib.cpp
```cpp
#include <jni.h>
#include <string>
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/time.h>
#include <sys/types.h>
#include <error.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <linux/in.h>
#include <netinet/in.h>
#include <endian.h>
// Grab the NativeLog sources from the companion blog post first
#include "NativeLog.h"
#include <arpa/inet.h>
#include <memory>
#include <cstring>

const int SELECT_SERVER_PORT = 8880;
const int EPOLL_SERVER_PORT = 8889;
const int BUFFER_SIZE = 1024;

std::unique_ptr<NativeLog> Log = nullptr;
const char *LOG_TAG = "SelectVSEpoll";
JavaVM *java_vm = nullptr;
bool select_init_success = false;
bool epoll_init_success = false;
pthread_mutex_t select_mutex;
pthread_cond_t select_cond;
pthread_mutex_t epoll_mutex;
pthread_cond_t epoll_cond;

// Helper that sets the condition under the mutex and signals the waiter
static void send_cond_signal(pthread_mutex_t *mutex, pthread_cond_t *cond,
                             bool *condition, bool value) {
    pthread_mutex_lock(mutex);
    *condition = value;
    pthread_cond_signal(cond);
    pthread_mutex_unlock(mutex);
}

// Select server thread
static void *select_server_start(void *arg) {
    JNIEnv *env;
    int ret;
    int select_server_fd = 0;
    int client_fd = 0;
    int client_addr_len = 0;
    struct sockaddr_in server_addr, client_addr;
    fd_set select_fd_set;
    fd_set dump_fd_set;
    int select_fd[100] = { 0 };
    int max_select_fd = 0;
    int select_count = 0;
    char recvBuf[BUFFER_SIZE] = { 0 };
    char sendBuf[BUFFER_SIZE] = { 0 };
    char msg[BUFFER_SIZE] = { 0 };
    bool still_running = true;

    memset(&server_addr, 0, sizeof(struct sockaddr_in));
    memset(&client_addr, 0, sizeof(struct sockaddr_in));

    // Attach this thread to the JVM to obtain a JNIEnv
    ret = java_vm->AttachCurrentThread(&env, NULL);
    if (ret != JNI_OK) {
        send_cond_signal(&select_mutex, &select_cond, &select_init_success, false);
        return NULL;
    }

    // Create the listening socket
    select_server_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (select_server_fd < 0) {
        Log->e(env, LOG_TAG, "Unable to create socket!");
        send_cond_signal(&select_mutex, &select_cond, &select_init_success, false);
        return NULL;
    }

    server_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(SELECT_SERVER_PORT);
    ret = bind(select_server_fd, (struct sockaddr *)&server_addr, sizeof(struct sockaddr_in));
    if (ret != 0) {
        Log->e(env, LOG_TAG, "Unable to bind server!");
        send_cond_signal(&select_mutex, &select_cond, &select_init_success, false);
        close(select_server_fd);
        return NULL;
    }
    listen(select_server_fd, 100);

    // The select server is up; wake the waiting parent thread
    send_cond_signal(&select_mutex, &select_cond, &select_init_success, true);

    FD_ZERO(&select_fd_set);   // bug fix: the set must be cleared before the first FD_SET
    FD_SET(select_server_fd, &select_fd_set);
    while (still_running) {
        // select_fd_set holds the full fd set; select() modifies its argument,
        // so copy it into dump_fd_set on every iteration
        FD_ZERO(&dump_fd_set);
        dump_fd_set = select_fd_set;

        // Find the highest fd that select() has to watch
        max_select_fd = select_server_fd;
        for (int i = 0; i < select_count; ++i) {
            if (select_fd[i] > max_select_fd)
                max_select_fd = select_fd[i];
        }

        ret = select(max_select_fd + 1, &dump_fd_set, NULL, NULL, NULL);
        if (ret < 0) {
            Log->e(env, LOG_TAG, "select syscall failed!");
            continue;
        }

        memset(recvBuf, 0, BUFFER_SIZE);
        // A new client is connecting
        if (FD_ISSET(select_server_fd, &dump_fd_set)) {
            client_addr_len = sizeof(struct sockaddr_in);
            client_fd = accept(select_server_fd, (struct sockaddr *)&client_addr,
                               (socklen_t *)&client_addr_len);
            if (client_fd < 0) {
                Log->e(env, LOG_TAG, "Accept syscall failed!");
                continue;
            } else {
                // Remember the client's fd
                select_fd[select_count++] = client_fd;
                FD_SET(client_fd, &select_fd_set);
                snprintf(msg, BUFFER_SIZE, "client:%d accept! select_count:%d",
                         client_fd, select_count);
                Log->d(env, LOG_TAG, msg);
            }
        }

        // Handle data sent by connected clients
        for (int i = 0; i < select_count; ++i) {
            int read_len = 0;
            if (FD_ISSET(select_fd[i], &dump_fd_set)) {
                read_len = recv(select_fd[i], recvBuf, BUFFER_SIZE, 0);
                // The peer closed or an error occurred; close the fd either way
                if (read_len <= 0) {
                    if (read_len)
                        snprintf(sendBuf, BUFFER_SIZE,
                                 "clientfd:%d recv from client failed! Now select_count:%d",
                                 select_fd[i], select_count);
                    else
                        snprintf(sendBuf, BUFFER_SIZE,
                                 "clientfd:%d remote has been closed! Now select_count:%d",
                                 select_fd[i], select_count);
                    Log->d(env, LOG_TAG, sendBuf);
                    // fd that is about to be closed
                    int close_fd = select_fd[i];
                    // Move the last fd in the array into the freed slot
                    select_fd[i] = select_fd[select_count - 1];
                    // One less connected socket
                    select_count--;
                    FD_CLR(close_fd, &select_fd_set);
                    close(close_fd);
                    // Slot i now holds a different fd, so revisit index i next iteration
                    i--;
                    continue;
                }
                memset(sendBuf, 0, BUFFER_SIZE);
                snprintf(sendBuf, BUFFER_SIZE, "clientfd:%d-msg:%s", select_fd[i], recvBuf);
                Log->d(env, LOG_TAG, sendBuf);
                send(select_fd[i], sendBuf, BUFFER_SIZE, 0);
            }
        }
    }
    return NULL;
}

// JNI entry point: spawn the select server thread
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_blog4jimmy_hp_selectvsepoll_MainActivity_startSelectServer(JNIEnv *env, jobject instance) {
    pthread_t select_pid = 0;
    int ret = pthread_create(&select_pid, NULL, select_server_start, NULL);
    if (ret != 0) {
        Log->e(env, LOG_TAG, "pthread_create return error!");
        return false;
    }
    Log->d(env, LOG_TAG, "select server thread start successfully!");
    pthread_detach(select_pid);

    // Wait until the select server thread signals its status
    pthread_mutex_lock(&select_mutex);
    pthread_cond_wait(&select_cond, &select_mutex);
    pthread_mutex_unlock(&select_mutex);

    // Check whether the select server came up
    if (!select_init_success) {
        Log->e(env, LOG_TAG, "Start Select server failed!");
        return false;
    }
    Log->d(env, LOG_TAG, "Start Select server successfully!");
    return true;
}

// Native environment initialization
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_blog4jimmy_hp_selectvsepoll_MainActivity_nativeInit(JNIEnv *env, jobject instance) {
    jint ret = 0;
    Log = std::unique_ptr<NativeLog>(new NativeLog(env));
    ret = env->GetJavaVM(&java_vm);
    if (ret != JNI_OK) {
        Log->e(env, LOG_TAG, "Can't get Java VM!");
        return false;
    }
    pthread_mutex_init(&select_mutex, NULL);
    pthread_cond_init(&select_cond, NULL);
    pthread_mutex_init(&epoll_mutex, NULL);
    pthread_cond_init(&epoll_cond, NULL);
    return true;
}

// Epoll server thread
static void *epoll_server_start(void *arg) {
    JNIEnv *env;
    struct sockaddr_in server_addr, client_addr;
    int epoll_server_fd = 0;
    int ret = 0;
    const int EPOLL_SIZE = 100;
    const int EPOLL_EVENT_SIZE = 100;
    struct epoll_event ev, epoll_event[EPOLL_EVENT_SIZE];
    int epfd = 0;
    char recvBuf[BUFFER_SIZE] = { 0 };
    char msg[BUFFER_SIZE] = { 0 };
    bool still_running = true;

    // Attach this thread to the JVM to obtain a JNIEnv
    ret = java_vm->AttachCurrentThread(&env, NULL);
    if (ret != JNI_OK) {
        send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, false);
        return NULL;
    }

    memset(&server_addr, 0, sizeof(struct sockaddr_in));
    memset(&client_addr, 0, sizeof(struct sockaddr_in));

    epoll_server_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (epoll_server_fd < 0) {
        Log->e(env, LOG_TAG, "Unable to create socket!");
        send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, false);
        return NULL;
    }
    server_addr.sin_port = htons(EPOLL_SERVER_PORT);
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    ret = bind(epoll_server_fd, (struct sockaddr *)&server_addr, sizeof(struct sockaddr_in));
    if (ret < 0) {
        Log->e(env, LOG_TAG, "Unable to bind server!");
        send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, false);
        close(epoll_server_fd);
        return NULL;
    }
    listen(epoll_server_fd, 100);

    // Create the epoll instance
    epfd = epoll_create(EPOLL_SIZE);
    if (epfd < 0) {
        Log->e(env, LOG_TAG, "Unable to create epoll fd!");
        send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, false);
        return NULL;
    }

    // Register the listening socket with the epoll instance
    ev.data.fd = epoll_server_fd;
    ev.events = EPOLLIN | EPOLLET;
    ret = epoll_ctl(epfd, EPOLL_CTL_ADD, epoll_server_fd, &ev);
    if (ret < 0) {
        Log->e(env, LOG_TAG, "Unable to add epoll fd!");
        send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, false);
        close(epfd);
        return NULL;
    }

    // The epoll server is up; wake the waiting parent thread
    send_cond_signal(&epoll_mutex, &epoll_cond, &epoll_init_success, true);

    while (still_running) {
        int event_size = epoll_wait(epfd, epoll_event, EPOLL_EVENT_SIZE, -1);
        if (event_size < 0) {
            Log->e(env, LOG_TAG, "epoll wait failed!");
            continue;
        }
        if (event_size == 0) {
            Log->e(env, LOG_TAG, "epoll wait time out!");
            continue;
        }
        snprintf(msg, BUFFER_SIZE, "%d requests need to process!", event_size);
        Log->d(env, LOG_TAG, msg);

        for (int i = 0; i < event_size; ++i) {
            // A new client is connecting
            if (epoll_event[i].data.fd == epoll_server_fd) {
                Log->d(env, LOG_TAG, "New client coming!");
                int client_addr_len = sizeof(struct sockaddr_in);
                int client_fd = accept(epoll_server_fd, (struct sockaddr *)&client_addr,
                                       (socklen_t *)&client_addr_len);
                if (client_fd < 0) {
                    Log->e(env, LOG_TAG, "accept client failed!");
                    continue;
                }
                // Register the newly connected client fd with the epoll instance
                struct epoll_event client_epoll_event;
                client_epoll_event.events = EPOLLIN | EPOLLET;
                client_epoll_event.data.fd = client_fd;
                ret = epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &client_epoll_event);
                if (ret < 0) {
                    Log->e(env, LOG_TAG, "epoll add client fd failed!");
                    close(client_fd);
                    continue;
                }
                Log->d(env, LOG_TAG, "Client accept finish!");
                continue;
            }
            // Handle data sent by a client
            if (epoll_event[i].events & EPOLLIN) {
                int client_fd = epoll_event[i].data.fd;
                int recv_len = recv(client_fd, recvBuf, BUFFER_SIZE, 0);
                // The peer disconnected or recv failed; close the fd either way
                if (recv_len <= 0) {
                    if (recv_len == 0)
                        snprintf(msg, BUFFER_SIZE, "clientfd:%d disconnect!", client_fd);
                    else
                        snprintf(msg, BUFFER_SIZE, "clientfd:%d recv from remote failed!", client_fd);
                    Log->d(env, LOG_TAG, msg);
                    // Remove the fd from the epoll instance
                    struct epoll_event client_epoll_event;
                    client_epoll_event.events = EPOLLIN | EPOLLET;
                    client_epoll_event.data.fd = client_fd;
                    epoll_ctl(epfd, EPOLL_CTL_DEL, client_fd, &client_epoll_event);
                    close(client_fd);
                    continue;
                }
                memset(msg, 0, BUFFER_SIZE);
                snprintf(msg, BUFFER_SIZE, "client:%d - msg:%s", client_fd, recvBuf);
                Log->d(env, LOG_TAG, msg);
                send(client_fd, msg, BUFFER_SIZE, 0);
            }
        }
    }
    return NULL;
}

// JNI entry point: spawn the epoll server thread
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_blog4jimmy_hp_selectvsepoll_MainActivity_startEpollServer(JNIEnv *env, jobject instance) {
    pthread_t epoll_pid = 0;
    int ret = pthread_create(&epoll_pid, NULL, epoll_server_start, NULL);
    if (ret != 0) {
        Log->e(env, LOG_TAG, "pthread_create return error!");
        return false;
    }
    Log->d(env, LOG_TAG, "epoll server thread start successfully!");
    pthread_detach(epoll_pid);

    // Wait until the epoll server thread signals its status
    pthread_mutex_lock(&epoll_mutex);
    pthread_cond_wait(&epoll_cond, &epoll_mutex);
    pthread_mutex_unlock(&epoll_mutex);

    if (!epoll_init_success) {
        Log->e(env, LOG_TAG, "Create epoll server failed!");
        return false;
    }
    Log->d(env, LOG_TAG, "Create epoll server successfully!");
    return true;
}

// Select client test
extern "C"
JNIEXPORT jint JNICALL
Java_com_blog4jimmy_hp_selectvsepoll_MainActivity_testSelectClient(JNIEnv *env, jobject instance) {
    int ret = 0;
    struct sockaddr_in server_addr, client_addr;
    int client_fd[100];
    char recvBuf[BUFFER_SIZE];
    char sendBuf[BUFFER_SIZE] = "Hello from select client!";
    char msg[BUFFER_SIZE];

    memset(&server_addr, 0, sizeof(struct sockaddr_in));
    memset(&client_addr, 0, sizeof(struct sockaddr_in));

    // Create 100 client sockets
    for (int i = 0; i < 100; ++i) {
        client_fd[i] = socket(AF_INET, SOCK_STREAM, 0);
        if (client_fd[i] < 0) {
            Log->e(env, LOG_TAG, "create test select client socket failed!");
            return -1;
        }
    }

    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(SELECT_SERVER_PORT);  // bug fix: htons, not ntohs
    server_addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    Log->d(env, LOG_TAG, "start to connect to select server!");
    time_t start = time(NULL);
    // Connect all 100 sockets to the server
    for (int i = 0; i < 100; ++i) {
        ret = connect(client_fd[i], (struct sockaddr *)&server_addr, sizeof(sockaddr_in));
        if (ret < 0) {
            Log->e(env, LOG_TAG, "select client connect to server failed!");
            return -1;
        }
    }

    // 50,000 send/recv round trips on a single thread, so the UI will
    // freeze while this test function runs from Java
    for (int i = 0; i < 50000; ++i) {
        srand(time(NULL) + i);
        int client_no = rand() % 100;
        int client = client_fd[client_no];
        int sendlen = send(client, sendBuf, BUFFER_SIZE, 0);
        if (sendlen < 0) {
            Log->e(env, LOG_TAG, "Send msg to server failed!");
        }
        int recvlen = recv(client, recvBuf, BUFFER_SIZE, 0);
        if (recvlen < 0) {
            Log->e(env, LOG_TAG, "Recv msg from server failed!");
        } else {
            snprintf(msg, BUFFER_SIZE, "msg from server: %s", recvBuf);
            Log->d(env, LOG_TAG, msg);
        }
    }
    time_t end = time(NULL);
    snprintf(msg, BUFFER_SIZE, "Test Client exit! Spend time:%d", (int)(end - start));
    Log->d(env, LOG_TAG, msg);

    // Shut down the 100 client sockets
    for (int i = 0; i < 100; ++i) {
        snprintf(msg, BUFFER_SIZE, "client:%d close! i = %d", client_fd[i], i);
        Log->d(env, LOG_TAG, msg);
        shutdown(client_fd[i], SHUT_RDWR);
    }
    // Return the elapsed time as the test result
    return (end - start);
}

// Epoll client test
extern "C"
JNIEXPORT jint JNICALL
Java_com_blog4jimmy_hp_selectvsepoll_MainActivity_testEpollClient(JNIEnv *env, jobject instance) {
    int ret = 0;
    struct sockaddr_in server_addr, client_addr;
    int client_fd[100];
    char recvBuf[BUFFER_SIZE];
    char sendBuf[BUFFER_SIZE] = "Hello from epoll client!";
    char msg[BUFFER_SIZE];

    memset(&server_addr, 0, sizeof(struct sockaddr_in));
    memset(&client_addr, 0, sizeof(struct sockaddr_in));

    // Create 100 client sockets
    for (int i = 0; i < 100; ++i) {
        client_fd[i] = socket(AF_INET, SOCK_STREAM, 0);
        if (client_fd[i] < 0) {
            Log->e(env, LOG_TAG, "create test epoll client socket failed!");
            return -1;
        }
    }

    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(EPOLL_SERVER_PORT);  // bug fix: htons, not ntohs
    server_addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    Log->d(env, LOG_TAG, "start to connect to epoll server!");
    time_t start = time(NULL);
    for (int i = 0; i < 100; ++i) {
        ret = connect(client_fd[i], (struct sockaddr *)&server_addr, sizeof(sockaddr_in));
        if (ret < 0) {
            Log->e(env, LOG_TAG, "epoll client connect to server failed!");
            return -1;
        }
    }

    // 50,000 send/recv round trips on a single thread, so the UI will
    // freeze while this test runs
    for (int i = 0; i < 50000; ++i) {
        srand(time(NULL) + i);
        int client_no = rand() % 100;
        int client = client_fd[client_no];
        int sendlen = send(client, sendBuf, BUFFER_SIZE, 0);
        if (sendlen < 0) {
            Log->e(env, LOG_TAG, "Send msg to server failed!");
        }
        int recvlen = recv(client, recvBuf, BUFFER_SIZE, 0);
        if (recvlen < 0) {
            Log->e(env, LOG_TAG, "Recv msg from server failed!");
        } else {
            snprintf(msg, BUFFER_SIZE, "msg from server: %s", recvBuf);
            Log->d(env, LOG_TAG, msg);
        }
    }
    time_t end = time(NULL);
    snprintf(msg, BUFFER_SIZE, "Test Client exit! Spend time:%d", (int)(end - start));
    Log->d(env, LOG_TAG, msg);

    for (int i = 0; i < 100; ++i) {
        shutdown(client_fd[i], SHUT_RDWR);
    }
    // Return the elapsed time as the test result
    return (end - start);
}
```
AndroidManifest.xml
```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.blog4jimmy.hp.selectvsepoll">

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name="com.blog4jimmy.hp.selectvsepoll.MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
```
activity_main.xml
```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context=".MainActivity"
    android:layout_gravity="center">

    <Space
        android:layout_width="match_parent"
        android:layout_height="5dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start Select Server"
        android:layout_gravity="center"
        android:id="@+id/start_select"
        android:background="@color/colorRed" />

    <Space
        android:layout_width="match_parent"
        android:layout_height="5dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start Epoll Server"
        android:layout_gravity="center"
        android:id="@+id/start_epoll"
        android:background="@color/colorRed" />

    <Space
        android:layout_width="match_parent"
        android:layout_height="10dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Test Select"
        android:id="@+id/test_select"
        android:layout_gravity="center" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Select test spend 0 seconds!"
        android:layout_gravity="center"
        android:id="@+id/select_result"
        android:textSize="20sp" />

    <Space
        android:layout_width="match_parent"
        android:layout_height="10dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Test Epoll"
        android:id="@+id/test_epoll"
        android:layout_gravity="center" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Epoll test spend 0 seconds!"
        android:layout_gravity="center"
        android:id="@+id/epoll_result"
        android:textSize="20sp" />

</LinearLayout>
```
Results

Below are screenshots taken before and after running the tests. They show a real difference between the two: over 50,000 send calls, epoll is 8 seconds faster than select. Since the example sends and receives on a single thread, this only compares raw system-call performance; multithreaded send/receive is not covered. With multiple threads sending and receiving, epoll's lead over select would likely be even larger.
Finally, the project has been uploaded to GitHub; feel free to clone it: https://github.com/xiaojimmychen/SelectVSEpoll.git