In the previous post we went over some general multithreading concepts. In this post we dig into GCD, the multithreading technology we use most often in iOS development.
## The concept of GCD

GCD is short for Grand Central Dispatch. It is implemented in pure C and provides a large set of powerful functions. Its advantages:

- GCD is Apple's solution for parallel computation on multi-core hardware.
- GCD automatically makes use of the available CPU cores (dual-core, quad-core, and so on).
- GCD automatically manages the thread lifecycle (creating threads, scheduling tasks, destroying threads).
- The programmer only has to tell GCD what task to run; no thread-management code needs to be written.

To sum up, using GCD means adding a task to a queue and specifying the function that executes the task.
## Basic usage of GCD

In most cases GCD is used like this:

```objc
// create the task block
dispatch_block_t block = ^{
    NSLog(@"this is the task");
};
// create a serial queue
dispatch_queue_t queue = dispatch_queue_create("com.lg.cn", NULL);
// execute the task
dispatch_async(queue, block);
```

In short there are three steps:

- create the task block (dispatch_block_t);
- create the queue (dispatch_queue_t);
- add the task to the queue and execute it with dispatch_async or dispatch_sync.
There are two more concepts we are already familiar with: functions and queues. The functions are the synchronous dispatch_sync and the asynchronous dispatch_async. The queues are serial queues (DISPATCH_QUEUE_SERIAL) and concurrent queues (DISPATCH_QUEUE_CONCURRENT).
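To make the four combinations concrete, here is a small sketch of my own (the queue labels are made up for illustration):

```objc
dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// async + serial: returns immediately; tasks run one after another on a worker thread
dispatch_async(serial, ^{ NSLog(@"async on serial"); });

// async + concurrent: returns immediately; tasks may run in parallel on several threads
dispatch_async(concurrent, ^{ NSLog(@"async on concurrent"); });

// sync + serial: blocks the caller until the block has finished
// (never dispatch_sync onto the serial queue you are currently running on -- that deadlocks)
dispatch_sync(serial, ^{ NSLog(@"sync on serial"); });

// sync + concurrent: blocks the caller until the block has finished; no new thread is needed
dispatch_sync(concurrent, ^{ NSLog(@"sync on concurrent"); });
```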
## The main queue

The main queue (dispatch_queue_main_t) is a queue that starts together with the program. It is the queue bound to the main thread and exists for the whole lifetime of the app. The documentation comment on dispatch_get_main_queue tells us that the main queue is a serial queue, which is easy to understand: tasks on a serial queue run one at a time, in order, and that is exactly how tasks on the main thread behave.
```c
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI apps
 * (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
	return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}
```
Download the libdispatch source and look at dispatch_get_main_queue: it returns DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q), which is a macro:
```c
#define DISPATCH_GLOBAL_OBJECT(type, object) (static_cast<type>(&(object)))
```
The first parameter, type, is the type to cast to; the second parameter, object, is the actual object, here _dispatch_main_q. A global search for `_dispatch_main_q =` finds:
```c
struct dispatch_queue_static_s _dispatch_main_q = {
	DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
	.do_targetq = _dispatch_get_default_queue(true),
#endif
	.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
			DISPATCH_QUEUE_ROLE_BASE_ANON,
	.dq_label = "com.apple.main-thread",
	.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
	.dq_serialnum = 1,
};
```
So _dispatch_main_q is a global instance of struct dispatch_queue_static_s; in other words, behind the dispatch_queue_main_t handle the main queue is a dispatch_queue_static_s structure. Note its label "com.apple.main-thread", its serial number 1, and its width flag DQF_WIDTH(1).
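We can check the dq_label from app code: dispatch_queue_get_label returns the label stored in the structure above (a quick verification of my own, not from the original post):

```objc
// Prints "com.apple.main-thread", matching the dq_label in _dispatch_main_q.
NSLog(@"%s", dispatch_queue_get_label(dispatch_get_main_queue()));
```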
## How serial and concurrent queues differ in the source

We now know that a GCD queue is essentially a structure (for the main queue, a dispatch_queue_static_s). Which member of that structure marks a queue as serial or concurrent? Let's find the answer in the source. Our queues are created with dispatch_queue_create, whose second parameter is the queue type, so we start from its definition:
```c
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
```
Following the call, we look at _dispatch_lane_create_with_target:
```c
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{
	dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

	dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
	if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
		dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
	}
	if (qos == DISPATCH_QOS_MAINTENANCE) {
		dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
	}
#endif

	_dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
	if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
		if (tq->do_targetq) {
			DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
					"a non-global target queue");
		}
	}

	if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
		if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
			if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
				overcommit = _dispatch_queue_attr_overcommit_enabled;
			} else {
				overcommit = _dispatch_queue_attr_overcommit_disabled;
			}
		}
		if (qos == DISPATCH_QOS_UNSPECIFIED) {
			qos = _dispatch_priority_qos(tq->dq_priority);
		}
		tq = NULL;
	} else if (tq && !tq->do_targetq) {
		if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
			DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
					"and use this kind of target queue");
		}
	} else {
		if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
			overcommit = dqai.dqai_concurrent ?
					_dispatch_queue_attr_overcommit_disabled :
					_dispatch_queue_attr_overcommit_enabled;
		}
	}
	if (!tq) {
		tq = _dispatch_get_root_queue(
				qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
				overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
		if (unlikely(!tq)) {
			DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
		}
	}

	if (legacy) {
		if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
			legacy = false;
		}
	}

	const void *vtable;
	dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
	if (dqai.dqai_concurrent) {
		vtable = DISPATCH_VTABLE(queue_concurrent);
	} else {
		vtable = DISPATCH_VTABLE(queue_serial);
	}
	switch (dqai.dqai_autorelease_frequency) {
	case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
		dqf |= DQF_AUTORELEASE_NEVER;
		break;
	case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
		dqf |= DQF_AUTORELEASE_ALWAYS;
		break;
	}
	if (label) {
		const char *tmp = _dispatch_strdup_if_mutable(label);
		if (tmp != label) {
			dqf |= DQF_LABEL_NEEDS_FREE;
			label = tmp;
		}
	}

	dispatch_lane_t dq = _dispatch_object_alloc(vtable,
			sizeof(struct dispatch_lane_s));
	_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
			DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
			(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

	dq->dq_label = label;
	dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
			dqai.dqai_relpri);
	if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
		dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	}
	if (!dqai.dqai_inactive) {
		_dispatch_queue_priority_inherit_from_target(dq, tq);
		_dispatch_lane_inherit_wlh_from_target(dq, tq);
	}
	_dispatch_retain(tq);
	dq->do_targetq = tq;
	_dispatch_object_debug(dq, "%s", __func__);
	return _dispatch_trace_queue_create(dq)._dq;
}
```
This function is fairly long. As usual we start from the return value, _dispatch_trace_queue_create(dq)._dq, so the part that matters is how dq is created and how its members are assigned:
```c
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
		sizeof(struct dispatch_lane_s));
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
		DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
		(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
```
One of the arguments passed to _dispatch_queue_init depends on dqai.dqai_concurrent, the serial-versus-concurrent decision: the third parameter, width, is DISPATCH_QUEUE_WIDTH_MAX for a concurrent queue and 1 for a serial queue. Let's see how _dispatch_queue_init uses it:
```c
static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
		uint16_t width, uint64_t initial_state_bits)
{
	uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
	dispatch_queue_t dq = dqu._dq;

	dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
			DISPATCH_QUEUE_INACTIVE)) == 0);

	if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
		dq->do_ref_cnt += 2;
		if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
			dq->do_ref_cnt++;
		}
	}

	dq_state |= initial_state_bits;
	dq->do_next = DISPATCH_OBJECT_LISTLESS;
	dqf |= DQF_WIDTH(width);
	os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
	dq->dq_state = dq_state;
	dq->dq_serialnum = os_atomic_inc_orig(&_dispatch_queue_serial_numbers,
			relaxed);
	return dqu;
}
```
The third parameter, width, is stored with dqf |= DQF_WIDTH(width); and written into dq_atomic_flags. That is: a serial queue carries DQF_WIDTH(1), exactly what we saw in _dispatch_main_q, while a concurrent queue carries DQF_WIDTH(DISPATCH_QUEUE_WIDTH_MAX); this width is what distinguishes the two kinds of queue.
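The effect of the width is easy to observe. The following rough experiment is my own (the labels and timings are illustrative, not from the original post): on the serial queue four one-second tasks take about four seconds, while on the concurrent queue they overlap and finish in roughly one second.

```objc
void (^timeQueue)(dispatch_queue_t, const char *) = ^(dispatch_queue_t q, const char *name) {
    dispatch_group_t group = dispatch_group_create();
    NSDate *start = [NSDate date];
    for (int i = 0; i < 4; i++) {
        // four tasks that each take about one second
        dispatch_group_async(group, q, ^{ [NSThread sleepForTimeInterval:1]; });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER); // don't block the main thread of a UI app like this
    NSLog(@"%s took %.1fs", name, -[start timeIntervalSinceNow]);
};

dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
timeQueue(serial, "serial");         // ~4s: width 1, one task at a time
timeQueue(concurrent, "concurrent"); // ~1s: width DISPATCH_QUEUE_WIDTH_MAX, tasks overlap
```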
What does the inheritance chain of dispatch_queue_t look like? Cmd-clicking dispatch_queue_t in code jumps to DISPATCH_DECL(dispatch_queue);, which is its definition. Searching the libdispatch source for DISPATCH_DECL( and checking the surrounding #if conditions, the following line is the definition used in the C++ case (its public inheritance makes the relationship explicit):
```cpp
#define DISPATCH_DECL(name) \
		typedef struct name##_s : public dispatch_object_s {} *name##_t
```
So the chain so far is dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s. Let's look at the layout of dispatch_queue_s:
```c
struct dispatch_queue_s {
	DISPATCH_QUEUE_CLASS_HEADER(queue, void *__dq_opaque1);
} DISPATCH_ATOMIC64_ALIGN;
```
Next we look at what DISPATCH_QUEUE_CLASS_HEADER expands to (it is a macro, not a struct):
```c
#define _DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__) \
	DISPATCH_OBJECT_HEADER(x); \
	__pointer_sized_field__; \
	DISPATCH_UNION_LE(uint64_t volatile dq_state, \
			dispatch_lock dq_state_lock, \
			uint32_t dq_state_bits \
	)
```
Its first part is DISPATCH_OBJECT_HEADER, so we keep searching:
```c
#define _DISPATCH_OBJECT_HEADER(x) \
	struct _os_object_s _as_os_obj[0]; \
	OS_OBJECT_STRUCT_HEADER(dispatch_##x); \
	struct dispatch_##x##_s *volatile do_next; \
	struct dispatch_queue_s *do_targetq; \
	void *do_ctxt; \
	union { \
		dispatch_function_t DISPATCH_FUNCTION_POINTER do_finalizer; \
		void *do_introspection_ctxt; \
	}
```
At the bottom sits _os_object_s, so the complete chain is: dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s -> _os_object_s.
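Because a queue is ultimately a dispatch object, members we just saw in DISPATCH_OBJECT_HEADER surface in the public dispatch_object_t API: dispatch_set_context/dispatch_get_context plausibly correspond to do_ctxt, and dispatch_set_finalizer_f to do_finalizer. A small sketch of my own:

```objc
static void queue_finalizer(void *ctxt) {
    // called with the queue's context once its last reference goes away
    NSLog(@"queue finalized, context = %s", (const char *)ctxt);
}

dispatch_queue_t queue = dispatch_queue_create("com.example.ctx", DISPATCH_QUEUE_SERIAL);
dispatch_set_context(queue, (void *)"some context");   // stored on the queue object
dispatch_set_finalizer_f(queue, queue_finalizer);      // runs when the queue is destroyed
```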
## When the block gets invoked

```objc
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"the block was invoked");
});
```
In this section we explore when the block parameter of these functions gets invoked. Taking the synchronous function as the example, a global search for dispatch_sync finds:
```c
DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
```
work is the block we passed in, so we follow the code that touches work. First, the implementation of _dispatch_Block_invoke:
```c
#define _dispatch_Block_invoke(bb) \
		((dispatch_function_t)((struct Block_layout *)bb)->invoke)
```
_dispatch_Block_invoke does not call anything itself: it casts the block to a struct Block_layout and extracts its invoke function pointer (the compiled body of the block) as a dispatch_function_t. Note that in dispatch_sync above, the block is passed as the ctxt argument and its invoke pointer as func.
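The cast works because an Objective-C block object is laid out in memory with a function pointer named invoke. The following is a simplified sketch of that layout based on the Clang blocks ABI (my own illustration; the real definition lives in the block runtime, and descriptor details are omitted):

```objc
// Simplified sketch of the layout _dispatch_Block_invoke relies on.
struct Block_layout {
    void *isa;                        // block class (stack/heap/global block)
    int   flags;
    int   reserved;
    void (*invoke)(void *block, ...); // pointer to the compiled body of the block
    void *descriptor;                 // size/copy/dispose information
    // captured variables follow here
};
// Casting a block pointer to struct Block_layout * and reading ->invoke yields a
// plain C function pointer, which GCD stores as a dispatch_function_t and later
// calls as f(ctxt), with the block itself as ctxt.
```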
Next, the implementation of _dispatch_sync_f:
```c
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
```
which simply forwards to _dispatch_sync_f_inline:
```c
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
```
The ctxt and func parameters of _dispatch_sync_f_inline carry the block, and the function can hand them to several callees (_dispatch_barrier_sync_f, _dispatch_sync_f_slow, _dispatch_sync_recurse, or _dispatch_sync_invoke_and_complete). To see which one runs for our example, we set symbolic breakpoints in a demo project; the one that is actually hit is _dispatch_sync_f_slow, so we look at its implementation:
```c
DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}
```
The parameters we care about are still ctxt and func. As in the previous step, we set symbolic breakpoints on _dispatch_sync_invoke_and_complete_recurse and _dispatch_sync_function_invoke to see which path is taken; the one actually called is _dispatch_sync_function_invoke:
```c
static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
}
```
which leads into _dispatch_sync_function_invoke_inline:
```c
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_thread_frame_s dtf;
	_dispatch_thread_frame_push(&dtf, dq);
	_dispatch_client_callout(ctxt, func);
	_dispatch_perfmon_workitem_inc();
	_dispatch_thread_frame_pop(&dtf);
}
```
ctxt and func are finally used in _dispatch_client_callout, which has several implementations:
```objc
DISPATCH_NOINLINE
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	_dispatch_get_tsd_base();
	void *u = _dispatch_get_unwind_tsd();
	if (likely(!u)) return f(ctxt);
	_dispatch_set_unwind_tsd(NULL);
	f(ctxt);
	_dispatch_free_unwind_tsd();
	_dispatch_set_unwind_tsd(u);
}

#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	@try {
		return f(ctxt);
	}
	@catch (...) {
		objc_terminate();
	}
}
```
Although there are multiple implementations, the code that invokes the block is the same in all of them: f(ctxt).
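Since the block was passed as ctxt and its invoke pointer as func (see dispatch_sync above), f(ctxt) here amounts to invoking the block's body with the block itself as its argument. The correspondence is even more direct with the function-based API; a small sketch of my own:

```objc
static void my_work(void *ctxt) {
    NSLog(@"work, context = %s", (const char *)ctxt);
}

// GCD is handed a plain C function and a context; at the end of the call chain
// _dispatch_client_callout simply performs f(ctxt), i.e. my_work("hello").
dispatch_sync_f(dispatch_get_global_queue(0, 0), (void *)"hello", my_work);
```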
So the call chain for the block is: dispatch_sync -> _dispatch_sync_f -> _dispatch_sync_f_inline -> _dispatch_sync_f_slow -> _dispatch_sync_function_invoke -> _dispatch_client_callout -> f(ctxt).
The asynchronous function dispatch_async can be traced with the same approach and also ends at f(ctxt); readers who are interested can work through it themselves.
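One way to start (a suggestion of mine, not from the original post): set a symbolic breakpoint on _dispatch_client_callout, run a snippet like the one below, and read the backtrace when the breakpoint hits to recover the asynchronous call chain the same way we did for dispatch_sync.

```objc
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"async task");
});
```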