Because using a binder service at the C/C++ layer differs from using one at the Java layer, this article looks at the two layers separately.
The C/C++ Layer
The previous articles introduced the classes and components that Binder relies on, so when a class that has already been covered comes up here, it will mostly be skipped. The best way to explain the relationship between Android services and Binder is to walk through a concrete service, so this article uses MediaServer. Parts of it have been analyzed by others before, but working through the code again should deepen the understanding.
1. MediaServer's entry function
MediaServer is a C/C++ program, so its entry point is naturally the main function. Only the code that needs explaining is shown here; the rest can be found in the source.
```cpp
// main_mediaserver.cpp
int main(int argc, char **argv)
{
    ...
    // Obtain the ProcessState instance
    sp<ProcessState> proc(ProcessState::self());
    // Since we want to register services with ServiceManager,
    // call defaultServiceManager to obtain an IServiceManager object
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    // This service is discussed below
    MediaPlayerService::instantiate();
    ...
    // Start the thread pool loop and wait for Binder data to process
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
```
1.1 defaultServiceManager
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            // interface_cast and getContextObject are explained below
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
```
As you can see, defaultServiceManager caches the IServiceManager object it creates. If gDefaultServiceManager has not been created yet, it calls ProcessState's getContextObject method to create a BpBinder object, and then wraps that BpBinder into an IServiceManager via interface_cast. That is the overview; the code for each of these steps follows.
1.2 getContextObject
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

// handle = 0
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);
    // Here e != NULL is true
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                // PING ServiceManager to make sure it is still alive
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }
            // Create BpBinder(0)
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N = mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
```
getContextObject simply calls getStrongProxyForHandle, which in turn calls lookupHandleLocked. lookupHandleLocked looks up the resource entry for the given handle; if no entry exists yet, it creates a new handle_entry and inserts it. getStrongProxyForHandle then returns result, the newly created BpBinder(0), back to defaultServiceManager. As mentioned before, BpBinder is what a client uses to communicate with the server-side BBinder; since we want to talk to the server-side ServiceManager, we naturally need ServiceManager's BpBinder object.
Back in defaultServiceManager, the statement:
```cpp
gDefaultServiceManager = interface_cast<IServiceManager>(
        ProcessState::self()->getContextObject(NULL));
```
can be simplified to:
```cpp
gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));
```
So what exactly is interface_cast?
1.3 interface_cast
```cpp
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
```
interface_cast is a function template. Substituting IServiceManager for INTERFACE yields the following code:
```cpp
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
    return IServiceManager::asInterface(obj);
}
```
So this calls IServiceManager's asInterface method, with obj = new BpBinder(0).
1.4 asInterface
Now for asInterface, which was actually touched on in an earlier post. This method is special: it is declared by the DECLARE_META_INTERFACE(INTERFACE) macro and implemented by the IMPLEMENT_META_INTERFACE(INTERFACE, NAME) macro. Its definition looks like this:
```cpp
android::sp<I##INTERFACE> I##INTERFACE::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<I##INTERFACE> intr;
    if (obj != NULL) {
        intr = static_cast<I##INTERFACE*>(
            obj->queryLocalInterface(
                    I##INTERFACE::descriptor).get());
        if (intr == NULL) {
            intr = new Bp##INTERFACE(obj);
        }
    }
    return intr;
}
```
Substituting IServiceManager, the simplified code is:
```cpp
android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
```
So in the end this returns intr = new BpServiceManager(BpBinder(0)).
1.5 BpServiceManager
BpServiceManager here derives from the BpInterface class template, and BpInterface in turn derives from both IServiceManager and BpRefBase.
```cpp
class BpServiceManager : public BpInterface<IServiceManager>

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
```
Since BpBinder(0) is the key instance for communicating with ServiceManager, it is worth seeing where this BpBinder gets stored.
```cpp
// BpServiceManager constructor; it calls the BpInterface constructor,
// passing the BpBinder(0) instance along
BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl)
{
}

// Constructor of the BpInterface class template, which calls the
// BpRefBase constructor, again passing the BpBinder(0) instance along
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

// BpRefBase constructor; here you can see that the BpBinder finally
// ends up stored in the mRemote member variable
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
```
So the BpBinder(0) instance ends up stored in BpRefBase's mRemote member variable, which is used later in the interactions with the BBinder.
2. Registering MediaPlayerService
In MediaServer, MediaPlayerService is registered via MediaPlayerService::instantiate. Let's look at the instantiate function.
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
```
As before, the code first obtains an IServiceManager object through defaultServiceManager, then calls that object's addService function to register the MediaPlayerService object.
2.1 addService
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
addService writes the data into a Parcel, then calls remote() to get a BpBinder object and invokes that BpBinder's transact function to interact with Binder. Based on the previous section, remote() here should return the BpBinder(0) object; if in doubt, trace through the code yourself.
Reaching BpBinder is not the end. As covered in earlier posts, BpBinder's transact function does not itself talk to the binder driver; that work is handed over to the IPCThreadState object. Finally, IPCThreadState's talkWithDriver function calls ioctl with the BINDER_WRITE_READ command to interact with the Binder driver. Below we jump straight to the driver side; for the intermediate call chain, see the author's other two posts.
3. The Binder driver
As mentioned above, IPCThreadState's talkWithDriver reaches the binder_ioctl function via ioctl, with the BINDER_WRITE_READ command. The earlier post on the binder driver in Linux showed that BINDER_WRITE_READ is handled by the binder_ioctl_write_read function.
```c
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    ...
    /* Copy the user-space bwr struct into kernel space */
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    ...
    /* Push the data to be written toward the target process */
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    /* Read the data queued on this thread's own queue */
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        if (!list_empty(&proc->todo)) {
            ...
            /* Wake up the worker threads */
            wake_up_interruptible(&proc->wait);
        }
        if (ret < 0) {
            /* Copy the kernel-space data back to user space */
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    /* Copy the kernel-space data back to user space */
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
```
3.1 binder_thread_write
This function is long, but we only need to look at the BC_TRANSACTION branch.
```c
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        /* Fetch the command passed in from user space; here it is BC_TRANSACTION */
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        ...
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            /* Copy the binder_transaction_data into kernel space */
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            /* binder_transaction processes the data */
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
        *consumed = ptr - buffer;
    }
    return 0;
}
```
3.2 binder_transaction
```c
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    if (reply) {
        ...
    } else {
        /* handle = 0 */
        if (tr->target.handle) {
            ...
        } else {
            /* So target_node is the ServiceManager's binder node */
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        ...
        if (target_thread) {
            e->to_thread = target_thread->pid;
            target_list = &target_thread->todo;
            target_wait = &target_thread->wait;
        } else {
            /* Locate the servicemanager process's todo queue */
            target_list = &target_proc->todo;
            target_wait = &target_proc->wait;
        }
        e->to_proc = target_proc->pid;

        /* TODO: reuse incoming transaction for reply */
        t = kzalloc(sizeof(*t), GFP_KERNEL);
        binder_stats_created(BINDER_STAT_TRANSACTION);
        tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
        binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
        t->debug_id = ++binder_last_id;
        e->debug_id = t->debug_id;
        if (!reply && !(tr->flags & TF_ONE_WAY))
            t->from = thread;
        else
            t->from = NULL;
        t->sender_euid = task_euid(proc->tsk);
        t->to_proc = target_proc;   /* The target process is servicemanager */
        t->to_thread = target_thread;
        t->code = tr->code;         /* code = ADD_SERVICE_TRANSACTION */
        t->flags = tr->flags;       /* flags = 0 */
        t->priority = task_nice(current);
#ifdef RT_PRIO_INHERIT
        t->rt_prio = current->rt_priority;
        t->policy = current->policy;
        t->saved_rt_prio = MAX_RT_PRIO;
#endif
        trace_binder_transaction(reply, t, target_node);
        t->buffer = binder_alloc_buf(target_proc, tr->data_size,
                         tr->offsets_size,
                         !reply && (t->flags & TF_ONE_WAY));
        if (t->buffer == NULL) {
#ifdef MTK_BINDER_DEBUG
            binder_user_error("%d:%d buffer allocation failed on %d:0\n",
                      proc->pid, thread->pid, target_proc->pid);
#endif
            return_error = BR_FAILED_REPLY;
            goto err_binder_alloc_buf_failed;
        }
        t->buffer->allow_user_free = 0;
        t->buffer->debug_id = t->debug_id;
        t->buffer->transaction = t;
#ifdef BINDER_MONITOR
        t->buffer->log_entry = e;
#endif
        t->buffer->target_node = target_node;
        trace_binder_transaction_alloc_buf(t->buffer);
        if (target_node)
            binder_inc_node(target_node, 1, 0, NULL);

        offp = (binder_size_t *)(t->buffer->data +
                     ALIGN(tr->data_size, sizeof(void *)));
        /* Copy ptr.buffer and ptr.offsets of the user-space
           binder_transaction_data into the kernel */
        if (copy_from_user(t->buffer->data,
                   (const void __user *)(uintptr_t)tr->data.ptr.buffer,
                   tr->data_size)) {
            binder_user_error("%d:%d got transaction with invalid data ptr\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
        }
        if (copy_from_user(offp,
                   (const void __user *)(uintptr_t)tr->data.ptr.offsets,
                   tr->offsets_size)) {
            binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
        }
        off_end = (void *)offp + tr->offsets_size;
        off_min = 0;
        for (; offp < off_end; offp++) {
            struct flat_binder_object *fp;

            if (*offp > t->buffer->data_size - sizeof(*fp) ||
                *offp < off_min ||
                t->buffer->data_size < sizeof(*fp) ||
                !IS_ALIGNED(*offp, sizeof(u32))) {
                binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
                          proc->pid, thread->pid,
                          (u64)*offp, (u64)off_min,
                          (u64)(t->buffer->data_size - sizeof(*fp)));
                return_error = BR_FAILED_REPLY;
                goto err_bad_offset;
            }
            fp = (struct flat_binder_object *)(t->buffer->data + *offp);
            off_min = *offp + sizeof(struct flat_binder_object);
            switch (fp->type) {
            case BINDER_TYPE_BINDER:
            case BINDER_TYPE_WEAK_BINDER: {
                struct binder_ref *ref;
                struct binder_node *node = binder_get_node(proc, fp->binder);

                if (node == NULL) {
                    /* Create the binder_node for the service's own process */
                    node = binder_new_node(proc, fp->binder, fp->cookie);
                    if (node == NULL) {
                        return_error = BR_FAILED_REPLY;
                        goto err_binder_new_node_failed;
                    }
                    if (fp->cookie != node->cookie) {
                        return_error = BR_FAILED_REPLY;
                        goto err_binder_get_ref_for_node_failed;
                    }
                    ref = binder_get_ref_for_node(target_proc, node);
                    if (ref == NULL) {
                        return_error = BR_FAILED_REPLY;
                        goto err_binder_get_ref_for_node_failed;
                    }
                    /* Rewrite the type to a HANDLE type */
                    if (fp->type == BINDER_TYPE_BINDER)
                        fp->type = BINDER_TYPE_HANDLE;
                    else
                        fp->type = BINDER_TYPE_WEAK_HANDLE;
                    fp->binder = 0;
                    fp->handle = ref->desc;   /* Set the handle value */
                    fp->cookie = 0;
                    binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                               &thread->todo);
                    trace_binder_transaction_node_to_ref(t, node, ref);
                    binder_debug(BINDER_DEBUG_TRANSACTION,
                             "        node %d u%016llx -> ref %d desc %d\n",
                             node->debug_id, (u64)node->ptr,
                             ref->debug_id, ref->desc);
                }
                break;
            }
            }
        }
        if (reply) {
            ...
        } else if (!(t->flags & TF_ONE_WAY)) {
            /* The non-oneway case */
            BUG_ON(t->buffer->async_transaction != 0);
            t->need_reply = 1;
            t->from_parent = thread->transaction_stack;
            thread->transaction_stack = t;
        } else {
            ...
        }
        /* Queue BINDER_WORK_TRANSACTION on the target queue;
           for this call the target queue is target_proc->todo */
        t->work.type = BINDER_WORK_TRANSACTION;
        list_add_tail(&t->work.entry, target_list);
        /* Queue BINDER_WORK_TRANSACTION_COMPLETE on the current thread's todo queue */
        tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
        list_add_tail(&tcomplete->entry, &thread->todo);
        /* Wake up the wait queue; for this call it is target_proc->wait */
        if (target_wait)
            wake_up_interruptible(target_wait);
    }
}
```
This driver part is hard to follow; it is not all clear to me yet, and will need deeper study later.
(Note: parts of the above are adapted from http://gityuan.com/2015/11/14/binder-add-service)
The Java Layer
The Java-layer binder keeps its naming as consistent as possible with the native-layer binder, and it is built on top of it: Java-layer binder calls ultimately go through JNI into the native binder API. So understanding the Java-layer binder calls requires a basic grasp of the native-layer binder first.
1. Java-layer binder framework initialization
Since the Java-layer Binder calls into the native binder through JNI, the corresponding JNI methods naturally need to be registered first. Recall the following code from the Zygote startup article:
```cpp
// In the start function of AndroidRuntime.cpp
// Set up the VM's JNI environment
if (startReg(env) < 0) {
    ALOGE("Unable to register all android natives\n");
    return;
}
```
startReg(env) is where the functions needed by the Java-layer Binder get registered. Inside startReg, register_jni_procs is called to invoke every registration function listed in the gRegJNI array. The binder-related registration entry is:
```cpp
REG_JNI(register_android_os_Binder),
```
The register_android_os_Binder function lives in android_util_Binder.cpp and does the following:
```cpp
int register_android_os_Binder(JNIEnv* env)
{
    // Register the JNI methods of the Binder class
    if (int_register_android_os_Binder(env) < 0)
        return -1;
    // Register the JNI methods of the BinderInternal class
    if (int_register_android_os_BinderInternal(env) < 0)
        return -1;
    // Register the JNI methods of the BinderProxy class
    if (int_register_android_os_BinderProxy(env) < 0)
        return -1;
    ...
    return 0;
}
```
1.1 int_register_android_os_Binder
```cpp
const char* const kBinderPathName = "android/os/Binder";

static int int_register_android_os_Binder(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderPathName);

    // Look up the android.os.Binder class and cache a reference to it
    gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    // Look up and cache the method ID of Binder's execTransact method
    gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
    // Look up and cache the mObject field of the Binder class
    gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");

    // Register the JNI methods
    return RegisterMethodsOrDie(
        env, kBinderPathName,
        gBinderMethods, NELEM(gBinderMethods));
}
```
FindClassOrDie and friends are thin wrappers around the underlying JNI methods; here is FindClassOrDie as an example:
```cpp
static inline jclass FindClassOrDie(JNIEnv* env, const char* class_name) {
    jclass clazz = env->FindClass(class_name);
    LOG_ALWAYS_FATAL_IF(clazz == NULL, "Unable to find class %s", class_name);
    return clazz;
}
```
So:
- FindClassOrDie(env, kBinderPathName) is equivalent to env->FindClass(kBinderPathName)
- MakeGlobalRefOrDie is equivalent to env->NewGlobalRef()
- GetMethodIDOrDie is equivalent to env->GetMethodID()
- GetFieldIDOrDie is equivalent to env->GetFieldID()
- RegisterMethodsOrDie is a little different: it is equivalent to AndroidRuntime::registerNativeMethods()
The code above caches the commonly used Binder fields and method IDs in the gBinderOffsets variable so they can be used directly later for efficiency, and finally binds the Java-layer Binder methods to their native functions via RegisterMethodsOrDie.
```cpp
static const JNINativeMethod gBinderMethods[] = {
     /* name, signature, funcPtr */
    { "getCallingPid", "()I", (void*)android_os_Binder_getCallingPid },
    { "getCallingUid", "()I", (void*)android_os_Binder_getCallingUid },
    { "clearCallingIdentity", "()J", (void*)android_os_Binder_clearCallingIdentity },
    { "restoreCallingIdentity", "(J)V", (void*)android_os_Binder_restoreCallingIdentity },
    { "setThreadStrictModePolicy", "(I)V", (void*)android_os_Binder_setThreadStrictModePolicy },
    { "getThreadStrictModePolicy", "()I", (void*)android_os_Binder_getThreadStrictModePolicy },
    { "flushPendingCommands", "()V", (void*)android_os_Binder_flushPendingCommands },
    { "init", "()V", (void*)android_os_Binder_init },
    { "destroy", "()V", (void*)android_os_Binder_destroy },
    { "blockUntilThreadAvailable", "()V", (void*)android_os_Binder_blockUntilThreadAvailable }
};
```
1.2 int_register_android_os_BinderInternal
```cpp
static int int_register_android_os_BinderInternal(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderInternalPathName);

    gBinderInternalOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderInternalOffsets.mForceGc = GetStaticMethodIDOrDie(env, clazz, "forceBinderGc", "()V");

    return RegisterMethodsOrDie(
        env, kBinderInternalPathName,
        gBinderInternalMethods, NELEM(gBinderInternalMethods));
}
```
Similarly, some commonly used fields and method IDs of the BinderInternal class are first cached in gBinderInternalOffsets, and then RegisterMethodsOrDie binds the BinderInternal methods to their native counterparts. gBinderInternalMethods contains the following functions:
```cpp
static const JNINativeMethod gBinderInternalMethods[] = {
     /* name, signature, funcPtr */
    { "getContextObject", "()Landroid/os/IBinder;", (void*)android_os_BinderInternal_getContextObject },
    { "joinThreadPool", "()V", (void*)android_os_BinderInternal_joinThreadPool },
    { "disableBackgroundScheduling", "(Z)V", (void*)android_os_BinderInternal_disableBackgroundScheduling },
    { "handleGc", "()V", (void*)android_os_BinderInternal_handleGc }
};
```
1.3 int_register_android_os_BinderProxy
```cpp
static int int_register_android_os_BinderProxy(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, "java/lang/Error");
    gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);

    clazz = FindClassOrDie(env, kBinderProxyPathName);
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice",
            "(Landroid/os/IBinder$DeathRecipient;)V");
    gBinderProxyOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
    gBinderProxyOffsets.mSelf = GetFieldIDOrDie(env, clazz, "mSelf",
                                                "Ljava/lang/ref/WeakReference;");
    gBinderProxyOffsets.mOrgue = GetFieldIDOrDie(env, clazz, "mOrgue", "J");

    clazz = FindClassOrDie(env, "java/lang/Class");
    gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");

    return RegisterMethodsOrDie(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}
```
Again, the same pattern: the relevant fields and method IDs are cached (in gErrorOffsets, gBinderProxyOffsets, and gClassOffsets), and then RegisterMethodsOrDie binds the BinderProxy methods to their native counterparts. gBinderProxyMethods contains the following methods:
```cpp
static const JNINativeMethod gBinderProxyMethods[] = {
     /* name, signature, funcPtr */
    {"pingBinder",          "()Z", (void*)android_os_BinderProxy_pingBinder},
    {"isBinderAlive",       "()Z", (void*)android_os_BinderProxy_isBinderAlive},
    {"getInterfaceDescriptor", "()Ljava/lang/String;", (void*)android_os_BinderProxy_getInterfaceDescriptor},
    {"transactNative",      "(ILandroid/os/Parcel;Landroid/os/Parcel;I)Z", (void*)android_os_BinderProxy_transact},
    {"linkToDeath",         "(Landroid/os/IBinder$DeathRecipient;I)V", (void*)android_os_BinderProxy_linkToDeath},
    {"unlinkToDeath",       "(Landroid/os/IBinder$DeathRecipient;I)Z", (void*)android_os_BinderProxy_unlinkToDeath},
    {"destroy",             "()V", (void*)android_os_BinderProxy_destroy},
};
```
2. addService in ServiceManager
```java
public static void addService(String name, IBinder service) {
    try {
        // Obtain the IServiceManager object, then call its addService method
        getIServiceManager().addService(name, service, false);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}
```
2.1 The getIServiceManager method
```java
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}
```
This is the singleton pattern: ServiceManagerNative.asInterface converts the object returned by BinderInternal.getContextObject into the IServiceManager object we need. Now let's look at BinderInternal's getContextObject method. As the previous section showed, getContextObject is a native method that corresponds to android_os_BinderInternal_getContextObject in the native layer.
2.2 The getContextObject method
This corresponds to the android_os_BinderInternal_getContextObject method in android_util_Binder.cpp.
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
```
Clearly, this calls ProcessState's getContextObject method, with NULL (i.e. 0) passed in. As covered in the C/C++ part of this article, that ultimately returns a BpBinder(0) object, which is then passed to javaObjectForIBinder. That function's main job is to convert the C/C++-layer BpBinder object into a Java-layer BinderProxy object.
2.3 The javaObjectForIBinder method
```cpp
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    // Check whether val is one of our own Binder subclasses;
    // for a BpBinder this returns false
    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize
    // looking/creation of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // On the first call no object has been attached yet, so this is NULL
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = jniGetReferent(env, object);
        if (res != NULL) {
            ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    // Call NewObject to create a BinderProxy object
    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
        // Set the BinderProxy.mObject member variable to the BpBinder object
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
        val->incStrong((void*)javaObjectForIBinder);

        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        // Attach the BinderProxy object's information to the BpBinder
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        // Also remember the death recipients registered on this proxy
        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        // The BinderProxy.mOrgue member variable records the death-notification list
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));

        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }

    // Return the BinderProxy object
    return object;
}
```
So this creates the Java-layer BinderProxy object from the C/C++-layer BpBinder object, and stores the BpBinder object in the BinderProxy.mObject member variable. Therefore,
```java
ServiceManagerNative.asInterface(BinderInternal.getContextObject())
```
returns, roughly speaking, the following layering:
```java
ServiceManagerNative.asInterface(new BinderProxy(new BpBinder(0)))
```
3. ServiceManagerNative.asInterface
From the analysis above, the argument passed to asInterface is a BinderProxy object.
```java
static public IServiceManager asInterface(IBinder obj) {
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}
```
The code above wraps the plain BinderProxy object into a ServiceManagerProxy object. Next, the ServiceManagerProxy constructor.
3.1 The ServiceManagerProxy constructor
```java
public ServiceManagerProxy(IBinder remote) {
    mRemote = remote;
}
```
The ServiceManagerProxy constructor stores the BinderProxy object passed in into the mRemote member variable. Following the earlier analysis, this BinderProxy wraps BpBinder(0) underneath; but this is the Java layer, so the BpBinder cannot be used directly. Next up is ServiceManagerProxy's addService function.
4. ServiceManagerProxy.addService
```java
public void addService(String name, IBinder service, boolean allowIsolated)
        throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    data.writeStrongBinder(service);
    data.writeInt(allowIsolated ? 1 : 0);
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}
```
This looks quite similar to the C/C++ layer: the data is written into a Parcel.
4.1 writeStrongBinder
```java
public final void writeStrongBinder(IBinder val) {
    nativeWriteStrongBinder(mNativePtr, val);
}
```
As usual, the Java layer does almost nothing itself and calls straight into native. The corresponding code is in android_os_Parcel.cpp; from the JNI registration table shown earlier, the JNI function matching nativeWriteStrongBinder is android_os_Parcel_writeStrongBinder.
```cpp
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz,
        jlong nativePtr, jobject object)
{
    // Convert the Java-layer Parcel into the native Parcel object
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        // ibinderForJavaObject is analyzed below
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}
```
4.2 ibinderForJavaObject
```cpp
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    // Check whether the object is an android.os.Binder instance; the Stub
    // subclasses in AIDL-generated service files, for example, all
    // extend android.os.Binder
    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        // If so, first fetch the JavaBBinderHolder stored in the field
        // cached in gBinderOffsets, then use its get method to obtain
        // a JavaBBinder object
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        return jbh != NULL ? jbh->get(env, obj) : NULL;
    }

    // If the Java object is a BinderProxy, return the native BpBinder object
    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return (IBinder*)
            env->GetLongField(obj, gBinderProxyOffsets.mObject);
    }

    ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;
}
```
4.3 JavaBBinderHolder.get()
```cpp
sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
    AutoMutex _l(mLock);
    sp<JavaBBinder> b = mBinder.promote();
    // On the first call this is NULL
    if (b == NULL) {
        // Create a new JavaBBinder object
        b = new JavaBBinder(env, obj);
        mBinder = b;
        ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
             b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
    }

    return b;
}
```
JavaBBinderHolder has a member variable mBinder that holds the created JavaBBinder object.
4.4 The JavaBBinder constructor
```cpp
JavaBBinder(JNIEnv* env, jobject object)
    : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
{
    ALOGV("Creating JavaBBinder %p\n", this);
    android_atomic_inc(&gNumLocalRefs);
    incRefsCreated(env);
}
```
The JavaBBinder class inherits from BBinder.
So parcel->writeStrongBinder(ibinderForJavaObject(env, object)) ultimately becomes parcel->writeStrongBinder(new JavaBBinder(env, obj)). Next, let's analyze the C/C++-layer Parcel's writeStrongBinder method.
4.5 writeStrongBinder
```cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            // A remote binder
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            // A local binder; this is the branch taken here
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}
```
This flattens the Binder object, distinguishing between local and remote Binder objects: for a local binder object the pointer to the Binder entity is recorded in cookie, otherwise the handle value is recorded in handle.
4.6 BinderProxy.transact
Now back to addService, to look at the mRemote.transact call. As analyzed earlier, mRemote is a BinderProxy object.
```java
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
    // Calls through JNI into the native-layer function
    return transactNative(code, data, reply, flags);
}
```

```cpp
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    // Convert the Java-layer Parcel into a native Parcel object
    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    // Convert the Java-layer Parcel into a native Parcel object
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }

    // As analyzed above, gBinderProxyOffsets.mObject holds BpBinder(0),
    // so target here points at BpBinder(0)
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }

    ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n",
            target, obj, code);

    bool time_binder_calls;
    int64_t start_millis;
    if (kEnableBinderSample) {
        // Only log the binder call duration for things on the Java-level main thread.
        // But if we don't
        time_binder_calls = should_time_binder_calls();

        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }

    // Directly invoke BpBinder's transact function
    status_t err = target->transact(code, *data, reply, flags);
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }

    // Check the transaction result
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}
```
The code above also ends up calling BpBinder's transact function; from that point on the flow is the same as at the C/C++ layer, and you can follow the earlier C/C++ analysis.