Streaming Video over WebRTC in Java: Using the Native APIs

Introduction

In the previous article we finished the preparatory work, so in this one I will share how I use the WebRTC Native APIs from Java.

Using the Native APIs

Creating the PeerConnectionFactory

As mentioned when I introduced the Native APIs, WebRTC uses three main threads to divide up its work, so the first step is to create these threads through the API. As an aside, the thread library that ships with WebRTC is genuinely powerful; you could even use it as a standalone cross-platform threading library, and if I get the chance I will write a dedicated article about its implementation. Back to the topic: the one important detail when creating these threads is that the network thread must be created with CreateWithSocketServer.

   void RTC::InitThreads() {
       signaling_thread = rtc::Thread::Create();
       signaling_thread->SetName("signaling", nullptr);
       RTC_CHECK(signaling_thread->Start()) << "Failed to start thread";
       WEBRTC_LOG("Original socket server used.", INFO);
       worker_thread = rtc::Thread::Create();
       worker_thread->SetName("worker", nullptr);
       RTC_CHECK(worker_thread->Start()) << "Failed to start thread";
       network_thread = rtc::Thread::CreateWithSocketServer();
       network_thread->SetName("network", nullptr);
       RTC_CHECK(network_thread->Start()) << "Failed to start thread";
   }
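
To illustrate the point above about rtc::Thread doubling as a general-purpose, cross-platform threading tool, here is a minimal sketch of my own (not taken from the original project; variable names are illustrative) that runs a task on such a thread through the same Invoke call that appears repeatedly later in this article:

    // Hedged illustration: using rtc::Thread as a plain task runner.
    std::unique_ptr<rtc::Thread> demo_thread = rtc::Thread::Create();
    demo_thread->SetName("demo", nullptr);
    RTC_CHECK(demo_thread->Start()) << "Failed to start thread";
    // Synchronously run a functor on the demo thread and collect its return value.
    int answer = demo_thread->Invoke<int>(RTC_FROM_HERE, [] { return 42; });
    demo_thread->Stop();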

In addition, if you have special audio-capture requirements like I do, you will need to implement your own AudioDeviceModule. The thing to watch out for here is that the AudioDeviceModule must be created on the worker thread, and it also has to be released on the worker thread.

   void RTC::Init(jobject audio_capturer, jobject video_capturer) { // Initialization of the PeerConnectionFactory
       this->video_capturer = video_capturer;
       InitThreads(); // Initialize the threads
       audio_device_module = worker_thread->Invoke<rtc::scoped_refptr<webrtc::AudioDeviceModule>>(
               RTC_FROM_HERE,
               rtc::Bind(
                       &RTC::InitJavaAudioDeviceModule,
                       this,
                        audio_capturer)); // Create the AudioDeviceModule on the worker thread
       WEBRTC_LOG("After fake audio device module.", INFO);
       InitFactory();
   }

   // An AudioDeviceModule that pulls its audio data from Java; its implementation is covered in detail later
   rtc::scoped_refptr<webrtc::AudioDeviceModule> RTC::InitJavaAudioDeviceModule(jobject audio_capturer) {
       RTC_DCHECK(worker_thread.get() == rtc::Thread::Current());
       WEBRTC_LOG("Create fake audio device module.", INFO);
       auto result = new rtc::RefCountedObject<FakeAudioDeviceModule>(
               FakeAudioDeviceModule::CreateJavaCapturerWrapper(audio_capturer),
               FakeAudioDeviceModule::CreateDiscardRenderer(44100));
       WEBRTC_LOG("Create fake audio device module finished.", INFO);
       is_connect_to_audio_card = true;
       return result;
   }

   ...
   // Releasing the AudioDeviceModule
   worker_thread->Invoke<void>(RTC_FROM_HERE, rtc::Bind(&RTC::ReleaseAudioDeviceModule, this));
   ...

   // audio_device_module is held as an rtc::RefCountedObject, i.e. a reference-counted pointer; once its reference count reaches zero the destructor of the underlying instance runs automatically, so all we have to do here is assign nullptr to it
   void RTC::ReleaseAudioDeviceModule() {
       RTC_DCHECK(worker_thread.get() == rtc::Thread::Current());
       audio_device_module = nullptr;
   }

With the three key threads and the AudioDeviceModule in place, we can create the PeerConnectionFactory. Because my use case restricts which ports may be used, I initialize those restrictions here as well; we will use them later when creating the PortAllocator. At this point you may wonder why video capture and audio capture are injected in different places. You are not alone, it puzzles me too =.= I even think the SocketFactory should be managed inside the PeerConnectionFactory, so that we would not have to build a new PortAllocator every time a PeerConnection is created.

   void RTC::InitFactory() {
       // Create the SocketFactory with port and IP restrictions
       socket_factory.reset(
               new rtc::SocketFactoryWrapper(network_thread.get(), this->white_private_ip_prefix, this->min_port,
                                             this->max_port));
       network_manager.reset(new rtc::BasicNetworkManager());
       // A video encoder factory of my own implementation is used here; it is covered in detail later
       peer_connection_factory = webrtc::CreatePeerConnectionFactory(
               network_thread.get(), worker_thread.get(), signaling_thread.get(), audio_device_module,
               webrtc::CreateBuiltinAudioEncoderFactory(), webrtc::CreateBuiltinAudioDecoderFactory(),
               CreateVideoEncoderFactory(hardware_accelerate), CreateVideoDecoderFactory(),
               nullptr, nullptr);
   }

Admittedly, while creating the PeerConnectionFactory I ran into quite a few interface designs that did not match my expectations, probably because my scenario is not the usual one, which makes the WebRTC interfaces feel less convenient. In any case, the PeerConnectionFactory is now in place. To recap the whole process: create the threads -> create the audio capture module -> create the encoder factories -> instantiate the PeerConnectionFactory.

Creating the PeerConnection

With the PeerConnectionFactory we can now create connections. In this step we need to provide the ICE server information, and I use the SocketFactory created in the previous step to build a PortAllocator, which is how the port range gets restricted. I also call a PeerConnection API in this step to cap the maximum transmission bitrate.

   // Create the PeerConnection
   PeerConnection *
   RTC::CreatePeerConnection(PeerConnectionObserver *peerConnectionObserver, std::string uri,
                             std::string username, std::string password, int max_bit_rate) {
       // Pass in the ICE server information
       webrtc::PeerConnectionInterface::RTCConfiguration configuration;
       webrtc::PeerConnectionInterface::IceServer ice_server;
       ice_server.uri = std::move(uri);
       ice_server.username = std::move(username);
       ice_server.password = std::move(password);
       configuration.servers.push_back(ice_server);
       // Disable TCP candidates
       configuration.tcp_candidate_policy = webrtc::PeerConnectionInterface::TcpCandidatePolicy::kTcpCandidatePolicyDisabled;
       // Reduce audio latency
       configuration.audio_jitter_buffer_fast_accelerate = true;
       // Use the previously created SocketFactory to build a PortAllocator and restrict the port range
       std::unique_ptr<cricket::PortAllocator> port_allocator(
               new cricket::BasicPortAllocator(network_manager.get(), socket_factory.get()));
       port_allocator->SetPortRange(this->min_port, this->max_port);
       // Create the PeerConnection and cap the bitrate
       return new PeerConnection(peer_connection_factory->CreatePeerConnection(
               configuration, std::move(port_allocator), nullptr, peerConnectionObserver), peerConnectionObserver,
                                 is_connect_to_audio_card, max_bit_rate);
   }

   // Cap the bitrate through the API
   void PeerConnection::ChangeBitrate(int bitrate) {
       auto bit_rate_setting = webrtc::BitrateSettings();
       bit_rate_setting.min_bitrate_bps = 30000;
       bit_rate_setting.max_bitrate_bps = bitrate;
       bit_rate_setting.start_bitrate_bps = bitrate;
       this->peer_connection->SetBitrate(bit_rate_setting);
   }

Creating the Audio/VideoSource

In this step we use the PeerConnectionFactory APIs to create the audio and video sources. When creating the AudioSource we can specify some audio options, while creating the VideoSource requires a VideoCapturer. One thing worth noting is that the VideoCapturer has to be created on the signaling thread.

   ...
   // Create the Audio/VideoSource
   audio_source = rtc->CreateAudioSource(GetAudioOptions());
   video_source = rtc->CreateVideoSource(rtc->CreateFakeVideoCapturerInSignalingThread());
   ...

   // Default audio options
   cricket::AudioOptions PeerConnection::GetAudioOptions() {
       cricket::AudioOptions options;
       options.audio_jitter_buffer_fast_accelerate = absl::optional<bool>(true);
       options.audio_jitter_buffer_max_packets = absl::optional<int>(10);
       options.echo_cancellation = absl::optional<bool>(false);
       options.auto_gain_control = absl::optional<bool>(false);
       options.noise_suppression = absl::optional<bool>(false);
       options.highpass_filter = absl::optional<bool>(false);
       options.stereo_swapping = absl::optional<bool>(false);
       options.typing_detection = absl::optional<bool>(false);
       options.experimental_agc = absl::optional<bool>(false);
       options.extended_filter_aec = absl::optional<bool>(false);
       options.delay_agnostic_aec = absl::optional<bool>(false);
       options.experimental_ns = absl::optional<bool>(false);
       options.residual_echo_detector = absl::optional<bool>(false);
       options.audio_network_adaptor = absl::optional<bool>(true);
       return options;
   }

   // Create the AudioSource
   rtc::scoped_refptr<webrtc::AudioSourceInterface> RTC::CreateAudioSource(const cricket::AudioOptions &options) {
       return peer_connection_factory->CreateAudioSource(options);
   }

   // Create the VideoCapturer on the signaling thread
   FakeVideoCapturer *RTC::CreateFakeVideoCapturerInSignalingThread() {
       if (video_capturer) {
           return signaling_thread->Invoke<FakeVideoCapturer *>(RTC_FROM_HERE,
                                                                rtc::Bind(&RTC::CreateFakeVideoCapturer, this,
                                                                          video_capturer));
       } else {
           return nullptr;
       }
   }

Creating the Audio/VideoTrack

This step is comparatively simple: take the source created in the previous step, add a name, and you get an Audio/VideoTrack. These are PeerConnectionFactory APIs as well.

   ...
   // Create the Audio/VideoTrack
   video_track = rtc->CreateVideoTrack("video_track", video_source.get());
   audio_track = rtc->CreateAudioTrack("audio_track", audio_source);
   ...

   // Create the VideoSource
   rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> RTC::CreateVideoSource(cricket::VideoCapturer *capturer) {
       return peer_connection_factory->CreateVideoSource(capturer);
   }

   // Create the VideoTrack
   rtc::scoped_refptr<webrtc::VideoTrackInterface> RTC::CreateVideoTrack(const std::string &label,
                                                                         webrtc::VideoTrackSourceInterface *source) {
       return peer_connection_factory->CreateVideoTrack(label, source);
   }
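
Only the video helpers are shown above; as a hedged sketch (my assumption, simply mirroring the video version and the standard PeerConnectionFactoryInterface::CreateAudioTrack call), the audio-track counterpart would look roughly like this:

   // Assumed counterpart for audio tracks; not taken verbatim from the original project.
   rtc::scoped_refptr<webrtc::AudioTrackInterface> RTC::CreateAudioTrack(const std::string &label,
                                                                         webrtc::AudioSourceInterface *source) {
       return peer_connection_factory->CreateAudioTrack(label, source);
   }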

Creating the LocalMediaStream

Call the PeerConnectionFactory API to create a LocalMediaStream, add the previously created Audio/VideoTrack to that stream, and finally add the stream to the PeerConnection.

   ...
   // Create the LocalMediaStream
   transport_stream = rtc->CreateLocalMediaStream("stream");
   // Add the Audio/VideoTrack
   transport_stream->AddTrack(video_track);
   transport_stream->AddTrack(audio_track);
   // Add the stream to the PeerConnection
   peer_connection->AddStream(transport_stream);
   ...

Creating the Data Channel

Compared with setting up audio and video transport, creating a data channel is much simpler: a single PeerConnection API call creates it, and at creation time you can specify a few configuration options, mostly governing the channel's reliability. Note that one data channel corresponds to two objects on a client: one representing the local end and one representing the remote end. The local DataChannel object is obtained from CreateDataChannel, while the remote DataChannel is delivered through the PeerConnection's OnDataChannel callback. To send data, call the DataChannel's Send API; when the remote side sends data, the OnMessage callback fires.

   // Create the data channel
   DataChannel *
   PeerConnection::CreateDataChannel(std::string label, webrtc::DataChannelInit config, DataChannelObserver *observer) {
       rtc::scoped_refptr<webrtc::DataChannelInterface> data_channel = peer_connection->CreateDataChannel(label, &config);
       data_channel->RegisterObserver(observer);
       return new DataChannel(data_channel, observer);
   }
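
The remote end mentioned above arrives through the PeerConnectionObserver::OnDataChannel callback, which the code here does not show. The following is a hedged sketch of how it could be captured; the class and member names are my own illustration, and the remaining mandatory callbacks are stubbed out for brevity:

   // Illustrative only: keep a reference to the remote end of the data channel.
   class SimplePeerConnectionObserver : public webrtc::PeerConnectionObserver {
   public:
       void OnDataChannel(rtc::scoped_refptr<webrtc::DataChannelInterface> channel) override {
           remote_data_channel = channel; // remote end, announced by the other peer
       }
       // The other callbacks are left empty here for brevity.
       void OnSignalingChange(webrtc::PeerConnectionInterface::SignalingState new_state) override {}
       void OnRenegotiationNeeded() override {}
       void OnIceConnectionChange(webrtc::PeerConnectionInterface::IceConnectionState new_state) override {}
       void OnIceGatheringChange(webrtc::PeerConnectionInterface::IceGatheringState new_state) override {}
       void OnIceCandidate(const webrtc::IceCandidateInterface *candidate) override {}

   private:
       rtc::scoped_refptr<webrtc::DataChannelInterface> remote_data_channel;
   };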

   // Available configuration options
   struct DataChannelInit {
     // Deprecated. Reliability is assumed, and channel will be unreliable if
     // maxRetransmitTime or MaxRetransmits is set.
     bool reliable = false;

     // True if ordered delivery is required.
     bool ordered = true;

     // The max period of time in milliseconds in which retransmissions will be
     // sent. After this time, no more retransmissions will be sent. -1 if unset.
     //
     // Cannot be set along with |maxRetransmits|.
     int maxRetransmitTime = -1;

     // The max number of retransmissions. -1 if unset.
     //
     // Cannot be set along with |maxRetransmitTime|.
     int maxRetransmits = -1;

     // This is set by the application and opaque to the WebRTC implementation.
     std::string protocol;

     // True if the channel has been externally negotiated and we do not send an
     // in-band signalling in the form of an "open" message. If this is true, |id|
     // below must be set; otherwise it should be unset and will be negotiated
     // in-band.
     bool negotiated = false;

     // The stream id, or SID, for SCTP data channels. -1 if unset (see above).
     int id = -1;
   };
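
As a usage example of these options together with the CreateDataChannel wrapper above (peer_connection_wrapper and observer are assumed names for an instance of the wrapper class and a DataChannelObserver, not identifiers from the original project), an unordered, lossy channel could be requested like this:

   webrtc::DataChannelInit config;
   config.ordered = false;    // allow out-of-order delivery
   config.maxRetransmits = 0; // drop lost messages instead of retransmitting them
   DataChannel *lossy_channel = peer_connection_wrapper->CreateDataChannel("lossy", config, observer);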

   // Send data
   void DataChannel::Send(webrtc::DataBuffer &data_buffer) {
       data_channel->Send(data_buffer);
   }

   // Message received.
   void OnMessage(const webrtc::DataBuffer &buffer) override {
        // When C++ calls back into Java, the current thread must first be attached to the JVM
       JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
       jbyteArray jbyte_array = CHAR_POINTER_2_J_BYTE_ARRAY(env, buffer.data.cdata(),
                                                            static_cast<int>(buffer.data.size()));
       jclass data_buffer = GET_DATA_BUFFER_CLASS();
       jmethodID init_method = env->GetMethodID(data_buffer, "<init>", "([BZ)V");
       jobject data_buffer_object = env->NewObject(data_buffer, init_method,
                                                   jbyte_array,
                                                   buffer.binary);
       jclass observer_class = env->GetObjectClass(java_observer);
       jmethodID java_event_method = env->GetMethodID(observer_class, "onMessage",
                                                      "(Lpackage/name/of/rtc4j/model/DataBuffer;)V");
        // Look up the corresponding Java callback method and invoke it
       env->CallVoidMethod(java_observer, java_event_method, data_buffer_object);
        // Release the related references
       env->ReleaseByteArrayElements(jbyte_array, env->GetByteArrayElements(jbyte_array, nullptr), JNI_ABORT);
       env->DeleteLocalRef(data_buffer_object);
       env->DeleteLocalRef(observer_class);
   }

    // Attach a C++ thread to the JVM
   JNIEnv *ATTACH_CURRENT_THREAD_IF_NEEDED() {
       JNIEnv *jni = GetEnv();
       if (jni)
           return jni;
       JavaVMAttachArgs args;
       args.version = JNI_VERSION_1_8;
       args.group = nullptr;
       args.name = const_cast<char *>("JNI-RTC");
   // Deal with difference in signatures between Oracle's jni.h and Android's.
   #ifdef _JavaSOFT_JNI_H_  // Oracle's jni.h violates the JNI spec!
       void *env = nullptr;
   #else
       JNIEnv* env = nullptr;
   #endif
       RTC_CHECK(!g_java_vm->AttachCurrentThread(&env, &args)) << "Failed to attach thread";
       RTC_CHECK(env) << "AttachCurrentThread handed back NULL!";
       jni = reinterpret_cast<JNIEnv *>(env);
       return jni;
   }

   JNIEnv *GetEnv() {
       void *env = nullptr;
       jint status = g_java_vm->GetEnv(&env, JNI_VERSION_1_8);
       RTC_CHECK(((env != nullptr) && (status == JNI_OK)) ||
                 ((env == nullptr) && (status == JNI_EDETACHED)))
           << "Unexpected GetEnv return: " << status << ":" << env;
       return reinterpret_cast<JNIEnv *>(env);
   }

    // Detach the current C++ thread from the JVM
   void DETACH_CURRENT_THREAD_IF_NEEDED() {
       // This function only runs on threads where |g_jni_ptr| is non-NULL, meaning
       // we were responsible for originally attaching the thread, so are responsible
       // for detaching it now.  However, because some JVM implementations (notably
       // Oracle's http://goo.gl/eHApYT) also use the pthread_key_create mechanism,
       // the JVMs accounting info for this thread may already be wiped out by the
       // time this is called. Thus it may appear we are already detached even though
       // it was our responsibility to detach!  Oh well.
       if (!GetEnv())
           return;
       jint status = g_java_vm->DetachCurrentThread();
       RTC_CHECK(status == JNI_OK) << "Failed to detach thread: " << status;
       RTC_CHECK(!GetEnv()) << "Detaching was a successful no-op???";
   }

In this step I introduced attaching and detaching threads, which deserves a brief explanation. As mentioned before, WebRTC has three main threads, the worker thread, the network thread and the signaling thread, and WebRTC's callbacks are executed on the worker thread. This worker thread is an independent thread created from C++ code, and such threads cannot obtain a JNIEnv as easily as code that is called into from Java. For example, consider the following class:

   public class Widget {
   private native void nativeMethod();
   }

The corresponding function declaration in the generated native header looks like this:

   JNIEXPORT void JNICALL
   Java_xxxxx_nativeMethod(JNIEnv *env, jobject instance);

As you can see, the first parameter of this declaration is a JNIEnv, through which we can call Java code in a reflection-like way. A thread created independently in C++ has no JNIEnv associated with it; if you want to call Java code from such a thread, you must first attach it to the JVM with JavaVM::AttachCurrentThread, after which you obtain a JNIEnv. Note that calling AttachCurrentThread on a thread that is already attached has no effect. If your thread is already attached, you can also get the JNIEnv by calling JavaVM::GetEnv; if it is not attached, that function returns JNI_EDETACHED. Finally, when the thread no longer needs to call Java code, call DetachCurrentThread to release it.

Establishing the PeerConnection

Once the stream from the previous step has been added to the PeerConnection, what remains is to use the PeerConnection APIs and callbacks to establish a connection with the other client. The main APIs involved are CreateOffer, CreateAnswer, SetLocalDescription and SetRemoteDescription. When calling CreateOffer or CreateAnswer we need to state whether this client wants to receive the other client's audio/video. In my scenario the Java server only pushes audio and video to other clients, so I set both receive options to false.

   void PeerConnection::CreateAnswer(jobject java_observer) {
       create_session_observer->SetGlobalJavaObserver(java_observer, "answer");
       auto options = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions();
       options.offer_to_receive_audio = false;
       options.offer_to_receive_video = false;
       peer_connection->CreateAnswer(create_session_observer, options);
   }

   void PeerConnection::CreateOffer(jobject java_observer) {
       create_session_observer->SetGlobalJavaObserver(java_observer, "offer");
       auto options = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions();
       options.offer_to_receive_audio = false;
       options.offer_to_receive_video = false;
       peer_connection->CreateOffer(create_session_observer, options);
   }

   webrtc::SdpParseError PeerConnection::SetLocalDescription(JNIEnv *env, jobject sdp) {
       webrtc::SdpParseError error;
       webrtc::SessionDescriptionInterface *session_description(
               webrtc::CreateSessionDescription(GET_STRING_FROM_OBJECT(env, sdp, const_cast<char *>("type")),
                                                GET_STRING_FROM_OBJECT(env, sdp, const_cast<char *>("sdp")), &error));
       peer_connection->SetLocalDescription(set_session_description_observer, session_description);
       return error;
   }

   webrtc::SdpParseError PeerConnection::SetRemoteDescription(JNIEnv *env, jobject sdp) {
       webrtc::SdpParseError error;
       webrtc::SessionDescriptionInterface *session_description(
               webrtc::CreateSessionDescription(GET_STRING_FROM_OBJECT(env, sdp, const_cast<char *>("type")),
                                                GET_STRING_FROM_OBJECT(env, sdp, const_cast<char *>("sdp")), &error));
       peer_connection->SetRemoteDescription(set_session_description_observer, session_description);
       return error;
   }

On the Java side I generally exchange the SDP as follows:

    // After the stream has been added to the PeerConnection
   sessionRTCMap.get(headerAccessor.getSessionId()).getPeerConnection().createOffer(sdp -> executor.submit(() -> {
       try {
           sessionRTCMap.get(headerAccessor.getSessionId()).getPeerConnection().setLocalDescription(sdp);
           sendMessage(headerAccessor.getSessionId(), SDP_DESTINATION, sdp);
       } catch (Exception e) {
           log.error("{}", e);
       }
   }));

    // After receiving the answer SDP from the remote peer
   SessionDescription sessionDescription = JSON.parseObject((String) requestResponse.getData(), SessionDescription.class);
   sessionRTCMap.get(headerAccessor.getSessionId()).getPeerConnection().setRemoteDescription(sessionDescription);

At this point the connection should normally be up. Next I will describe how I release all the related resources, which rounds off the normal usage scenario. This part also has plenty of pitfalls: because I was unfamiliar with WebRTC's pointer management at the time, I kept running into leaks and invalid-pointer bugs. Painful memories T.T

Releasing All Related Resources

Let's take the release procedure on the Java side as the starting point and walk through the whole resource release process.

   public void releaseResource() {
       lock.lock();
       try {
            if (videoDataChannel != null) { // If a DataChannel is in use, release the remote DataChannel object first
               videoDataChannel.close();
               videoDataChannel = null;
           }
           log.info("Release remote video data channel");
            if (localVideoDataChannel != null) { // Then release the local DataChannel object
               localVideoDataChannel.close();
               localVideoDataChannel = null;
           }
           log.info("Release local video data channel");
            if (peerConnection != null) { // Release the PeerConnection object
               peerConnection.close();
               peerConnection = null;
           }
           log.info("Release peer connection");
            if (rtc != null) { // Release the PeerConnectionFactory-related objects
               rtc.close();
           }
           log.info("Release rtc");
       } catch (Exception ignored) {
       }finally {
           destroyed = true;
           lock.unlock();
       }
   }

Then comes the corresponding C++ release code:

   DataChannel::~DataChannel() {
        data_channel->UnregisterObserver(); // First unregister the observer that was registered
        delete data_channel_observer; // Destroy the observer object
        data_channel->Close(); // Close the data channel
        //rtc::scoped_refptr<webrtc::DataChannelInterface> data_channel; (Created by webrtc::PeerConnectionInterface::CreateDataChannel)
        data_channel = nullptr; // Release the data channel (ref-counted pointer)
   }

   PeerConnection::~PeerConnection() {
        peer_connection->Close(); // Close the PeerConnection
        //rtc::scoped_refptr<webrtc::PeerConnectionInterface> peer_connection; (Created by webrtc::PeerConnectionFactoryInterface::CreatePeerConnection)
        peer_connection = nullptr; // Release the PeerConnection (ref-counted pointer)
        delete peer_connection_observer; // Destroy the observer that was used
        delete set_session_description_observer; // Destroy the observer that was used
        delete create_session_observer; // Destroy the observer that was used
   }

   RTC::~RTC() {
       //rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> peer_connection_factory; (Created by webrtc::CreatePeerConnectionFactory)
        peer_connection_factory = nullptr; // Release the PeerConnectionFactory
       WEBRTC_LOG("Destroy peer connection factory", INFO);
        worker_thread->Invoke<void>(RTC_FROM_HERE, rtc::Bind(&RTC::ReleaseAudioDeviceModule, this)); // Release the AudioDeviceModule on the worker thread, because it was created there
       signaling_thread->Invoke<void>(RTC_FROM_HERE, rtc::Bind(&RTC::DetachCurrentThread, this)); //Detach signalling thread
       worker_thread->Invoke<void>(RTC_FROM_HERE, rtc::Bind(&RTC::DetachCurrentThread, this)); //Detach worker thread
       network_thread->Invoke<void>(RTC_FROM_HERE, rtc::Bind(&RTC::DetachCurrentThread, this)); //Detach network thread
        worker_thread->Stop(); // Stop the thread
        signaling_thread->Stop(); // Stop the thread
        network_thread->Stop(); // Stop the thread
        worker_thread.reset(); // Destroy the thread (smart pointer)
        signaling_thread.reset(); // Destroy the thread (smart pointer)
        network_thread.reset(); // Destroy the thread (smart pointer)
        network_manager = nullptr; // Destroy the NetworkManager (smart pointer)
        socket_factory = nullptr; // Destroy the SocketFactory (smart pointer)
       WEBRTC_LOG("Stop threads", INFO);
       if (video_capturer) {
           JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
            env->DeleteGlobalRef(video_capturer); // Delete the global reference to the Java VideoCapturer object, which I stored in the RTC class via env->NewGlobalRef(video_capturer)
            // The Java reference to the AudioCapturer is not released here because it is held inside the AudioDeviceModule
       }
   }

At this point, if you only ever need the normal WebRTC usage scenario, you should now know how to call the WebRTC Native APIs from Java. The next part covers some API changes I made for my particular business scenario; if that interests you as well, read on.

Lessons Learned

A few words on how I approached this work: I first looked at how WebRTC is used from JavaScript to get a basic feel for the Native APIs, and I also referred to the Node.js implementation. Whenever I hit a problem I searched Google's WebRTC-Discuss forum, and if none of that produced a solution, I read through all the code related to the feature I was trying to implement =.=

