FFmpeg Multi-threaded Decoding

FFmpeg's multi-threaded encoders generally thread by functional module within a slice, as in h263, h263P, msmpeg4 (v1, v2, v3) and wmv1. The threads involved are: (1) Pre_estimation_motion_thread, preparation before motion estimation; (2) Estimation_motion_thread, motion estimation; (3) Mb_var_thread, other per-macroblock variables; (4) Encode_thread, the main encoding thread. There are exceptions: the FFV1 encoder, for example, threads with the slice as its unit of work.

FFmpeg's multi-threaded decoders come in two kinds, Frame level and Slice level. Slice-level multi-threading decodes different parts of the same frame at the same time. Frame-level multi-threading accepts the bitstream of several frames at once and decodes them in parallel: while the current frame is being displayed, the following frames are already being decoded in other threads.

1. Slice Threading

In FFmpeg, dvvideo_decoder, ffv1_decoder, h264_decoder, mpeg2_video_decoder and mpeg_video_decoder all support Slice Threading.

The implementation works as follows: a multi-threaded dispatch function execute() is first registered on the codec context; during decoding, when the codec reaches the slices of a frame it calls avctx->execute(), which distributes the slice-decoding work across the worker threads so that the slices of the frame are decoded in parallel.
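As a rough illustration of that call pattern, the sketch below shows how a slice-threaded decoder might hand its slices to avctx->execute(); the SliceArg struct, decode_slice() and decode_all_slices() are invented for this example and are not taken from any real FFmpeg decoder.

#include <stdint.h>
#include <libavcodec/avcodec.h>

/* Hypothetical sketch: driving avctx->execute() from a slice-threaded decoder. */
typedef struct SliceArg {
    int            slice_index;   // which slice this job should decode
    const uint8_t *data;          // bitstream of this slice
    int            size;
} SliceArg;

static int decode_slice(AVCodecContext *avctx, void *arg)
{
    SliceArg *s = arg;
    /* decode the macroblocks of slice s->slice_index from s->data here */
    return 0;
}

static int decode_all_slices(AVCodecContext *avctx, SliceArg *slices, int slice_count)
{
    /* execute() runs decode_slice() once per element of slices[],
     * distributing the jobs over the slice worker threads. */
    return avctx->execute(avctx, decode_slice, slices, NULL, slice_count, sizeof(*slices));
}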

The synchronization between the main thread and the decoding threads for Slice Threading is shown in Figure 1.


Figure 1: Synchronization between the main thread and the decoding threads (Slice Threading)

2. Frame Threading

The decoders that support Frame Threading so far are h264_decoder, huffyuv_decoder, ffvhuff_decoder, mdec_decoder, mimic_decoder, mpeg4_decoder, theora_decoder, vp3_decoder and vp8_decoder.

Frame Threading comes with the following restrictions: the user callback draw_horiz_band() must be thread safe; for better performance the user should supply the codec with a thread-safe get_buffer() callback; and the user must be able to cope with the extra latency that threading introduces. In addition, codecs that support Frame Threading require each packet to contain one complete frame. Buffer contents must not be read before ff_thread_await_progress() has been called, and likewise must not be written, including edge padding via draw_edges(), after ff_thread_report_progress() has been called.
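To make these requirements concrete, here is a minimal user-side sketch of enabling frame threading before avcodec_open2(). The helper name open_frame_threaded_decoder() and the choice of H.264 are only for illustration, and the snippet assumes avcodec_register_all() has already been called.

#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Illustrative sketch (not FFmpeg code): enable frame threading from the application side. */
static AVCodecContext *open_frame_threaded_decoder(void)
{
    AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *avctx = codec ? avcodec_alloc_context3(codec) : NULL;
    if (!avctx)
        return NULL;

    avctx->thread_count = 0;               /* 0: let libavcodec pick a thread count */
    avctx->thread_type  = FF_THREAD_FRAME; /* request frame threading */
    /* Any custom get_buffer2() installed here must be thread safe; otherwise
     * leave thread_safe_callbacks at 0 and the worker threads will call back
     * into the main thread (see submit_packet() later in this article). */

    if (avcodec_open2(avctx, codec, NULL) < 0) {
        av_free(avctx);
        return NULL;
    }
    return avctx;
}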

Each thread is in one of four states (see the state enum in PerThreadContext, quoted later). As Figure 2 shows, to stay thread safe, a codec that does not implement update_thread_context() and a thread-safe get_buffer() may only move to STATE_SETUP_FINISHED after it has finished decoding, which means the next thread can only start decoding once the current thread is done.

As Figure 3 shows, if the codec does implement update_thread_context() and a thread-safe get_buffer(), the thread can move to STATE_SETUP_FINISHED before decoding starts, so the next thread can decode in parallel with the current one; a codec-side sketch of this ordering follows Figure 3.


Figure 2: Thread state transitions when the codec does not implement update_thread_context() and a thread-safe get_buffer()


Figure 3: Thread state transitions when the codec implements update_thread_context() and a thread-safe get_buffer()
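Loosely, the ordering that makes the parallel case of Figure 3 possible can be sketched as below. The sketch is not taken from any real decoder, it would only compile inside the libavcodec source tree, and the exact signatures of the ff_thread_*() helpers vary between FFmpeg versions; the numbered comments carry the actual content.

/* Codec-side sketch of the setup/decode split behind Figure 3 (assumptions noted above). */
static int sketch_decode_frame(AVCodecContext *avctx, void *outdata,
                               int *got_frame, AVPacket *avpkt)
{
    /* 1. Parse the headers and update all inter-frame state the next thread
     *    will need (the same state that update_thread_context() copies
     *    between threads). */

    /* 2. Allocate the output frame through a thread-safe get_buffer(). */

    /* 3. Declare setup finished: the thread moves to STATE_SETUP_FINISHED and
     *    the next thread may start decoding the following packet. */
    ff_thread_finish_setup(avctx);

    /* 4. Decode the picture. Reads from reference frames wait on
     *    ff_thread_await_progress(); finished rows are published with
     *    ff_thread_report_progress() so that later frames can use them. */

    *got_frame = 1;
    return avpkt->size;
}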

The decoding main thread hands the bitstream to the appropriate decoding thread by calling submit_packet(). The synchronization between the main thread and the decoding threads is shown in Figure 4.


Figure 4: Synchronization between the main thread and the decoding threads (Frame Threading)

 

/**
 * Set the threading algorithms used.
 * Threading requires more than one thread.
 * Frame threading requires entire frames to be passed to the codec,
 * and introduces extra decoding delay, so it is incompatible with low_delay.
 * @param avctx The context.
 */

 // Validate the thread parameters

static void validate_thread_parameters(AVCodecContext *avctx)
{

    /* Frame threading requires CODEC_CAP_FRAME_THREADS in the codec's capabilities
     * and none of the incompatible flags/flags2 bits set on the context. */
    int frame_threading_supported = (avctx->codec->capabilities & CODEC_CAP_FRAME_THREADS)
                                && !(avctx->flags & CODEC_FLAG_TRUNCATED)
                                && !(avctx->flags & CODEC_FLAG_LOW_DELAY)
                                && !(avctx->flags2 & CODEC_FLAG2_CHUNKS);
    if (avctx->thread_count == 1) {    // only one thread requested: disable threading
        avctx->active_thread_type = 0;
    } else if (frame_threading_supported && (avctx->thread_type & FF_THREAD_FRAME)) { // the caller requested frame threading via avctx->thread_type |= FF_THREAD_FRAME
        avctx->active_thread_type = FF_THREAD_FRAME;   // use frame-level parallelism
    } else if (avctx->codec->capabilities & CODEC_CAP_SLICE_THREADS &&
               avctx->thread_type & FF_THREAD_SLICE) {  // slice-level parallelism likewise needs both codec support and a user request
        avctx->active_thread_type = FF_THREAD_SLICE;
    } else if (!(avctx->codec->capabilities & CODEC_CAP_AUTO_THREADS)) {
        avctx->thread_count       = 1;
        avctx->active_thread_type = 0;
    }

    if (avctx->thread_count > MAX_AUTO_THREADS)
        av_log(avctx, AV_LOG_WARNING,
               "Application has requested %d threads. Using a thread count greater than %d is not recommended.\n",
               avctx->thread_count, MAX_AUTO_THREADS);
}
 

Note:

If you are not familiar with frames and slices, see the earlier reposted article: http://blog.csdn.net/jwzhangjie/article/details/8739139

//The decode entry point

int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,
                                              int *got_picture_ptr,
                                              const AVPacket *avpkt)
{
    AVCodecInternal *avci = avctx->internal;
    int ret;
    // copy to ensure we do not change avpkt
    AVPacket tmp = *avpkt;

    if (avctx->codec->type != AVMEDIA_TYPE_VIDEO) {   // bail out if this is not a video codec
        av_log(avctx, AV_LOG_ERROR, "Invalid media type for video\n");
        return AVERROR(EINVAL);
    }

    *got_picture_ptr = 0;
    if ((avctx->coded_width || avctx->coded_height) && av_image_check_size(avctx->coded_width, avctx->coded_height, 0, avctx))
        return AVERROR(EINVAL);

    avcodec_get_frame_defaults(picture);

    if (!avctx->refcounted_frames)
        av_frame_unref(&avci->to_free);

    /* Frame-parallel decoding path: the codec must have set
     * codec->capabilities |= CODEC_CAP_DELAY (see the note after this function)
     * and avctx->active_thread_type must contain FF_THREAD_FRAME. */
    if ((avctx->codec->capabilities & CODEC_CAP_DELAY) || avpkt->size || (avctx->active_thread_type & FF_THREAD_FRAME)) {
        int did_split = av_packet_split_side_data(&tmp);
        apply_param_change(avctx, &tmp);

        avctx->pkt = &tmp;

        /* Requires a build configured with --enable-pthreads (a current
         * default-generated config.h already has #define HAVE_THREADS 1)
         * and avctx->active_thread_type |= FF_THREAD_FRAME. */
        if (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME)
            ret = ff_thread_decode_frame(avctx, picture, got_picture_ptr, &tmp); // frame-parallel decoding
        else { // single-threaded (or slice-threaded) decoding
            ret = avctx->codec->decode(avctx, picture, got_picture_ptr, &tmp);
            picture->pkt_dts = avpkt->dts;

            if(!avctx->has_b_frames){
                av_frame_set_pkt_pos(picture, avpkt->pos);
            }
            //FIXME these should be under if(!avctx->has_b_frames)
            /* get_buffer is supposed to set frame parameters */
            if (!(avctx->codec->capabilities & CODEC_CAP_DR1)) {
                if (!picture->sample_aspect_ratio.num)    picture->sample_aspect_ratio = avctx->sample_aspect_ratio;
                if (!picture->width)                      picture->width               = avctx->width;
                if (!picture->height)                     picture->height              = avctx->height;
                if (picture->format == AV_PIX_FMT_NONE)   picture->format              = avctx->pix_fmt;
            }
        }
        add_metadata_from_side_data(avctx, picture);

        emms_c(); //needed to avoid an emms_c() call before every return;

        avctx->pkt = NULL;
        if (did_split) {
            ff_packet_free_side_data(&tmp);
            if(ret == tmp.size)
                ret = avpkt->size;
        }

        if (ret < 0 && picture->data[0])
            av_frame_unref(picture);

        if (*got_picture_ptr) {
            if (!avctx->refcounted_frames) {
                avci->to_free = *picture;
                avci->to_free.extended_data = avci->to_free.data;
            }

            avctx->frame_number++;
            av_frame_set_best_effort_timestamp(picture,
                                               guess_correct_pts(avctx,
                                                                 picture->pkt_pts,
                                                                 picture->pkt_dts));
        }
    } else
        ret = 0;

    /* many decoders assign whole AVFrames, thus overwriting extended_data;
     * make sure it's set correctly */
    picture->extended_data = picture->data;

    return ret;
}

 

Note:

/**
 * Encoder or decoder requires flushing with NULL input at the end in order to
 * give the complete and correct output.
 *
 * NOTE: If this flag is not set, the codec is guaranteed to never be fed
 *       with NULL data. The user can still send NULL data to the public encode
 *       or decode function, but libavcodec will not pass it along to the codec
 *       unless this flag is set.
 *
 * Decoders:
 * The decoder has a non-zero delay and needs to be fed with avpkt->data=NULL,
 * avpkt->size=0 at the end to get the delayed data until the decoder no longer
 * returns frames.
 *
 * Encoders:
 * The encoder needs to be fed with NULL data at the end of encoding until the
 * encoder no longer returns data.
 *
 * NOTE: For encoders implementing the AVCodec.encode2() function, setting this
 *       flag also means that the encoder must set the pts and duration for
 *       each output packet. If this flag is not set, the pts and duration will
 *       be determined by libavcodec from the input frame.
 */

#define CODEC_CAP_DELAY           0x0020
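In practice this means that, at end of stream, the application keeps feeding empty packets until the decoder stops returning frames; with frame threading the delay is roughly thread_count-1 frames. A minimal draining sketch using the avcodec_decode_video2() API quoted above (flush_decoder() is a made-up helper):

#include <libavcodec/avcodec.h>

/* Illustrative draining loop for decoders with CODEC_CAP_DELAY set. */
static void flush_decoder(AVCodecContext *avctx, AVFrame *frame)
{
    AVPacket pkt;
    int got_picture;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* empty packet: ask for the delayed frames */
    pkt.size = 0;

    do {
        got_picture = 0;
        if (avcodec_decode_video2(avctx, frame, &got_picture, &pkt) < 0)
            break;
        if (got_picture) {
            /* consume the delayed frame here, then release it */
            av_frame_unref(frame);
        }
    } while (got_picture);
}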

 

//Frame-level decode function. avcodec_open2() decides between slice and frame decoding; see the analysis of avcodec_open2() below.

int ff_thread_decode_frame(AVCodecContext *avctx,
                           AVFrame *picture, int *got_picture_ptr,
                           AVPacket *avpkt)
{
    FrameThreadContext *fctx = avctx->thread_opaque; // see note 1 on thread_opaque
    int finished = fctx->next_finished; // index of the next thread whose output will be returned
    PerThreadContext *p; // see note 2 (PerThreadContext)
    int err;

    /*
     * Submit a packet to the next decoding thread.
     */

    p = &fctx->threads[fctx->next_decoding];
    err = update_context_from_user(p->avctx, avctx); // copy the user-supplied avctx settings into p->avctx
    if (err) return err;
    err = submit_packet(p, avpkt);
    if (err) return err;

    /*
     * If we're still receiving the initial packets, don't return a frame.
     */


    if (fctx->delaying) {
        if (fctx->next_decoding >= (avctx->thread_count-1)) fctx->delaying = 0;

        *got_picture_ptr=0;
        if (avpkt->size)
            return avpkt->size;
    }

    /*
     * Return the next available frame from the oldest thread.
     * If we're at the end of the stream, then we have to skip threads that
     * didn't output a frame, because we don't want to accidentally signal
     * EOF (avpkt->size == 0 && *got_picture_ptr == 0).
     */

    do {
        p = &fctx->threads[finished++];

        if (p->state != STATE_INPUT_READY) {
            pthread_mutex_lock(&p->progress_mutex);
            while (p->state != STATE_INPUT_READY) // wait until the worker has finished decoding (it then sets STATE_INPUT_READY and signals p->output_cond)
                pthread_cond_wait(&p->output_cond, &p->progress_mutex);
            pthread_mutex_unlock(&p->progress_mutex);
        }

        av_frame_move_ref(picture, &p->frame);
        *got_picture_ptr = p->got_frame;
        picture->pkt_dts = p->avpkt.dts;

        /*
         * A later call with avkpt->size == 0 may loop over all threads,
         * including this one, searching for a frame to return before being
         * stopped by the "finished != fctx->next_finished" condition.
         * Make sure we don't mistakenly return the same frame again.
         */
        p->got_frame = 0;

        if (finished >= avctx->thread_count) finished = 0;
    } while (!avpkt->size && !*got_picture_ptr && finished != fctx->next_finished);

    update_context_from_thread(avctx, p->avctx, 1);

    if (fctx->next_decoding >= avctx->thread_count) fctx->next_decoding = 0;

    fctx->next_finished = finished;

    /* return the size of the consumed packet if no error occurred */
    return (p->result >= 0) ? avpkt->size : p->result;
}
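From the caller's point of view the delaying logic above simply means that the first thread_count-1 calls may return no picture. A tolerant decode step therefore looks roughly like this (decode_one_packet() is a made-up wrapper around the API shown earlier):

#include <libavcodec/avcodec.h>

/* Sketch: feeding one packet and tolerating the frame-threading delay. */
static int decode_one_packet(AVCodecContext *avctx, AVFrame *frame, const AVPacket *pkt)
{
    int got_picture = 0;
    int ret = avcodec_decode_video2(avctx, frame, &got_picture, pkt);
    if (ret < 0)
        return ret;        /* decoding error */
    if (!got_picture)
        return 0;          /* no frame yet: it is still in flight in another thread */
    /* ... use frame here ... */
    av_frame_unref(frame);
    return 1;              /* one frame consumed */
}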
 

Notes:

1.

/**
     * thread opaque
     * Can be used by execute() to store some per AVCodecContext stuff.
     * - encoding: set by execute()
     * - decoding: set by execute()
     */
    void *thread_opaque;

2.

/**
 * Context used by codec threads and stored in their AVCodecContext thread_opaque.
 */
typedef struct PerThreadContext {
    struct FrameThreadContext *parent;

    pthread_t      thread;
    int            thread_init;
    pthread_cond_t input_cond;      ///< Used to wait for a new packet from the main thread.
    pthread_cond_t progress_cond;   ///< Used by child threads to wait for progress to change.
    pthread_cond_t output_cond;     ///< Used by the main thread to wait for frames to finish.

    pthread_mutex_t mutex;          ///< Mutex used to protect the contents of the PerThreadContext.
    pthread_mutex_t progress_mutex; ///< Mutex used to protect frame progress values and progress_cond.

    AVCodecContext *avctx;          ///< Context used to decode packets passed to this thread.

    AVPacket       avpkt;           ///< Input packet (for decoding) or output (for encoding).
    uint8_t       *buf;             ///< backup storage for packet data when the input packet is not refcounted
    int            allocated_buf_size; ///< Size allocated for buf

    AVFrame frame;                  ///< Output frame (for decoding) or input (for encoding).
    int     got_frame;              ///< The output of got_picture_ptr from the last avcodec_decode_video() call.
    int     result;                 ///< The result of the last codec decode/encode() call.

    enum {   // compare with the frame-threading state diagrams shown earlier (Figures 2 and 3)
        STATE_INPUT_READY,          ///< Set when the thread is awaiting a packet.
        STATE_SETTING_UP,           ///< Set before the codec has called ff_thread_finish_setup().
        STATE_GET_BUFFER,           /**<
                                     * Set when the codec calls get_buffer().
                                     * State is returned to STATE_SETTING_UP afterwards.
                                     */
        STATE_SETUP_FINISHED        ///< Set after the codec has called ff_thread_finish_setup().
    } state;

    /**
     * Array of frames passed to ff_thread_release_buffer().
     * Frames are released after all threads referencing them are finished.
     */
    AVFrame *released_buffers;
    int  num_released_buffers;
    int      released_buffers_allocated;

    AVFrame *requested_frame;       ///< AVFrame the codec passed to get_buffer()
    int      requested_flags;       ///< flags passed to get_buffer() for requested_frame
} PerThreadContext;

 

//After the codec has been set up, the next step is to open the decoder
int attribute_align_arg avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options)
{
    int ret = 0;
    AVDictionary *tmp = NULL;

    if (avcodec_is_open(avctx))  // checks avctx->internal, see note 1
        return 0;

    if ((!codec && !avctx->codec)) {
        av_log(avctx, AV_LOG_ERROR, "No codec provided to avcodec_open2()\n");
        return AVERROR(EINVAL);
    }
    if ((codec && avctx->codec && codec != avctx->codec)) {
        av_log(avctx, AV_LOG_ERROR, "This AVCodecContext was allocated for %s, "
                                    "but %s passed to avcodec_open2()\n", avctx->codec->name, codec->name);
        return AVERROR(EINVAL);
    }
    if (!codec)
        codec = avctx->codec;
// validate extradata; when hardware-decoding H.264 the caller must put the SPS/PPS into extradata
    if (avctx->extradata_size < 0 || avctx->extradata_size >= FF_MAX_EXTRADATA_SIZE)
        return AVERROR(EINVAL);

    if (options)   // options is usually NULL
        av_dict_copy(&tmp, *options, 0);

    ret = ff_lock_avcodec(avctx);
    if (ret < 0)
        return ret;

    avctx->internal = av_mallocz(sizeof(AVCodecInternal));
    if (!avctx->internal) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    avctx->internal->pool = av_mallocz(sizeof(*avctx->internal->pool));
    if (!avctx->internal->pool) {
        ret = AVERROR(ENOMEM);
        goto free_and_end;
    }

    if (codec->priv_data_size > 0) {
        if (!avctx->priv_data) {
            avctx->priv_data = av_mallocz(codec->priv_data_size);
            if (!avctx->priv_data) {
                ret = AVERROR(ENOMEM);
                goto end;
            }
            if (codec->priv_class) {
                *(const AVClass **)avctx->priv_data = codec->priv_class;
                av_opt_set_defaults(avctx->priv_data);
            }
        }
        if (codec->priv_class && (ret = av_opt_set_dict(avctx->priv_data, &tmp)) < 0)
            goto free_and_end;
    } else {
        avctx->priv_data = NULL;
    } // the code above mainly allocates memory
    if ((ret = av_opt_set_dict(avctx, &tmp)) < 0)
        goto free_and_end;

    // only call avcodec_set_dimensions() for non H.264/VP6F codecs so as not to overwrite previously setup dimensions
    if (!(avctx->coded_width && avctx->coded_height && avctx->width && avctx->height &&
          (avctx->codec_id == AV_CODEC_ID_H264 || avctx->codec_id == AV_CODEC_ID_VP6F))) {
    if (avctx->coded_width && avctx->coded_height)
        avcodec_set_dimensions(avctx, avctx->coded_width, avctx->coded_height);
    else if (avctx->width && avctx->height)
        avcodec_set_dimensions(avctx, avctx->width, avctx->height);
    }

    if ((avctx->coded_width || avctx->coded_height || avctx->width || avctx->height)
        && (  av_image_check_size(avctx->coded_width, avctx->coded_height, 0, avctx) < 0
           || av_image_check_size(avctx->width,       avctx->height,       0, avctx) < 0)) {
        av_log(avctx, AV_LOG_WARNING, "Ignoring invalid width/height values\n");
        avcodec_set_dimensions(avctx, 0, 0);
    }

    /* if the decoder init function was already called previously,
     * free the already allocated subtitle_header before overwriting it */
    if (av_codec_is_decoder(codec))
        av_freep(&avctx->subtitle_header);

    if (avctx->channels > FF_SANE_NB_CHANNELS) {
        ret = AVERROR(EINVAL);
        goto free_and_end;
    }

    avctx->codec = codec;
    if ((avctx->codec_type == AVMEDIA_TYPE_UNKNOWN || avctx->codec_type == codec->type) &&
        avctx->codec_id == AV_CODEC_ID_NONE) {
        avctx->codec_type = codec->type;
        avctx->codec_id   = codec->id;
    }
    if (avctx->codec_id != codec->id || (avctx->codec_type != codec->type
                                         && avctx->codec_type != AVMEDIA_TYPE_ATTACHMENT)) {
        av_log(avctx, AV_LOG_ERROR, "Codec type or id mismatches\n");
        ret = AVERROR(EINVAL);
        goto free_and_end;
    }
    avctx->frame_number = 0;
    avctx->codec_descriptor = avcodec_descriptor_get(avctx->codec_id);

    if (avctx->codec->capabilities & CODEC_CAP_EXPERIMENTAL &&
        avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) {
        const char *codec_string = av_codec_is_encoder(codec) ? "encoder" : "decoder";
        AVCodec *codec2;
        av_log(NULL, AV_LOG_ERROR,
               "The %s '%s' is experimental but experimental codecs are not enabled, "
               "add '-strict %d' if you want to use it.\n",
               codec_string, codec->name, FF_COMPLIANCE_EXPERIMENTAL);
        codec2 = av_codec_is_encoder(codec) ? avcodec_find_encoder(codec->id) : avcodec_find_decoder(codec->id);
        if (!(codec2->capabilities & CODEC_CAP_EXPERIMENTAL))
            av_log(NULL, AV_LOG_ERROR, "Alternatively use the non experimental %s '%s'.\n",
                codec_string, codec2->name);
        ret = AVERROR_EXPERIMENTAL;
        goto free_and_end;
    }

    if (avctx->codec_type == AVMEDIA_TYPE_AUDIO &&
        (!avctx->time_base.num || !avctx->time_base.den)) {
        avctx->time_base.num = 1;
        avctx->time_base.den = avctx->sample_rate;
    }

    if (!HAVE_THREADS)
        av_log(avctx, AV_LOG_WARNING, "Warning: not compiled with thread support, using thread emulation\n");

    if (CONFIG_FRAME_THREAD_ENCODER) {
        ff_unlock_avcodec(); //we will instanciate a few encoders thus kick the counter to prevent false detection of a problem
        ret = ff_frame_thread_encoder_init(avctx, options ? *options : NULL); 
        ff_lock_avcodec(avctx);
        if (ret < 0)
            goto free_and_end;
    }
// initialize decoding/encoding threads (slice or frame), unless the frame-threaded encoder is in use
    if (HAVE_THREADS && !avctx->thread_opaque
        && !(avctx->internal->frame_thread_encoder && (avctx->active_thread_type&FF_THREAD_FRAME))) {
        ret = ff_thread_init(avctx); // chooses slice or frame threading based on the user's settings and initializes the threads
        if (ret < 0) {
            goto free_and_end;
        }
    }
    if (!HAVE_THREADS && !(codec->capabilities & CODEC_CAP_AUTO_THREADS))
        avctx->thread_count = 1;

    if (avctx->codec->max_lowres < avctx->lowres || avctx->lowres < 0) {
        av_log(avctx, AV_LOG_ERROR, "The maximum value for lowres supported by the decoder is %d\n",
               avctx->codec->max_lowres);
        ret = AVERROR(EINVAL);
        goto free_and_end;

    }

// From here on, initialization diverges depending on whether this is an encoder or a decoder


    if (av_codec_is_encoder(avctx->codec)) {
        int i;
        if (avctx->codec->sample_fmts) {
            for (i = 0; avctx->codec->sample_fmts[i] != AV_SAMPLE_FMT_NONE; i++) {
                if (avctx->sample_fmt == avctx->codec->sample_fmts[i])
                    break;
                if (avctx->channels == 1 &&
                    av_get_planar_sample_fmt(avctx->sample_fmt) ==
                    av_get_planar_sample_fmt(avctx->codec->sample_fmts[i])) {
                    avctx->sample_fmt = avctx->codec->sample_fmts[i];
                    break;
                }
            }
            if (avctx->codec->sample_fmts[i] == AV_SAMPLE_FMT_NONE) {
                char buf[128];
                snprintf(buf, sizeof(buf), "%d", avctx->sample_fmt);
                av_log(avctx, AV_LOG_ERROR, "Specified sample format %s is invalid or not supported\n",
                       (char *)av_x_if_null(av_get_sample_fmt_name(avctx->sample_fmt), buf));
                ret = AVERROR(EINVAL);
                goto free_and_end;
            }
        }
        if (avctx->codec->pix_fmts) {
            for (i = 0; avctx->codec->pix_fmts[i] != AV_PIX_FMT_NONE; i++)
                if (avctx->pix_fmt == avctx->codec->pix_fmts[i])
                    break;
            if (avctx->codec->pix_fmts[i] == AV_PIX_FMT_NONE
                && !((avctx->codec_id == AV_CODEC_ID_MJPEG || avctx->codec_id == AV_CODEC_ID_LJPEG)
                     && avctx->strict_std_compliance <= FF_COMPLIANCE_UNOFFICIAL)) {
                char buf[128];
                snprintf(buf, sizeof(buf), "%d", avctx->pix_fmt);
                av_log(avctx, AV_LOG_ERROR, "Specified pixel format %s is invalid or not supported\n",
                       (char *)av_x_if_null(av_get_pix_fmt_name(avctx->pix_fmt), buf));
                ret = AVERROR(EINVAL);
                goto free_and_end;
            }
        }
        if (avctx->codec->supported_samplerates) {
            for (i = 0; avctx->codec->supported_samplerates[i] != 0; i++)
                if (avctx->sample_rate == avctx->codec->supported_samplerates[i])
                    break;
            if (avctx->codec->supported_samplerates[i] == 0) {
                av_log(avctx, AV_LOG_ERROR, "Specified sample rate %d is not supported\n",
                       avctx->sample_rate);
                ret = AVERROR(EINVAL);
                goto free_and_end;
            }
        }
        if (avctx->codec->channel_layouts) {
            if (!avctx->channel_layout) {
                av_log(avctx, AV_LOG_WARNING, "Channel layout not specified\n");
            } else {
                for (i = 0; avctx->codec->channel_layouts[i] != 0; i++)
                    if (avctx->channel_layout == avctx->codec->channel_layouts[i])
                        break;
                if (avctx->codec->channel_layouts[i] == 0) {
                    char buf[512];
                    av_get_channel_layout_string(buf, sizeof(buf), -1, avctx->channel_layout);
                    av_log(avctx, AV_LOG_ERROR, "Specified channel layout '%s' is not supported\n", buf);
                    ret = AVERROR(EINVAL);
                    goto free_and_end;
                }
            }
        }
        if (avctx->channel_layout && avctx->channels) {
            int channels = av_get_channel_layout_nb_channels(avctx->channel_layout);
            if (channels != avctx->channels) {
                char buf[512];
                av_get_channel_layout_string(buf, sizeof(buf), -1, avctx->channel_layout);
                av_log(avctx, AV_LOG_ERROR,
                       "Channel layout '%s' with %d channels does not match number of specified channels %d\n",
                       buf, channels, avctx->channels);
                ret = AVERROR(EINVAL);
                goto free_and_end;
            }
        } else if (avctx->channel_layout) {
            avctx->channels = av_get_channel_layout_nb_channels(avctx->channel_layout);
        }
        if(avctx->codec_type == AVMEDIA_TYPE_VIDEO &&
           avctx->codec_id != AV_CODEC_ID_PNG // For mplayer
        ) {
            if (avctx->width <= 0 || avctx->height <= 0) {
                av_log(avctx, AV_LOG_ERROR, "dimensions not set\n");
                ret = AVERROR(EINVAL);
                goto free_and_end;
            }
        }
        if (   (avctx->codec_type == AVMEDIA_TYPE_VIDEO || avctx->codec_type == AVMEDIA_TYPE_AUDIO)
            && avctx->bit_rate>0 && avctx->bit_rate<1000) {
            av_log(avctx, AV_LOG_WARNING, "Bitrate %d is extremely low, maybe you mean %dk\n", avctx->bit_rate, avctx->bit_rate);
        }

        if (!avctx->rc_initial_buffer_occupancy)
            avctx->rc_initial_buffer_occupancy = avctx->rc_buffer_size * 3 / 4;
    }

    avctx->pts_correction_num_faulty_pts =
    avctx->pts_correction_num_faulty_dts = 0;
    avctx->pts_correction_last_pts =
    avctx->pts_correction_last_dts = INT64_MIN;

    if (   avctx->codec->init && (!(avctx->active_thread_type&FF_THREAD_FRAME)
        || avctx->internal->frame_thread_encoder)) {
        ret = avctx->codec->init(avctx);
        if (ret < 0) {
            goto free_and_end;
        }
    }
 

    ret = 0;

    /* The section above checks whether the codec is an encoder and, if so,
     * validates the encoding parameters; we only discuss decoding here.
     * The decoder branch below performs the matching conditional setup. */


    if (av_codec_is_decoder(avctx->codec)) {
        if (!avctx->bit_rate)
            avctx->bit_rate = get_bit_rate(avctx); // if no bit_rate was set, derive one from the context
        /* validate channel layout from the decoder */
        if (avctx->channel_layout) {
            int channels = av_get_channel_layout_nb_channels(avctx->channel_layout);
            if (!avctx->channels)
                avctx->channels = channels;
            else if (channels != avctx->channels) {
                char buf[512];
                av_get_channel_layout_string(buf, sizeof(buf), -1, avctx->channel_layout);
                av_log(avctx, AV_LOG_WARNING,
                       "Channel layout '%s' with %d channels does not match specified number of channels %d: "
                       "ignoring specified channel layout\n",
                       buf, channels, avctx->channels);
                avctx->channel_layout = 0;
            }
        }
        if (avctx->channels && avctx->channels < 0 ||
            avctx->channels > FF_SANE_NB_CHANNELS) {
            ret = AVERROR(EINVAL);
            goto free_and_end;
        }
        if (avctx->sub_charenc) {
            if (avctx->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                av_log(avctx, AV_LOG_ERROR, "Character encoding is only "
                       "supported with subtitles codecs\n");
                ret = AVERROR(EINVAL);
                goto free_and_end;
            } else if (avctx->codec_descriptor->props & AV_CODEC_PROP_BITMAP_SUB) {
                av_log(avctx, AV_LOG_WARNING, "Codec '%s' is bitmap-based, "
                       "subtitles character encoding will be ignored\n",
                       avctx->codec_descriptor->name);
                avctx->sub_charenc_mode = FF_SUB_CHARENC_MODE_DO_NOTHING;
            } else {
                /* input character encoding is set for a text based subtitle
                 * codec at this point */
                if (avctx->sub_charenc_mode == FF_SUB_CHARENC_MODE_AUTOMATIC)
                    avctx->sub_charenc_mode = FF_SUB_CHARENC_MODE_PRE_DECODER;

                if (avctx->sub_charenc_mode == FF_SUB_CHARENC_MODE_PRE_DECODER) {
#if CONFIG_ICONV
                    iconv_t cd = iconv_open("UTF-8", avctx->sub_charenc);
                    if (cd == (iconv_t)-1) {
                        av_log(avctx, AV_LOG_ERROR, "Unable to open iconv context "
                               "with input character encoding \"%s\"\n", avctx->sub_charenc);
                        ret = AVERROR(errno);
                        goto free_and_end;
                    }
                    iconv_close(cd);
#else
                    av_log(avctx, AV_LOG_ERROR, "Character encoding subtitles "
                           "conversion needs a libavcodec built with iconv support "
                           "for this codec\n");
                    ret = AVERROR(ENOSYS);
                    goto free_and_end;
#endif
                }
            }
        }
    }
end:
    ff_unlock_avcodec();
    if (options) {
        av_dict_free(options);
        *options = tmp;
    }

    return ret;
free_and_end:
    av_dict_free(&tmp);
    av_freep(&avctx->priv_data);
    if (avctx->internal)
        av_freep(&avctx->internal->pool);
    av_freep(&avctx->internal);
    avctx->codec = NULL;
    goto end;
}
 

Note:

1. /**
     * Private context used for internal data.
     * Unlike priv_data, this is not codec-specific. It is used in general
     * libavcodec functions.
     */
    struct AVCodecInternal *internal;

 

//Decides whether slice threading or frame threading gets initialized
int ff_thread_init(AVCodecContext *avctx)
{
    if (avctx->thread_opaque) {
        av_log(avctx, AV_LOG_ERROR, "avcodec_thread_init is ignored after avcodec_open\n");
        return -1;
    }

#if HAVE_W32THREADS
    w32thread_init();
#endif

    if (avctx->codec) {
        // validate_thread_parameters() sets avctx->active_thread_type, which decides which branch below is taken
        validate_thread_parameters(avctx); // see the walkthrough of validate_thread_parameters() earlier in this article
        
        if (avctx->active_thread_type&FF_THREAD_SLICE)
            return thread_init(avctx);
        else if (avctx->active_thread_type&FF_THREAD_FRAME)
            return frame_thread_init(avctx);  // frame_thread_init() is shown below
    }

    return 0;
}

//When ff_thread_init() selects frame threading, it calls frame_thread_init()

static int frame_thread_init(AVCodecContext *avctx)
{
    int thread_count = avctx->thread_count;
    const AVCodec *codec = avctx->codec;
    AVCodecContext *src = avctx;
    FrameThreadContext *fctx;
    int i, err = 0;
// if thread_count was not set (or left at 0) when configuring the codec, it is chosen automatically from the device's logical CPU count
    if (!thread_count) {
        int nb_cpus = ff_get_logical_cpus(avctx);
        if ((avctx->debug & (FF_DEBUG_VIS_QP | FF_DEBUG_VIS_MB_TYPE)) || avctx->debug_mv)
            nb_cpus = 1;
        // use number of cores + 1 as thread count if there is more than one
        if (nb_cpus > 1)
            thread_count = avctx->thread_count = FFMIN(nb_cpus + 1, MAX_AUTO_THREADS);  //#define FFMIN(a,b) ((a) > (b) ? (b) : (a))
        else
            thread_count = avctx->thread_count = 1;
    }

    if (thread_count <= 1) {  // with thread_count <= 1 frame threading is disabled, so the frame-level decode path will never be taken
        avctx->active_thread_type = 0;
        return 0;
    }

    avctx->thread_opaque = fctx = av_mallocz(sizeof(FrameThreadContext));

    fctx->threads = av_mallocz(sizeof(PerThreadContext) * thread_count);  // effectively a thread pool with thread_count per-thread contexts
    pthread_mutex_init(&fctx->buffer_mutex, NULL);
    fctx->delaying = 1;

    for (i = 0; i < thread_count; i++) {
        AVCodecContext *copy = av_malloc(sizeof(AVCodecContext));
        PerThreadContext *p  = &fctx->threads[i];

        pthread_mutex_init(&p->mutex, NULL);
        pthread_mutex_init(&p->progress_mutex, NULL);
        pthread_cond_init(&p->input_cond, NULL);
        pthread_cond_init(&p->progress_cond, NULL);
        pthread_cond_init(&p->output_cond, NULL);

        p->parent = fctx;
        p->avctx  = copy;

        if (!copy) {
            err = AVERROR(ENOMEM);
            goto error;
        }

        *copy = *src;
        copy->thread_opaque = p;
        copy->pkt = &p->avpkt;

        if (!i) {
            src = copy;

            if (codec->init)
                err = codec->init(copy);
            // propagate values from this first thread's context back into the user's avctx
            update_context_from_thread(avctx, copy, 1);
        } else {
            copy->priv_data = av_malloc(codec->priv_data_size);
            if (!copy->priv_data) {
                err = AVERROR(ENOMEM);
                goto error;
            }
            memcpy(copy->priv_data, src->priv_data, codec->priv_data_size);
            copy->internal = av_malloc(sizeof(AVCodecInternal));
            if (!copy->internal) {
                err = AVERROR(ENOMEM);
                goto error;
            }
            *copy->internal = *src->internal;
            copy->internal->is_copy = 1;

            if (codec->init_thread_copy)
                err = codec->init_thread_copy(copy);
        }

        if (err) goto error;
        // create the worker thread, which runs frame_worker_thread()
        err = AVERROR(pthread_create(&p->thread, NULL, frame_worker_thread, p));
        p->thread_init= !err;
        if(!p->thread_init)
            goto error;
    }

    return 0;

error:
    frame_thread_free(avctx, i+1);

    return err;
}

ff_thread_decode_frame() calls submit_packet() to hand the bitstream to the corresponding decoding thread, driving the thread state changes; see the flow chart referenced in note 1 below.

static int submit_packet(PerThreadContext *p, AVPacket *avpkt)
{
    FrameThreadContext *fctx = p->parent;
    PerThreadContext *prev_thread = fctx->prev_thread;
    const AVCodec *codec = p->avctx->codec;

    if (!avpkt->size && !(codec->capabilities & CODEC_CAP_DELAY)) return 0;

    pthread_mutex_lock(&p->mutex);

    release_delayed_buffers(p);

    if (prev_thread) {
        int err;
        if (prev_thread->state == STATE_SETTING_UP) {
            pthread_mutex_lock(&prev_thread->progress_mutex);
            while (prev_thread->state == STATE_SETTING_UP)  // while the previous thread is still in STATE_SETTING_UP, wait for it
                pthread_cond_wait(&prev_thread->progress_cond, &prev_thread->progress_mutex);
            pthread_mutex_unlock(&prev_thread->progress_mutex);
        }

        err = update_context_from_thread(p->avctx, prev_thread->avctx, 0); // once the previous thread reaches STATE_SETUP_FINISHED, copy its updated context
        if (err) {
            pthread_mutex_unlock(&p->mutex);
            return err;
        }
    }

    av_buffer_unref(&p->avpkt.buf);
    p->avpkt = *avpkt;
    if (avpkt->buf)
        p->avpkt.buf = av_buffer_ref(avpkt->buf);
    else {
        av_fast_malloc(&p->buf, &p->allocated_buf_size, avpkt->size + FF_INPUT_BUFFER_PADDING_SIZE);
        p->avpkt.data = p->buf;
        memcpy(p->buf, avpkt->data, avpkt->size);
        memset(p->buf + avpkt->size, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    }

    p->state = STATE_SETTING_UP;
    pthread_cond_signal(&p->input_cond); // tell the decoding thread, which waits on p->input_cond, that its input packet is ready
    pthread_mutex_unlock(&p->mutex);

    /*
     * If the client doesn't have a thread-safe get_buffer(),
     * then decoding threads call back to the main thread,
     * and it calls back to the client here.
     */

    if (!p->avctx->thread_safe_callbacks && (
#if FF_API_GET_BUFFER
         p->avctx->get_buffer ||
#endif
         p->avctx->get_buffer2 != avcodec_default_get_buffer2)) {
        while (p->state != STATE_SETUP_FINISHED && p->state != STATE_INPUT_READY) {
            pthread_mutex_lock(&p->progress_mutex);
            while (p->state == STATE_SETTING_UP)
                pthread_cond_wait(&p->progress_cond, &p->progress_mutex);

            if (p->state == STATE_GET_BUFFER) {
                p->result = ff_get_buffer(p->avctx, p->requested_frame, p->requested_flags);
                p->state  = STATE_SETTING_UP;
                pthread_cond_signal(&p->progress_cond);
            }
            pthread_mutex_unlock(&p->progress_mutex);
        }
    }

    fctx->prev_thread = p;
    fctx->next_decoding++;

    return 0;

}

The main thread receives the data and, once the packet has been handed to a worker, signals pthread_cond_signal(&p->input_cond). The decoding thread wakes up, finishes its setup, moves its state to STATE_SETUP_FINISHED, broadcasts p->progress_cond and starts decoding. If the client's get_buffer() is not thread safe, the worker signals p->progress_cond whenever it needs a buffer; the main thread performs the get_buffer() call on its behalf and signals p->progress_cond back. When the worker has finished decoding a frame it signals p->output_cond, and the main thread returns the decoded frame to the user.
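The same hand-off can be reduced to the stand-alone pthread sketch below. It is a simplification of submit_packet() and frame_worker_thread(), not FFmpeg code; the Worker struct and its field names are invented, and the progress_cond/get_buffer round trip is left out.

#include <pthread.h>

/* Minimal producer/consumer hand-off mirroring the main thread / decode
 * thread synchronization described above.  The mutex and condition
 * variables must be initialized (pthread_mutex_init/pthread_cond_init)
 * before the worker is started. */
typedef struct Worker {
    pthread_mutex_t mutex;
    pthread_cond_t  input_cond;   /* main -> worker: a new packet is ready */
    pthread_cond_t  output_cond;  /* worker -> main: a frame is finished   */
    int             have_input;
    int             have_output;
    int             die;
} Worker;

static void *worker_thread(void *arg)   /* plays the role of frame_worker_thread() */
{
    Worker *w = arg;
    pthread_mutex_lock(&w->mutex);
    while (!w->die) {
        while (!w->have_input && !w->die)
            pthread_cond_wait(&w->input_cond, &w->mutex);
        if (w->die)
            break;
        w->have_input = 0;

        pthread_mutex_unlock(&w->mutex);
        /* ... decode the submitted packet here ... */
        pthread_mutex_lock(&w->mutex);

        w->have_output = 1;
        pthread_cond_signal(&w->output_cond);
    }
    pthread_mutex_unlock(&w->mutex);
    return NULL;
}

static void submit(Worker *w)           /* plays the role of submit_packet() */
{
    pthread_mutex_lock(&w->mutex);
    w->have_input = 1;
    pthread_cond_signal(&w->input_cond);
    pthread_mutex_unlock(&w->mutex);
}

static void wait_output(Worker *w)      /* like the wait in ff_thread_decode_frame() */
{
    pthread_mutex_lock(&w->mutex);
    while (!w->have_output)
        pthread_cond_wait(&w->output_cond, &w->mutex);
    w->have_output = 0;
    pthread_mutex_unlock(&w->mutex);
}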

 

Note 1: flow chart of the thread-state transitions driven by submit_packet() (figure not reproduced here).

 

 
