FFmpeg Notes (4)

The FFmpeg Encode/Decode Pipeline

  • The basic flow of converting one file format to another:

a. libavformat.a provides the demuxer/muxer — the factory that unpacks and repacks audio/video container files; the "encoded data packets" correspond to AVPacket (audio/video packets).
b. libavcodec.a provides the decoder/encoder, which decodes/encodes the AVPackets handed down from upstream into AVFrames (a video frame, or one or more audio frames). Audio/video compression and effects both operate on AVFrames.
c. The processed frames can be muxed back into a file, or converted into a pixel buffer and handed to a renderer for display.

 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                           v
                                       _________
                                      |         |
                                      | decoded |
                                      | frames  |
                                      |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|

  • Building on the flow above, effects can be applied after the frames are decoded. This is mainly implemented by libavfilter and is typically used for video filter effects:
 _________                        ______________
|         |                      |              |
| decoded |                      | encoded data |
| frames  |\                   _ | packets      |
|_________| \                  /||______________|
             \   __________   /
  simple     _\||          | /  encoder
  filtergraph   | filtered |/
                | frames   |
                |__________|
  • Input and output are not limited to files; streams can be received and emitted as well. Run ffmpeg -protocols to list the streaming protocols that are supported. The common ones are listed below; each defines specific header fields that govern how streams are converted between formats. So-called stream pushing and pulling is nothing more than wrapping the audio/video stream according to the protocol spec and then sending or receiving it.
  file
  http
  https 
  pipe 
  rtmp
  rtmps 
  rtp 
  tcp
  tls
  udp 
  ...

FFmpeg: Local Debugging

If you already installed ffmpeg earlier, this part is quick. On a Mac you can use Xcode to debug.
The integration steps are below; if you are not familiar with Xcode, see https://www.jianshu.com/p/226c19aa6e42, which walks through the steps with screenshots.

a. Download ffmpeg and install it successfully.
b. Create a new C project and drag all of the FFmpeg source files into it. Note: do not check "create external build system project".
c. Set Header Search Path and Library Search Path, much like integrating FFmpeg into an iOS project; all headers can be added in one go.
d. Create a target of type "external build system".
e. Under the scheme's executable, click "Other" and select the ffmpeg_g debug binary you want to debug.
f. Set the launch arguments, which are passed to main() in ffmpeg.c.

Set a breakpoint in ffmpeg.c and run; the debugger prints some basic information:

ffmpeg_g was compiled with optimization - stepping may behave oddly; variables may not be available.
(lldb) p outdev_list
(const AVOutputFormat *const [2]) $0 = {
  [0] = 0x00000001012b90d0
  [1] = 0x0000000000000000
} 
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/qxq4633/ffmpegdemo/input.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2021-02-05T02:51:34.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: MacBookPro15,1
    com.apple.quicktime.software: Mac OS X 10.15.6 (19G2021)
    com.apple.quicktime.creationdate: 2021-02-05T10:50:55+0800
  Duration: 00:00:26.63, start: 0.000000, bitrate: 2028 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 766x1334 [SAR 1:1 DAR 383:667], 2015 kb/s, 59.93 fps, 60 tbr, 6k tbn, 12k tbc (default)
    Metadata:
      creation_time   : 2021-02-05T02:51:34.000000Z
      handler_name    : Core Media Video
      encoder         : H.264
At least one output file must be specified
Program ended with exit code: 1

Main steps in FFmpeg's main()

  1. Register the cleanup function that runs at program exit
  2. Register the (de)muxers
  3. Parse the command-line arguments
  4. Set up the global variables
static uint8_t *subtitle_out;
InputStream **input_streams = NULL;
int        nb_input_streams = 0;
InputFile   **input_files   = NULL;
int        nb_input_files   = 0;
OutputStream **output_streams = NULL;
int         nb_output_streams = 0;
OutputFile   **output_files   = NULL;
int         nb_output_files   = 0;
FilterGraph **filtergraphs;
int        nb_filtergraphs;
  5. Open the input file, read its metadata, demux it with the selected demuxer, and feed the resulting AVPackets into the InputStreams
  6. Each InputStream obtains its codec parameters through the associated AVFormatContext and AVFilterGraph (filter graph)
  7. Find and create the matching Codec for decoding/encoding
  8. Apply filter effects to the frames via the specified AVFilters
  9. Finally, mux everything back into the output file

FFmpeg main(): ffmpeg_cleanup

The ffmpeg_cleanup function is registered to release resources when the program exits:

It frees the input/output Files, Streams, and Filters, the FilterGraphs used to batch-process filters, and all of their associated sub-objects.

static void ffmpeg_cleanup(int ret)
{
    int i, j;

    if (do_benchmark) {
        int maxrss = getmaxrss() / 1024;
        av_log(NULL, AV_LOG_INFO, "bench: maxrss=%ikB\n", maxrss);
    }
    //1. Free everything hanging off each global `FilterGraph`. The linked objects,
    //   followed pointer by pointer, form an irregular tree; free from the
    //   leaves up to the root.
    for (i = 0; i < nb_filtergraphs; i++) {
        FilterGraph *fg = filtergraphs[i];
        avfilter_graph_free(&fg->graph);
        for (j = 0; j < fg->nb_inputs; j++) {
            //1.1 the list of `InputFilter`s attached to this FilterGraph
            InputFilter *ifilter = fg->inputs[j];
            //1.2 the InputStream attached to the InputFilter
            struct InputStream *ist = ifilter->ist;
            //1.3 drain the InputFilter's frame queue (AVFifoBuffer)
            while (av_fifo_size(ifilter->frame_queue)) {
                AVFrame *frame;
                //1.3.1 pop and free each AVFrame in the queue
                av_fifo_generic_read(ifilter->frame_queue, &frame,
                                     sizeof(frame), NULL);
                av_frame_free(&frame);
            }
            av_fifo_freep(&ifilter->frame_queue);
            //1.3.2 free the AVSubtitle queue attached to the InputStream
            if (ist->sub2video.sub_queue) {
                while (av_fifo_size(ist->sub2video.sub_queue)) {
                    AVSubtitle sub;
                    av_fifo_generic_read(ist->sub2video.sub_queue,
                                         &sub, sizeof(sub), NULL);
                    avsubtitle_free(&sub);
                }
                av_fifo_freep(&ist->sub2video.sub_queue);
            }
            //1.4 drop the reference to the hardware (GPU) frames context
            av_buffer_unref(&ifilter->hw_frames_ctx);
            //1.5 free the input filter's name
            av_freep(&ifilter->name); 
            //1.6 free the j-th InputFilter itself
            av_freep(&fg->inputs[j]);
        }
        //1.7 free the array of InputFilter pointers
        av_freep(&fg->inputs);
    
        //2. walk all `OutputFilter`s and free their fields
        for (j = 0; j < fg->nb_outputs; j++) {
            OutputFilter *ofilter = fg->outputs[j];
            ///2.1.1 out_tmp is an AVFilterInOut list, temporary storage used before stream mapping
            avfilter_inout_free(&ofilter->out_tmp);
            ///the output filter's name
            av_freep(&ofilter->name); 
            ///the encoder's supported formats
            av_freep(&ofilter->formats);
            ///the channel-layout table describing the audio channels
            av_freep(&ofilter->channel_layouts);
            ///the sample rates
            av_freep(&ofilter->sample_rates); 
            av_freep(&fg->outputs[j]);
        }
        av_freep(&fg->outputs);
        av_freep(&fg->graph_desc);
        /// free the FilterGraph itself
        av_freep(&filtergraphs[i]);
    }
    av_freep(&filtergraphs);
    ///free the subtitle output buffer (AVSubtitle)
    av_freep(&subtitle_out);

    //3. free the output files
    for (i = 0; i < nb_output_files; i++) {
        OutputFile *of = output_files[i];
        AVFormatContext *s;
        if (!of)
            continue;
        s = of->ctx;
        // if the output is a real file, close its `AVIOContext` (the pb pointer)
        if (s && s->oformat && !(s->oformat->flags & AVFMT_NOFILE))
            avio_closep(&s->pb);
        // free the `AVDictionary` of per-file options that was passed in
        avformat_free_context(s);
        av_dict_free(&of->opts);

        av_freep(&output_files[i]);
    }

    //4. free the OutputStreams
    for (i = 0; i < nb_output_streams; i++) {
        OutputStream *ost = output_streams[i];

        if (!ost)
            continue;
        //free the AVBSFContext
        av_bsf_free(&ost->bsf_ctx);
        //free the filtered AVFrame
        av_frame_free(&ost->filtered_frame);
        //last AVFrame 
        av_frame_free(&ost->last_frame);
        //AVDictionary
        av_dict_free(&ost->encoder_opts);
        //forced_keyframes
        av_freep(&ost->forced_keyframes);
        //AVExpr, an expression-evaluation helper
        av_expr_free(ost->forced_keyframes_pexpr);
        //filters
        av_freep(&ost->avfilter);
        av_freep(&ost->logfile_prefix);
        //the audio channel mapping table
        av_freep(&ost->audio_channels_map);
        ost->audio_channels_mapped = 0;
        //scaling/resampling option dictionaries
        av_dict_free(&ost->sws_dict);
        av_dict_free(&ost->swr_opts);
        
        //the codec context, which holds the codec's input parameters, properties, and metadata, e.g. codec_type and codec_id
        avcodec_free_context(&ost->enc_ctx);
        //the input AVCodecParameters associated with the encoder options
        avcodec_parameters_free(&ost->ref_par);

        if (ost->muxing_queue) {
            while (av_fifo_size(ost->muxing_queue)) {
                AVPacket pkt;
                av_fifo_generic_read(ost->muxing_queue, &pkt, sizeof(pkt), NULL);
                av_packet_unref(&pkt);
            }
            av_fifo_freep(&ost->muxing_queue);
        }

        av_freep(&output_streams[i]);
    }
#if HAVE_THREADS
    free_input_threads();
#endif  
    //free the global input file pointers
    for (i = 0; i < nb_input_files; i++) {
        avformat_close_input(&input_files[i]->ctx);
        av_freep(&input_files[i]);
    }
    //free the global input stream pointers
    for (i = 0; i < nb_input_streams; i++) {
        InputStream *ist = input_streams[i];

        av_frame_free(&ist->decoded_frame);
        av_frame_free(&ist->filter_frame);
        av_dict_free(&ist->decoder_opts);
        avsubtitle_free(&ist->prev_sub.subtitle);
        av_frame_free(&ist->sub2video.frame);
        av_freep(&ist->filters);
        av_freep(&ist->hwaccel_device);
        av_freep(&ist->dts_buffer);

        avcodec_free_context(&ist->dec_ctx);

        av_freep(&input_streams[i]);
    }

    if (vstats_file) {
        if (fclose(vstats_file))
            av_log(NULL, AV_LOG_ERROR,
                   "Error closing vstats file, loss of information possible: %s\n",
                   av_err2str(AVERROR(errno)));
    }
    av_freep(&vstats_filename);

    av_freep(&input_streams);
    av_freep(&input_files);
    av_freep(&output_streams);
    av_freep(&output_files);

    uninit_opts();
    //guarded by `HAVE_WINSOCK2_H`; disabled by default
    avformat_network_deinit();

    if (received_sigterm) {
        av_log(NULL, AV_LOG_INFO, "Exiting normally, received signal %d.\n",
               (int) received_sigterm);
    } else if (ret && atomic_load(&transcode_init_done)) {
        av_log(NULL, AV_LOG_INFO, "Conversion failed!\n");
    }
    term_exit();
    ffmpeg_exited = 1;
}

FFmpeg main(): avdevice_register_all()

avpriv_register_devices(outdev_list, indev_list); registers the output and input device (de)muxers:

static const AVOutputFormat * const outdev_list[] = {
    &ff_audiotoolbox_muxer,
    NULL };

static const AVInputFormat * const indev_list[] = {
    &ff_avfoundation_demuxer,
    &ff_lavfi_demuxer,
    NULL };

Different platforms provide different input/output device (de)muxers; the defaults can be found in <libavdevice/alldevices.c>:

/* devices */ 
extern AVInputFormat  ff_android_camera_demuxer;
extern AVOutputFormat ff_audiotoolbox_muxer;
extern AVInputFormat  ff_avfoundation_demuxer; 
extern AVInputFormat  ff_lavfi_demuxer; 
extern AVOutputFormat ff_opengl_muxer; 
...

The detailed definition of this muxer lives in libavdevice. ff_audiotoolbox_muxer is defined there, and its implementation is backed by the AudioToolbox framework:

AVOutputFormat ff_audiotoolbox_muxer = {
    .name           = "audiotoolbox",
    .long_name      = NULL_IF_CONFIG_SMALL("AudioToolbox output device"),
    .priv_data_size = sizeof(ATContext),
    .audio_codec    = AV_NE(AV_CODEC_ID_PCM_S16BE, AV_CODEC_ID_PCM_S16LE),
    .video_codec    = AV_CODEC_ID_NONE,
    .write_header   = at_write_header,
    .write_packet   = at_write_packet,
    .write_trailer  = at_write_trailer,
    .flags          = AVFMT_NOFILE,
    .priv_class     = &at_class,
};

  • The input/output muxer lists are chained together into linked lists:
static void av_format_init_next(void)
{
    AVOutputFormat *prevout = NULL, *out;
    AVInputFormat *previn = NULL, *in;
  
    ff_mutex_lock(&avpriv_register_devices_mutex);
    //muxer_list is a global static array holding all of ffmpeg's default muxers
    for (int i = 0; (out = (AVOutputFormat*)muxer_list[i]); i++) {
        if (prevout)
            prevout->next = out;
        prevout = out;
    }
    //the output device list
    if (outdev_list) {
        for (int i = 0; (out = (AVOutputFormat*)outdev_list[i]); i++) {
            if (prevout)
                prevout->next = out;
            prevout = out;
        }
    }
    //same again for the demuxers
    for (int i = 0; (in = (AVInputFormat*)demuxer_list[i]); i++) {
        if (previn)
            previn->next = in;
        previn = in;
    }
    //the input device list (here the input comes from running directly under Xcode)
    if (indev_list) {
        for (int i = 0; (in = (AVInputFormat*)indev_list[i]); i++) {
            if (previn)
                previn->next = in;
            previn = in;
        }
    }

    ff_mutex_unlock(&avpriv_register_devices_mutex);
}
  • AVInputFormat and AVOutputFormat are thus linked together via these lists

FFmpeg main(): show_banner()

  • Prints copyright, version, and build-environment information depending on the arguments passed:
void show_banner(int argc, char **argv, const OptionDef *options)
{
    int idx = locate_option(argc, argv, options, "version");
    if (hide_banner || idx)
        return;

    print_program_info (INDENT|SHOW_COPYRIGHT, AV_LOG_INFO);
    print_all_libs_info(INDENT|SHOW_CONFIG,  AV_LOG_INFO);
    print_all_libs_info(INDENT|SHOW_VERSION, AV_LOG_INFO);
}

FFmpeg main(): ffmpeg_parse_options()

  • Splits the incoming arguments into groups. The basic ffmpeg command usage mentioned earlier maps exactly onto this group structure:
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
  • OptionGroup is defined as:
typedef struct OptionGroup {
    const OptionGroupDef *group_def;
    const char *arg;

    Option *opts; //global options
    int  nb_opts;

    AVDictionary *codec_opts;       //codec options
    AVDictionary *format_opts;      //demuxing/muxing options
    AVDictionary *resample_opts;    //resampling options
    AVDictionary *sws_dict;         //options applied in swscale
    AVDictionary *swr_opts;         //swresample options
} OptionGroup;
int ffmpeg_parse_options(int argc, char **argv)
{

    /*
    typedef struct OptionParseContext {
        OptionGroup global_opts;
        OptionGroupList *groups;
        int           nb_groups; 
        OptionGroup cur_group;
    } OptionParseContext;
    */
    //split the incoming arguments into groups
    OptionParseContext octx;
    uint8_t error[128];
    int ret;

    memset(&octx, 0, sizeof(octx));

    //group the arguments
    ret = split_commandline(&octx, argc, argv, options, groups,
                            FF_ARRAY_ELEMS(groups));
    if (ret < 0) {
        av_log(NULL, AV_LOG_FATAL, "Error splitting the argument list: ");
        goto fail;
    }

    //apply the global options
    ret = parse_optgroup(NULL, &octx.global_opts);
    if (ret < 0) {
        av_log(NULL, AV_LOG_FATAL, "Error parsing global options: ");
        goto fail;
    }

    /* configure terminal and setup signal handlers */
    term_init();

    /* open input files */
    ret = open_files(&octx.groups[GROUP_INFILE], "input", open_input_file);
    if (ret < 0) {
        av_log(NULL, AV_LOG_FATAL, "Error opening input files: ");
        goto fail;
    }

    //create the filtergraphs (filter effects)
    ret = init_complex_filters();
    if (ret < 0) {
        av_log(NULL, AV_LOG_FATAL, "Error initializing complex filters.\n");
        goto fail;
    }

    /* open output files */
    ret = open_files(&octx.groups[GROUP_OUTFILE], "output", open_output_file);
    if (ret < 0) {
        av_log(NULL, AV_LOG_FATAL, "Error opening output files: ");
        goto fail;
    }

    check_filter_outputs();

fail:
    uninit_parse_context(&octx);
    if (ret < 0) {
        av_strerror(ret, error, sizeof(error));
        av_log(NULL, AV_LOG_FATAL, "%s\n", error);
    }
    return ret;
}

FFmpeg main(): transcode()

Once the arguments are parsed and the input/output files are set up, this function runs to process the input files:

static int transcode(void)
{  
    //1. local variables for input/output, filled in from ffmpeg's globals
    int ret, i;
    AVFormatContext *os;
    OutputStream *ost;
    InputStream *ist;
    int64_t timer_start;
    int64_t total_packets_written = 0;
    
    //2. create the filters, initialize input/output streams and encoders, write the file header, etc.; see below
    ret = transcode_init();
    if (ret < 0)
        goto fail;

    //3. handle interactive terminal commands, e.g. yes/no prompts
    while (!received_sigterm) 
    ...
     
    //4. feed the demuxed data into each `InputStream` and encode/decode every AVFrame according to the configured options; see below
    for (i = 0; i < nb_input_streams; i++) {
        ist = input_streams[i];
        if (!input_files[ist->file_index]->eof_reached) {
            process_input_packet(ist, NULL, 0);
        }
    }

    flush_encoders();

    term_exit();

    /* write the trailer if needed and close file */
    for (i = 0; i < nb_output_files; i++) {
        os = output_files[i]->ctx;
        if (!output_files[i]->header_written) {
            av_log(NULL, AV_LOG_ERROR,
                   "Nothing was written into output file %d (%s), because "
                   "at least one of its streams received no packets.\n",
                   i, os->url);
            continue;
        }
        if ((ret = av_write_trailer(os)) < 0) {
            av_log(NULL, AV_LOG_ERROR, "Error writing trailer of %s: %s\n", os->url, av_err2str(ret));
            if (exit_on_error)
                exit_program(1);
        }
    }

    /* dump report by using the first video and audio streams */
    print_report(1, timer_start, av_gettime_relative());

    /* close each encoder */
    for (i = 0; i < nb_output_streams; i++) {
        ost = output_streams[i];
        if (ost->encoding_needed) {
            av_freep(&ost->enc_ctx->stats_in);
        }
        total_packets_written += ost->packets_written;
        if (!ost->packets_written && (abort_on_flags & ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM)) {
            av_log(NULL, AV_LOG_FATAL, "Empty output on stream %d.\n", i);
            exit_program(1);
        }
    }

    if (!total_packets_written && (abort_on_flags & ABORT_ON_FLAG_EMPTY_OUTPUT)) {
        av_log(NULL, AV_LOG_FATAL, "Empty output\n");
        exit_program(1);
    }

    for (i = 0; i < nb_input_streams; i++) {
        ist = input_streams[i];
        if (ist->decoding_needed) {
            avcodec_close(ist->dec_ctx);
            if (ist->hwaccel_uninit)
                ist->hwaccel_uninit(ist->dec_ctx);
        }
    }

    hw_device_free_all();

    /* finished ! */
    ret = 0;

 fail:
 ...
    if (output_streams) {
        for (i = 0; i < nb_output_streams; i++) {
            ost = output_streams[i];
            if (ost) {
                if (ost->logfile) {
                    if (fclose(ost->logfile))
                        av_log(NULL, AV_LOG_ERROR,
                               "Error closing logfile, loss of information possible: %s\n",
                               av_err2str(AVERROR(errno)));
                    ost->logfile = NULL;
                }
                av_freep(&ost->forced_kf_pts);
                av_freep(&ost->apad);
                av_freep(&ost->disposition);
                av_dict_free(&ost->encoder_opts);
                av_dict_free(&ost->sws_dict);
                av_dict_free(&ost->swr_opts);
                av_dict_free(&ost->resample_opts);
            }
        }
    }
    return ret;
}

transcode_init()


static int transcode_init(void)
{   
    //1. initialize the transcoding locals
    int ret = 0, i, j, k;
    //the output container context
    AVFormatContext *oc; 
    //used to map files to streams
    OutputStream *ost;
    InputStream *ist;
    char error[1024] = {0};
    
    //2. wire up the filtergraphs. main() registers the default filters; a bare filtergraph is created only when the output file is opened (ffmpeg_parse_options -> open_files -> open_output_file -> init_simple_filtergraph), and it stores the filter's context parameters. Here there is just that one filtergraph.
    for (i = 0; i < nb_filtergraphs; i++) {
        FilterGraph *fg = filtergraphs[i];
        for (j = 0; j < fg->nb_outputs; j++) {
            OutputFilter *ofilter = fg->outputs[j];
            if (!ofilter->ost || ofilter->ost->source_index >= 0)
                continue;
            if (fg->nb_inputs != 1)
                continue;
            for (k = nb_input_streams-1; k >= 0 ; k--)
                if (fg->inputs[0]->ist == input_streams[k])
                    break;
            ofilter->ost->source_index = k;
        }
    }

    //3. initialize frame-rate emulation
    for (i = 0; i < nb_input_files; i++) {
        InputFile *ifile = input_files[i];
        if (ifile->rate_emu)
            for (j = 0; j < ifile->nb_streams; j++)
                input_streams[j + ifile->ist_index]->start = av_gettime_relative();
    }

    //4. initialize the input streams
    for (i = 0; i < nb_input_streams; i++)
        if ((ret = init_input_stream(i, error, sizeof(error))) < 0) {
            for (i = 0; i < nb_output_streams; i++) {
                ost = output_streams[i];
                avcodec_close(ost->enc_ctx);
            }
            goto dump_format;
        }
    //5. open each output stream's encoder
    /* open each encoder */
    for (i = 0; i < nb_output_streams; i++) {
        // skip streams fed from filtergraphs until we have a frame for them
        if (output_streams[i]->filter)
            continue;

        ret = init_output_stream(output_streams[i], error, sizeof(error));
        if (ret < 0)
            goto dump_format;
    }

    /* discard unused programs */
    for (i = 0; i < nb_input_files; i++) {
        InputFile *ifile = input_files[i];
        for (j = 0; j < ifile->ctx->nb_programs; j++) {
            AVProgram *p = ifile->ctx->programs[j];
            int discard  = AVDISCARD_ALL;

            for (k = 0; k < p->nb_stream_indexes; k++)
                if (!input_streams[ifile->ist_index + p->stream_index[k]]->discard) {
                    discard = AVDISCARD_DEFAULT;
                    break;
                }
            p->discard = discard;
        }
    }
    //6. write the header to the output file:
    //   - when a file has no streams, check_init_output_file is called directly
    //   - which calls avformat_write_header(of->ctx, &of->opts);  (of: OutputFile)
    //   - which invokes int (*write_header)(struct AVFormatContext *);
    //   - i.e. the write_header of the muxer selected on the current platform,
    //     which writes the header into the file, e.g.:
    //   AVOutputFormat ff_ilbc_muxer = {
    //       .name         = "ilbc",
    //       .long_name    = NULL_IF_CONFIG_SMALL("iLBC storage"),
    //       .mime_type    = "audio/iLBC",
    //       .extensions   = "lbc",
    //       .audio_codec  = AV_CODEC_ID_ILBC,
    //       .write_header = ilbc_write_header,
    //       .write_packet = ff_raw_write_packet,
    //       .flags        = AVFMT_NOTIMESTAMPS,
    //   };
    /* write headers for files with no streams */
    for (i = 0; i < nb_output_files; i++) {
        oc = output_files[i]->ctx;
        if (oc->oformat->flags & AVFMT_NOSTREAMS && oc->nb_streams == 0) {
            ret = check_init_output_file(output_files[i], i);
            if (ret < 0)
                goto dump_format;
        }
    }

    //7. print the filter info associated with each stream
    av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
    for (i = 0; i < nb_input_streams; i++) {
        ist = input_streams[i];

        for (j = 0; j < ist->nb_filters; j++) {
            //log non-simple (complex) filtergraphs attached to this stream
            if (!filtergraph_is_simple(ist->filters[j]->graph)) {
                av_log(NULL, AV_LOG_INFO, "  Stream #%d:%d (%s) -> %s",
                       ist->file_index, ist->st->index, ist->dec ? ist->dec->name : "?",
                       ist->filters[j]->name);
                if (nb_filtergraphs > 1)
                    av_log(NULL, AV_LOG_INFO, " (graph %d)", ist->filters[j]->graph->index);
                av_log(NULL, AV_LOG_INFO, "\n");
            }
        }
    }
    //8. for each OutputStream, print the stream mapping
    for (i = 0; i < nb_output_streams; i++) {
        ost = output_streams[i];

        if (ost->attachment_filename) {
            /* an attached file */
            av_log(NULL, AV_LOG_INFO, "  File %s -> Stream #%d:%d\n",
                   ost->attachment_filename, ost->file_index, ost->index);
            continue;
        }

        if (ost->filter && !filtergraph_is_simple(ost->filter->graph)) {
            /* output from a complex graph */
            av_log(NULL, AV_LOG_INFO, "  %s", ost->filter->name);
            if (nb_filtergraphs > 1)
                av_log(NULL, AV_LOG_INFO, " (graph %d)", ost->filter->graph->index);

            av_log(NULL, AV_LOG_INFO, " -> Stream #%d:%d (%s)\n", ost->file_index,
                   ost->index, ost->enc ? ost->enc->name : "?");
            continue;
        }

        av_log(NULL, AV_LOG_INFO, "  Stream #%d:%d -> #%d:%d",
               input_streams[ost->source_index]->file_index,
               input_streams[ost->source_index]->st->index,
               ost->file_index,
               ost->index);
        if (ost->sync_ist != input_streams[ost->source_index])
            av_log(NULL, AV_LOG_INFO, " [sync #%d:%d]",
                   ost->sync_ist->file_index,
                   ost->sync_ist->st->index);
        if (ost->stream_copy)
            av_log(NULL, AV_LOG_INFO, " (copy)");
        else {
            const AVCodec *in_codec    = input_streams[ost->source_index]->dec;
            const AVCodec *out_codec   = ost->enc;
            const char *decoder_name   = "?";
            const char *in_codec_name  = "?";
            const char *encoder_name   = "?";
            const char *out_codec_name = "?";
            const AVCodecDescriptor *desc;

            if (in_codec) {
                decoder_name  = in_codec->name;
                desc = avcodec_descriptor_get(in_codec->id);
                if (desc)
                    in_codec_name = desc->name;
                if (!strcmp(decoder_name, in_codec_name))
                    decoder_name = "native";
            }

            if (out_codec) {
                encoder_name   = out_codec->name;
                desc = avcodec_descriptor_get(out_codec->id);
                if (desc)
                    out_codec_name = desc->name;
                if (!strcmp(encoder_name, out_codec_name))
                    encoder_name = "native";
            }

            av_log(NULL, AV_LOG_INFO, " (%s (%s) -> %s (%s))",
                   in_codec_name, decoder_name,
                   out_codec_name, encoder_name);
        }
        av_log(NULL, AV_LOG_INFO, "\n");
    }

    if (ret) {
        av_log(NULL, AV_LOG_ERROR, "%s\n", error);
        return ret;
    }

    atomic_store(&transcode_init_done, 1);

    return 0;
}

process_input_packet

Note in particular: after the demuxed AVPacket has been processed, the decoder's buffered data must be flushed.

static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
{
    int ret = 0, i;
    int repeating = 0;
    int eof_reached = 0;
    ...
    //1. decode
    while (ist->decoding_needed) {  
        ....
        switch (ist->dec_ctx->codec_type) {
        case AVMEDIA_TYPE_AUDIO:
            ret = decode_audio    (ist, repeating ? NULL : &avpkt, &got_output,
                                   &decode_failed);
            break;
        case AVMEDIA_TYPE_VIDEO:
            ret = decode_video    (ist, repeating ? NULL : &avpkt, &got_output, &duration_pts, !pkt,
                                   &decode_failed);
            if (!repeating || !pkt || got_output) {
                if (pkt && pkt->duration) {
                    duration_dts = av_rescale_q(pkt->duration, ist->st->time_base, AV_TIME_BASE_Q);
                } else if(ist->dec_ctx->framerate.num != 0 && ist->dec_ctx->framerate.den != 0) {
                    int ticks= av_stream_get_parser(ist->st) ? av_stream_get_parser(ist->st)->repeat_pict+1 : ist->dec_ctx->ticks_per_frame;
                    duration_dts = ((int64_t)AV_TIME_BASE *
                                    ist->dec_ctx->framerate.den * ticks) /
                                    ist->dec_ctx->framerate.num / ist->dec_ctx->ticks_per_frame;
                }
                if(ist->dts != AV_NOPTS_VALUE && duration_dts) {
                    ist->next_dts += duration_dts;
                }else
                    ist->next_dts = AV_NOPTS_VALUE;
            }
            if (got_output) {
                if (duration_pts > 0) {
                    ist->next_pts += av_rescale_q(duration_pts, ist->st->time_base, AV_TIME_BASE_Q);
                } else {
                    ist->next_pts += duration_dts;
                }
            }
            break;
        case AVMEDIA_TYPE_SUBTITLE:
            if (repeating)
                break;
            ret = transcode_subtitles(ist, &avpkt, &got_output, &decode_failed);
            if (!pkt && ret >= 0)
                ret = AVERROR_EOF;
            break;
        default:
            return -1;
        }
    ...
    //2. straight stream copy (no re-encoding)
    for (i = 0; i < nb_output_streams; i++) {
        OutputStream *ost = output_streams[i];

        if (!check_output_constraints(ist, ost) || ost->encoding_needed)
            continue;

        do_streamcopy(ist, ost, pkt);
    }

    return !eof_reached;
}

FFmpeg Examples: Local Debugging

The example files live under doc/examples in the ffmpeg repository:

avio_reading.c         //read media file info through a custom AVIO context
decode_audio.c         //audio decoding
decode_video.c         //video decoding
demuxing_decoding.c    //demuxing and decoding audio/video
encode_audio.c         //audio encoding
encode_video.c         //video encoding
ffhash.c               //hash checksums
filter_audio.c         //audio filtering
filtering_audio.c      //audio filtering
filtering_video.c      //API example for decoding and filtering
http_multiclient.c     //serve a file over HTTP to multiple clients without decoding or demuxing it
hw_decode.c            //hardware-accelerated decoding
metadata.c             //shows how to use the metadata API in an application
muxing.c               //output a media file in any supported libavformat format, using the default codecs
qsvdec.c               //QSV-accelerated H.264 decoding into frames in GPU video surfaces
remuxing.c             //demuxing and remuxing
resampling_audio.c     //audio resampling
scaling_video.c        //video scaling
transcode_aac.c        //convert an input audio file to AAC in an MP4 container with FFmpeg
transcoding.c          //demux, decode, filter, encode, and mux
vaapi_encode.c         //Video Acceleration API (VAAPI) encoding example
vaapi_transcode.c      //Video Acceleration API (VAAPI) transcoding example

Each example's main() function, together with its assertions on the incoming args, explains how to run it.
Following doc/examples/README, install pkg-config (used to locate the headers), then run make in the root of the FFmpeg source tree; the examples are built automatically:

... examples % tree -L 1 
...
├── avio_list_dir
├── avio_list_dir.c
├── avio_list_dir.d
├── avio_list_dir.o
├── avio_list_dir.g

The listing above shows the generated files; you can also modify the av_log output. Unlike avio_list_dir, the .g file keeps debug symbols and can be used for local breakpoint debugging of the source.

... examples % ./avio_list_dir video 
TYPE              SIZE                           NAME   UID(GID) UGO         MODIFIED         ACCESSED   STATUS_CHANGED
<FILE>         6752987                      input.mov    501(20) 644 1615644790000000 1615830067000000 1615823569000000

Following the same debugging steps as for ffmpeg_g, it is easy to explore the examples.

Summary

The FFmpeg sources are full of abbreviations, many of them uncommented; you need the surrounding code plus some audio/video codec knowledge to make sense of them.

References

ffmpeg.org
ffmpeg-encoding-course

Previous: Day 4 exercise


Next: FFmpeg structs (14): AVOption and AVClass, part 1