How do I write a stereo WAV file in Python?

The following code writes a simple 440 Hz sine wave to a mono WAV file. How can this code be changed to produce a stereo WAV file, with the second channel at a different frequency?

import math
import wave
import struct

freq = 440.0
data_size = 40000
fname = "WaveTest.wav"
frate = 11025.0  # framerate as a float
amp = 64000.0     # multiplier for amplitude

sine_list_x = []
for x in range(data_size):
    sine_list_x.append(math.sin(2*math.pi*freq*(x/frate)))

wav_file = wave.open(fname, "w")

nchannels = 1
sampwidth = 2
framerate = int(frate)
nframes = data_size
comptype = "NONE"
compname = "not compressed"

wav_file.setparams((nchannels, sampwidth, framerate, nframes,
    comptype, compname))

for s in sine_list_x:
    # write the audio frames to file
    wav_file.writeframes(struct.pack('h', int(s*amp/2)))

wav_file.close()

Solution:

Build a parallel sine_list_y list with the other frequency/channel, set nchannels = 2, and in the output loop use `for s, t in zip(sine_list_x, sine_list_y):` as the header clause, with a body containing two writeframes calls – one for s, one for t. IOW, the corresponding frames of the two channels "alternate" in the file.
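Putting that advice together with the question's code, a minimal sketch might look like the following (the 550 Hz right-channel frequency and the file name StereoTest.wav are arbitrary choices for illustration, not from the original; any distinct frequency works):

```python
import math
import struct
import wave

fname = "StereoTest.wav"  # hypothetical output name
frate = 11025        # frame rate in Hz
data_size = 40000    # number of frames
freq_l = 440.0       # left-channel frequency
freq_r = 550.0       # right-channel frequency (assumed; any distinct value works)
amp = 32000          # peak amplitude, kept within the signed 16-bit range

sine_list_x = [math.sin(2 * math.pi * freq_l * (x / frate)) for x in range(data_size)]
sine_list_y = [math.sin(2 * math.pi * freq_r * (x / frate)) for x in range(data_size)]

with wave.open(fname, "wb") as wav_file:
    # nchannels = 2 makes the file stereo; sampwidth = 2 means 16-bit samples.
    wav_file.setparams((2, 2, frate, data_size, "NONE", "not compressed"))
    # Each frame packs the left sample then the right sample ("<hh"),
    # so the samples of the two channels alternate in the file.
    frames = b"".join(
        struct.pack("<hh", int(s * amp), int(t * amp))
        for s, t in zip(sine_list_x, sine_list_y)
    )
    wav_file.writeframes(frames)
```

Packing both 16-bit samples with one `struct.pack("<hh", ...)` call per frame is equivalent to the two writeframes calls suggested above: either way the left and right samples end up interleaved. Joining all frames and writing them once is also much faster than one writeframes call per sample.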

See e.g. this page for an exhaustive description of all possible WAV file formats, and I quote:

Multi-channel digital audio samples are stored as interlaced wave data, which simply means that the audio samples of a multi-channel (such as stereo and surround) wave file are stored by cycling through the audio samples for each channel before advancing to the next sample time. This is done so that the audio files can be played or streamed before the entire file can be read. This is handy when playing a large file from disk (that may not completely fit into memory) or streaming a file over the Internet. The values in the diagram below would be stored in a Wave file in the order they are listed in the Value column (top to bottom).

And the table below clearly shows the channel samples going left, right, left, right, ...
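That interleaving can be observed directly with the wave module. The sketch below writes a tiny 16-bit stereo file in memory (the sample values are arbitrary markers chosen so the two channels are easy to tell apart) and then unpacks the raw frames:

```python
import io
import struct
import wave

# Write three stereo frames in memory: left samples 100, 200, 300
# and right samples -100, -200, -300.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setparams((2, 2, 11025, 3, "NONE", "not compressed"))
    for i in (1, 2, 3):
        w.writeframes(struct.pack("<hh", 100 * i, -100 * i))

# Read the frames back and unpack them as six little-endian int16 values.
buf.seek(0)
with wave.open(buf, "rb") as r:
    raw = r.readframes(r.getnframes())
samples = struct.unpack("<6h", raw)
print(samples)  # (100, -100, 200, -200, 300, -300)
```

The output shows exactly the left, right, left, right ordering described above: each 4-byte frame holds one left sample followed by one right sample before the next sample time begins.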
