Managing Audio Focus

Scenario: your app retreats to the background and another app capable of playback comes to the foreground. At that point you will usually want to pause your own playback and unregister your media button receiver, handing control to the foreground app.
This is what listening for audio focus is for.
Before starting playback, request audio focus using AudioManager's requestAudioFocus method.
When you request audio focus, you specify the stream type you will use (such as STREAM_MUSIC) and how long you expect to hold focus.
From a programming point of view, your app should react both when it gains focus and when another app takes focus away.
Example: requesting audio focus
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);

// Request audio focus for playback.
int result = am.requestAudioFocus(focusChangeListener,
                                  // Use the music stream.
                                  AudioManager.STREAM_MUSIC,
                                  // Request permanent focus.
                                  AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
  mediaPlayer.start();
}
Responding to changes (and loss) of focus in the listener:
private OnAudioFocusChangeListener focusChangeListener =
  new OnAudioFocusChangeListener() {
    public void onAudioFocusChange(int focusChange) {
      AudioManager am =
        (AudioManager)getSystemService(Context.AUDIO_SERVICE);

      switch (focusChange) {
        case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK) :
          // Lower the volume while ducking.
          mediaPlayer.setVolume(0.2f, 0.2f);
          break;

        case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT) :
          pause();
          break;

        case (AudioManager.AUDIOFOCUS_LOSS) :
          stop();
          ComponentName component =
            new ComponentName(AudioPlayerActivity.this,
                              MediaControlReceiver.class);
          am.unregisterMediaButtonEventReceiver(component);
          break;

        case (AudioManager.AUDIOFOCUS_GAIN) :
          // Return the volume to normal and resume if paused.
          mediaPlayer.setVolume(1f, 1f);
          mediaPlayer.start();
          break;

        default: break;
      }
    }
  };
Abandoning audio focus:
AudioManager am =
  (AudioManager)getSystemService(Context.AUDIO_SERVICE);

am.abandonAudioFocus(focusChangeListener);
When the user unplugs their headphones, audio is suddenly routed to the speaker, so you will usually want to lower the volume or pause playback first. How do you detect this change in output?
Answer: listen for the ACTION_AUDIO_BECOMING_NOISY broadcast:
private class NoisyAudioStreamReceiver extends BroadcastReceiver {
  @Override
  public void onReceive(Context context, Intent intent) {
    if (AudioManager.ACTION_AUDIO_BECOMING_NOISY.equals
        (intent.getAction())) {
      pause();
    }
  }
}
Recording Audio

Use the AudioRecord class to record audio. Create an AudioRecord, specifying the audio source, sample rate (frequency), channel configuration, audio encoding, and buffer size.
int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                              channelConfiguration,
                                              audioEncoding);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                          frequency, channelConfiguration,
                                          audioEncoding, bufferSize);
The sample rate, audio encoding, and channel configuration all affect the size and quality of the recording.
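As a worked example of how these parameters determine size, the raw PCM data rate for the 11025 Hz, mono, 16-bit configuration used in the complete example below can be computed directly. This is plain Java with no Android dependencies; the class and method names are my own:

```java
public class PcmDataRate {
    // Raw PCM has no compression:
    // rate = samples/second * channels * bytes per sample.
    static int bytesPerSecond(int frequency, int channels, int bytesPerSample) {
        return frequency * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 11025 Hz, mono, ENCODING_PCM_16BIT (2 bytes per sample).
        int rate = bytesPerSecond(11025, 1, 2);
        System.out.println(rate);      // 22050 bytes per second
        System.out.println(rate * 60); // 1323000 bytes (~1.26 MB) per minute
    }
}
```

Doubling the sample rate, using stereo, or moving to a wider encoding scales the file size proportionally.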
For privacy reasons, Android requires the RECORD_AUDIO permission:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>
Once the AudioRecord object has been initialized, start recording asynchronously with the startRecording method, and use the read method to pull raw audio data into your buffer:
audioRecord.startRecording();
while (isRecording) {
  // read fills the buffer with the latest raw audio data.
  int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
  [ ... process the buffer ... ]
}
Once you have recorded raw audio data, what do you play it back with?
Answer: use AudioTrack to play this kind of audio.
A complete recording example:
int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file =
  new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Create the new file.
try {
  file.createNewFile();
} catch (IOException e) {
  Log.d(TAG, "IO Exception", e);
}

try {
  OutputStream os = new FileOutputStream(file);
  BufferedOutputStream bos = new BufferedOutputStream(os);
  DataOutputStream dos = new DataOutputStream(bos);

  int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                                channelConfiguration,
                                                audioEncoding);
  short[] buffer = new short[bufferSize];

  // Create a new AudioRecord object to record the audio.
  AudioRecord audioRecord =
    new AudioRecord(MediaRecorder.AudioSource.MIC,
                    frequency,
                    channelConfiguration,
                    audioEncoding, bufferSize);
  audioRecord.startRecording();

  while (isRecording) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    for (int i = 0; i < bufferReadResult; i++)
      dos.writeShort(buffer[i]);
  }

  audioRecord.stop();
  dos.close();
} catch (Throwable t) {
  Log.d(TAG, "An error occurred during recording", t);
}
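The example above writes headerless PCM, which ordinary audio tools cannot open. (Note also that DataOutputStream writes shorts big-endian, while the WAV format expects little-endian samples, so a real converter would byte-swap the payload as well.) If you want the capture playable elsewhere, you can prepend a standard 44-byte WAV header. The following is a minimal plain-Java sketch with no Android dependencies; the class and method names are my own:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class WavHeader {
    // Write a canonical 44-byte RIFF/WAVE header for raw PCM data.
    // dataLength is the size of the PCM payload in bytes.
    static void writeWavHeader(OutputStream out, int sampleRate,
                               int channels, int bitsPerSample,
                               int dataLength) throws IOException {
        int byteRate = sampleRate * channels * (bitsPerSample / 8);
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        b.putInt(36 + dataLength);                // RIFF chunk size
        b.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        b.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        b.putInt(16);                             // fmt sub-chunk size
        b.putShort((short) 1);                    // format 1 = uncompressed PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort((short) (channels * (bitsPerSample / 8))); // block align
        b.putShort((short) bitsPerSample);
        b.put("data".getBytes(StandardCharsets.US_ASCII));
        b.putInt(dataLength);
        out.write(b.array());
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // One second of 11025 Hz mono 16-bit audio is 22050 bytes.
        writeWavHeader(out, 11025, 1, 16, 22050);
        System.out.println(out.size()); // 44
    }
}
```

Write the header first, then stream the (little-endian) samples after it.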
Playing Sound with AudioTrack
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                       frequency,
                                       channelConfiguration,
                                       audioEncoding,
                                       audioLength,
                                       AudioTrack.MODE_STREAM);
Note that these parameters must match the ones you used when recording.
audioTrack.play();
audioTrack.write(audio, 0, audioLength);
The write method queues the raw audio data onto the playback buffer.
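To obtain that audio array in the first place, read the recorded shorts back from the file. Because the recording example wrote them with DataOutputStream, reading with DataInputStream recovers the same values (both use big-endian order). A plain-Java round-trip sketch with no Android dependencies; the class and method names are my own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PcmRoundTrip {
    // Read back 16-bit samples exactly as DataOutputStream wrote them.
    static short[] readSamples(DataInputStream dis, int count) throws IOException {
        short[] audio = new short[count];
        for (int i = 0; i < count; i++) {
            audio[i] = dis.readShort();
        }
        return audio;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the recording loop writing a few samples.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(buf);
        short[] original = { 0, 1000, -1000, Short.MAX_VALUE };
        for (short s : original) dos.writeShort(s);
        dos.close();

        // Read them back for playback; audioLength is the sample count.
        DataInputStream dis =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        short[] audio = readSamples(dis, original.length);
        System.out.println(audio[3]); // 32767
    }
}
```

On Android you would wrap a FileInputStream on the recorded file instead of the in-memory stream.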
Creating a Sound Pool

A SoundPool is generally used to play short sound effects, and supports playing multiple streams at the same time.
Straight to an example:
int maxStreams = 10;
SoundPool sp = new SoundPool(maxStreams, AudioManager.STREAM_MUSIC, 0);

int track1 = sp.load(this, R.raw.track1, 0);
int track2 = sp.load(this, R.raw.track2, 0);
int track3 = sp.load(this, R.raw.track3, 0);

track1Button.setOnClickListener(new OnClickListener() {
  public void onClick(View v) {
    sp.play(track1, 1, 1, 0, -1, 1);
  }
});

track2Button.setOnClickListener(new OnClickListener() {
  public void onClick(View v) {
    sp.play(track2, 1, 1, 0, 0, 1);
  }
});

track3Button.setOnClickListener(new OnClickListener() {
  public void onClick(View v) {
    sp.play(track3, 1, 1, 0, 0, 0.5f);
  }
});

stopButton.setOnClickListener(new OnClickListener() {
  public void onClick(View v) {
    sp.stop(track1);
    sp.stop(track2);
    sp.stop(track3);
  }
});

chipmunkButton.setOnClickListener(new OnClickListener() {
  public void onClick(View v) {
    sp.setRate(track1, 2f);
  }
});
Android 2.2 (API level 8) introduced two very convenient methods, autoPause and autoResume, which pause and resume all actively playing streams, respectively.
When you no longer need the sounds, call soundPool.release() to free the resources.
Taking Pictures with the Camera

Using an Intent to take a picture:
startActivityForResult(
  new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE);
In the corresponding onActivityResult, the returned photo is, by default, a thumbnail.
If you want the full-size image, you must first specify a target file to store it in, as this example shows:
// Create an output file.
File file = new File(Environment.getExternalStorageDirectory(),
                     "test.jpg");
Uri outputFileUri = Uri.fromFile(file);

// Generate the Intent.
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);

// Launch the camera app.
startActivityForResult(intent, TAKE_PICTURE);
Note: once you launch the camera this way, no thumbnail is returned, so the Intent you receive will be null.
The following onActivityResult handles both cases:
@Override
protected void onActivityResult(int requestCode,
                                int resultCode, Intent data) {
  if (requestCode == TAKE_PICTURE) {
    // Check if the result includes a thumbnail Bitmap.
    if (data != null) {
      if (data.hasExtra("data")) {
        Bitmap thumbnail = data.getParcelableExtra("data");
        imageView.setImageBitmap(thumbnail);
      }
    } else {
      // If there is no thumbnail image data, the image
      // will have been stored in the target output URI.

      // Resize the full image to fit in our image view.
      int width = imageView.getWidth();
      int height = imageView.getHeight();

      BitmapFactory.Options factoryOptions = new
        BitmapFactory.Options();

      factoryOptions.inJustDecodeBounds = true;
      BitmapFactory.decodeFile(outputFileUri.getPath(),
                               factoryOptions);

      int imageWidth = factoryOptions.outWidth;
      int imageHeight = factoryOptions.outHeight;

      // Determine how much to scale down the image.
      int scaleFactor = Math.min(imageWidth/width,
                                 imageHeight/height);

      // Decode the image file into a Bitmap sized to fill the View.
      factoryOptions.inJustDecodeBounds = false;
      factoryOptions.inSampleSize = scaleFactor;
      factoryOptions.inPurgeable = true;

      Bitmap bitmap =
        BitmapFactory.decodeFile(outputFileUri.getPath(),
                                 factoryOptions);

      imageView.setImageBitmap(bitmap);
    }
  }
}
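BitmapFactory documents that inSampleSize values are typically rounded down to the nearest power of two, so it is common to compute a power-of-two factor explicitly rather than the plain Math.min ratio used above. A plain-Java sketch of that calculation; the helper name computeInSampleSize is my own:

```java
public class SampleSize {
    // Largest power-of-two sample size that still keeps the decoded
    // image at least as large as the requested view dimensions.
    static int computeInSampleSize(int imageWidth, int imageHeight,
                                   int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while (imageWidth / (inSampleSize * 2) >= reqWidth
                && imageHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo decoded for a 400x300 view: 4 works, 8 would
        // drop below the view size.
        System.out.println(computeInSampleSize(2048, 1536, 400, 300)); // 4
    }
}
```

Assigning this result to factoryOptions.inSampleSize makes the memory saving predictable across devices.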
Controlling the Camera Directly

First, this permission is indispensable:

<uses-permission android:name="android.permission.CAMERA"/>

Obtain a Camera with:
Camera camera = Camera.open();
When you are finished with it, remember to release the resource:
camera.release();
Camera Properties

Camera.Parameters parameters = camera.getParameters();

Through this object you can query many camera properties; some parameters depend on the platform version.
You can obtain the focal length and the horizontal and vertical view angles through getFocalLength and get[Horizontal/Vertical]ViewAngle, respectively.
Android 2.3 (API level 9) introduced the getFocusDistances method, which you can use to estimate the distance between the lens and the subject; it populates a float array with the near, far, and optimal focus distances:
float[] focusDistances = new float[3];

parameters.getFocusDistances(focusDistances);

float near =
  focusDistances[Camera.Parameters.FOCUS_DISTANCE_NEAR_INDEX];
float far =
  focusDistances[Camera.Parameters.FOCUS_DISTANCE_FAR_INDEX];
float optimal =
  focusDistances[Camera.Parameters.FOCUS_DISTANCE_OPTIMAL_INDEX];
Camera Settings and Image Parameters

To change a setting, call the corresponding set* method to modify the Parameters object, then apply it:
camera.setParameters(parameters);
The individual parameters are not covered in detail here.
Using the Camera Preview

Once again, SurfaceView comes in handy.
Skeleton code:
public class CameraActivity extends Activity implements
  SurfaceHolder.Callback {

  private static final String TAG = "CameraActivity";

  private Camera camera;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    SurfaceView surface = (SurfaceView)findViewById(R.id.surfaceView);
    SurfaceHolder holder = surface.getHolder();
    holder.addCallback(this);
    holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    holder.setFixedSize(400, 300);
  }

  public void surfaceCreated(SurfaceHolder holder) {
    try {
      camera.setPreviewDisplay(holder);
      camera.startPreview();
      // TODO Draw over the preview if required.
    } catch (IOException e) {
      Log.d(TAG, "IO Exception", e);
    }
  }

  public void surfaceDestroyed(SurfaceHolder holder) {
    camera.stopPreview();
  }

  public void surfaceChanged(SurfaceHolder holder, int format,
                             int width, int height) {
  }

  @Override
  protected void onPause() {
    super.onPause();
    camera.release();
  }

  @Override
  protected void onResume() {
    super.onResume();
    camera = Camera.open();
  }
}
To receive preview frames, call the camera's setPreviewCallback method, passing in a PreviewCallback implementation that overrides the onPreviewFrame method:
camera.setPreviewCallback(new PreviewCallback() {
  public void onPreviewFrame(byte[] data, Camera camera) {
    int quality = 60;

    Size previewSize = camera.getParameters().getPreviewSize();
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
                                  previewSize.width, previewSize.height, null);
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

    image.compressToJpeg(
      new Rect(0, 0, previewSize.width, previewSize.height),
      quality, outputStream);

    // TODO Do something with the preview image.
  }
});
Android 4.0 added face-detection APIs, which are not covered here.
Taking a Picture

With all of the above configured, how do you actually take a picture?
Answer: call the camera object's takePicture method, passing in a ShutterCallback and two PictureCallback implementations (one for the RAW image data and one for the JPEG-encoded image).
Example: skeleton code that takes a picture and saves the JPEG to the SD card:
private void takePicture() {
  camera.takePicture(shutterCallback, rawCallback, jpegCallback);
}

ShutterCallback shutterCallback = new ShutterCallback() {
  public void onShutter() {
    // TODO Do something when the shutter closes.
  }
};

PictureCallback rawCallback = new PictureCallback() {
  public void onPictureTaken(byte[] data, Camera camera) {
    // TODO Do something with the image RAW data.
  }
};

PictureCallback jpegCallback = new PictureCallback() {
  public void onPictureTaken(byte[] data, Camera camera) {
    // Save the image JPEG data to the SD card.
    FileOutputStream outStream = null;
    try {
      String path = Environment.getExternalStorageDirectory() +
                    "/test.jpg";

      outStream = new FileOutputStream(path);
      outStream.write(data);
      outStream.close();
    } catch (FileNotFoundException e) {
      Log.e(TAG, "File Not Found", e);
    } catch (IOException e) {
      Log.e(TAG, "IO Exception", e);
    }
  }
};
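Building the path by string concatenation, as above, invites separator mistakes (a backslash instead of a forward slash, for instance, produces an invalid path on Android). The File(parent, child) constructor sidesteps separator handling entirely; a plain-Java sketch, with /sdcard standing in for Environment.getExternalStorageDirectory():

```java
import java.io.File;

public class PathJoin {
    public static void main(String[] args) {
        // Stand-in for Environment.getExternalStorageDirectory().
        File storageDir = new File("/sdcard");

        // File inserts the platform separator between parent and child.
        File photo = new File(storageDir, "test.jpg");
        System.out.println(photo.getPath()); // /sdcard/test.jpg on Unix-like systems
    }
}
```

On Android you would then pass photo (or photo.getPath()) to the FileOutputStream.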