This post shows how to use Android's built-in FaceDetectionListener to detect faces during camera preview and draw a rectangle around each detected face. The code is based on PlayCameraV1.0.0, with changes to the Camera open and preview flow: previously these ran in a separate thread, but this time they are moved into the SurfaceView lifecycle callbacks.
First, a confession: I released the static-image face detection demo last year and promised a real-time preview detection demo within a week, and only now have I gotten to it. Time slips away. Two problems came up while building the demo. The first is that the detected face Rect is expressed in a transformed coordinate system tied to the preview: its center is (0, 0), the top-left corner is (-1000, -1000) and the bottom-right corner is (1000, 1000). No matter how large the preview SurfaceView is, the detected Rect always lives in that coordinate system, whereas Android views use a coordinate system with the top-left corner at (0, 0), x running horizontally and y vertically, so the Rect has to be transformed. The second difficulty is that face detection can only be started after the camera has started preview; once a picture is taken or preview is stopped, it has to be started again, and a delay is needed before restarting, otherwise it simply has no effect.
Also, to be clear, there are two approaches to detecting and drawing faces in real time on the preview (using Google's built-in algorithm). The first is to take the YUV data from onPreviewFrame in a PreviewCallback, convert it to RGB and then to a Bitmap, and reuse the static-image pipeline, i.e. the FaceDetector class. The second is to implement the FaceDetectionListener interface directly, so the detected Face[] faces arrive in onFaceDetection(); all you have to manage is when to start and when to stop, and it is all standard Android API. The second approach is clearly the better choice, and it is what the stock camera app in the AOSP source has used since Android 4.0. A rough sketch of the first approach is included below for comparison; after that, the source code.
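For comparison only, here is a minimal sketch of approach one. It assumes the preview format is NV21 (the default) and uses android.graphics.YuvImage plus android.media.FaceDetector; none of this is part of the PlayCamera source, and the per-frame JPEG round trip makes it much slower than the listener approach.

// Approach one, sketch only: run the static-image FaceDetector on each preview frame.
// FaceDetector requires an RGB_565 bitmap with an even width.
private int detectFacesOnFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    // NV21 -> JPEG -> Bitmap: the simplest conversion path, though not the fastest
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
    byte[] jpeg = out.toByteArray();
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.RGB_565;
    Bitmap bmp = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length, opts);
    FaceDetector detector = new FaceDetector(bmp.getWidth(), bmp.getHeight(), 5);
    FaceDetector.Face[] found = new FaceDetector.Face[5];
    // Returns the number of faces found; positions are in bitmap coordinates,
    // so they still need to be mapped to the view before drawing
    return detector.findFaces(bmp, found);
}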
1. GoogleFaceDetect.java
Since the next post will cover face detection with OpenCV via JNI, I created a new package, org.yanzi.mode, to hold everything image-related. The new file GoogleFaceDetect.java implements FaceDetectionListener; its constructor takes a Handler, which is used to send the detected face data to the Activity, and the Activity in turn refreshes the UI.
package org.yanzi.mode;

import org.yanzi.util.EventUtil;

import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.Face;
import android.hardware.Camera.FaceDetectionListener;
import android.os.Handler;
import android.os.Message;
import android.util.Log;

public class GoogleFaceDetect implements FaceDetectionListener {

    private static final String TAG = "YanZi";
    private Context mContext;
    private Handler mHandler;

    public GoogleFaceDetect(Context c, Handler handler) {
        mContext = c;
        mHandler = handler;
    }

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        Log.i(TAG, "onFaceDetection...");
        if (faces != null) {
            // Forward the detected faces to the Activity, which refreshes the UI
            Message m = mHandler.obtainMessage();
            m.what = EventUtil.UPDATE_FACE_RECT;
            m.obj = faces;
            m.sendToTarget();
        }
    }

/*  private Rect getPropUIFaceRect(Rect r) {
        Log.i(TAG, "Face rect = " + r.flattenToString());
        Matrix m = new Matrix();
        boolean mirror = false;
        m.setScale(mirror ? -1 : 1, 1);
        Point p = DisplayUtil.getScreenMetrics(mContext);
        int uiWidth = p.x;
        int uiHeight = p.y;
        m.postScale(uiWidth / 2000f, uiHeight / 2000f);
        int leftNew = (r.left + 1000) * uiWidth / 2000;
        int topNew = (r.top + 1000) * uiHeight / 2000;
        int rightNew = (r.right + 1000) * uiWidth / 2000;
        int bottomNew = (r.bottom + 1000) * uiHeight / 2000;
        return new Rect(leftNew, topNew, rightNew, bottomNew);
    }*/
}
The commented-out part above is my first attempt at writing the coordinate transform myself; after some effort the transformed coordinates still looked wrong, and the problem was only solved by referring to the camera app source in Android 4.0. That transform now lives in FaceView.
2. FaceView.java
This class extends ImageView. It takes the Rect out of the Face[] data, transforms it, and refreshes it onto the UI.
package org.yanzi.ui;

import org.yanzi.camera.CameraInterface;
import org.yanzi.playcamera.R;
import org.yanzi.util.Util;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.graphics.RectF;
import android.graphics.drawable.Drawable;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.Face;
import android.util.AttributeSet;
import android.widget.ImageView;

public class FaceView extends ImageView {

    private static final String TAG = "YanZi";
    private Context mContext;
    private Paint mLinePaint;
    private Face[] mFaces;
    private Matrix mMatrix = new Matrix();
    private RectF mRect = new RectF();
    private Drawable mFaceIndicator = null;

    public FaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        initPaint();
        mContext = context;
        mFaceIndicator = getResources().getDrawable(R.drawable.ic_face_find_2);
    }

    public void setFaces(Face[] faces) {
        this.mFaces = faces;
        invalidate();
    }

    public void clearFaces() {
        mFaces = null;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mFaces == null || mFaces.length < 1) {
            return;
        }
        boolean isMirror = false;
        int id = CameraInterface.getInstance().getCameraId();
        if (id == CameraInfo.CAMERA_FACING_BACK) {
            isMirror = false; // back camera needs no mirroring
        } else if (id == CameraInfo.CAMERA_FACING_FRONT) {
            isMirror = true;  // front camera must be mirrored
        }
        Util.prepareMatrix(mMatrix, isMirror, 90, getWidth(), getHeight());
        canvas.save();
        mMatrix.postRotate(0); // Matrix.postRotate() is clockwise by default
        canvas.rotate(-0);     // Canvas.rotate() is counter-clockwise by default
        for (int i = 0; i < mFaces.length; i++) {
            mRect.set(mFaces[i].rect);
            mMatrix.mapRect(mRect);
            mFaceIndicator.setBounds(Math.round(mRect.left), Math.round(mRect.top),
                    Math.round(mRect.right), Math.round(mRect.bottom));
            mFaceIndicator.draw(canvas);
            // canvas.drawRect(mRect, mLinePaint);
        }
        canvas.restore();
        super.onDraw(canvas);
    }

    private void initPaint() {
        mLinePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // int color = Color.rgb(0, 150, 255);
        int color = Color.rgb(98, 212, 68);
        // mLinePaint.setColor(Color.RED);
        mLinePaint.setColor(color);
        mLinePaint.setStyle(Style.STROKE);
        mLinePaint.setStrokeWidth(5f);
        mLinePaint.setAlpha(180);
    }
}
There are two things to note:
1. The Rect transform. Util.prepareMatrix(mMatrix, isMirror, 90, getWidth(), getHeight()) performs the transform, resolving the mismatch between the face-detection coordinate system and the drawing coordinate system. The third argument is 90 because both the front and back cameras are configured with mCamera.setDisplayOrientation(90) (a sketch of where that call sits is shown right after this note).
The two rotations that follow, on the Matrix and on the Canvas, are both passed 0 here, so this demo only yields correct face coordinates when the phone is held at one of the four standard angles 0, 90, 180 or 270 degrees. For other cases, the angle obtained from an OrientationEventListener has to be passed in; to keep things simple I left it like this. See my earlier post for how to use OrientationEventListener, and a follow-up demo will cover it; a rough sketch of feeding that angle in follows the prepareMatrix() listing below.
Finally, mMatrix.mapRect(mRect) maps mRect into the face Rect in UI coordinates.
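For context, here is a minimal sketch of where that 90 comes from. CameraInterface.doStartPreview() is not reproduced in this post, so the method body below is an assumption; only the signature matches how it is called elsewhere in the demo.

// Assumed shape of CameraInterface.doStartPreview(): both cameras are given a
// 90-degree display orientation, which is why prepareMatrix() is called with 90.
public void doStartPreview(SurfaceHolder holder, float previewRate) {
    // ... parameter setup omitted ...
    mCamera.setDisplayOrientation(90);
    try {
        mCamera.setPreviewDisplay(holder);
    } catch (IOException e) {
        e.printStackTrace();
    }
    mCamera.startPreview();
}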
Util.prepareMatrix() looks like this:
package org.yanzi.util;

import android.graphics.Matrix;

public class Util {
    public static void prepareMatrix(Matrix matrix, boolean mirror, int displayOrientation,
            int viewWidth, int viewHeight) {
        // Need mirror for front camera.
        matrix.setScale(mirror ? -1 : 1, 1);
        // This is the value for android.hardware.Camera.setDisplayOrientation.
        matrix.postRotate(displayOrientation);
        // Camera driver coordinates range from (-1000, -1000) to (1000, 1000).
        // UI coordinates range from (0, 0) to (width, height).
        matrix.postScale(viewWidth / 2000f, viewHeight / 2000f);
        matrix.postTranslate(viewWidth / 2f, viewHeight / 2f);
    }
}
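And here is a rough sketch of how the OrientationEventListener angle could be fed into this transform. The field name mOrientation and the rounding to 90-degree steps are assumptions for illustration, not part of the current demo.

// Sketch only: keep the latest rounded device orientation in a field and let
// FaceView read it instead of the hard-coded 0.
private int mOrientation = 0;

private void startOrientationListener(Context context) {
    OrientationEventListener listener = new OrientationEventListener(context) {
        @Override
        public void onOrientationChanged(int orientation) {
            if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return;
            // Round to the nearest multiple of 90 so the drawn rect stays axis-aligned
            mOrientation = ((orientation + 45) / 90 * 90) % 360;
        }
    };
    if (listener.canDetectOrientation()) {
        listener.enable();
    }
}

// Then in FaceView.onDraw(), instead of the two zero rotations:
// mMatrix.postRotate(mOrientation);
// canvas.rotate(-mOrientation);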
2. How to draw the face rect once it is in UI coordinates. Previously I always drew directly with a Paint, but a Drawable can also be drawn via Drawable.draw(canvas). The advantage of the latter is that it draws an image, while a Paint is more convenient for basic shapes such as a Rect or a Circle. The code above contains both variants for reference.
3. When to open the Camera, and when to start preview?
This time both steps are moved into the SurfaceView lifecycle callbacks, because running them in a separate thread still caused problems. On some phones the SurfaceView is created slowly and the SurfaceHolder is not ready yet when the Camera already reaches startPreview, which results in a black screen.
@Override
public void surfaceCreated(SurfaceHolder holder) {
    Log.i(TAG, "surfaceCreated...");
    CameraInterface.getInstance().doOpenCamera(null, CameraInfo.CAMERA_FACING_BACK);
}

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    Log.i(TAG, "surfaceChanged...");
    CameraInterface.getInstance().doStartPreview(mSurfaceHolder, 1.333f);
}
4. When to register and start face detection?
Face detection can only be started after the camera has finished startPreview. For now this demo simply starts face detection 1.5 s after onCreate; by then the camera has basically started previewing. The cleaner plan is to pass the Handler into the SurfaceView and have it notify the Activity once preview has started, as sketched below.
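A minimal sketch of that planned notification, assuming the SurfaceView holds a reference to the Activity's mMainHandler (this is not in the current demo):

// Sketch only: mMainHandler is assumed to be passed in from CameraActivity.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    CameraInterface.getInstance().doStartPreview(holder, 1.333f);
    if (mMainHandler != null) {
        // Tell the Activity that preview is running, so it can start face detection
        mMainHandler.sendEmptyMessage(EventUtil.CAMERA_HAS_STARTED_PREVIEW);
    }
}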
The custom MainHandler:
private class MainHandler extends Handler {

    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
        case EventUtil.UPDATE_FACE_RECT:
            Face[] faces = (Face[]) msg.obj;
            faceView.setFaces(faces);
            break;
        case EventUtil.CAMERA_HAS_STARTED_PREVIEW:
            startGoogleFaceDetect();
            break;
        }
        super.handleMessage(msg);
    }
}
In onCreate:
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    setContentView(R.layout.activity_camera);
    initUI();
    initViewParams();
    mMainHandler = new MainHandler();
    googleFaceDetect = new GoogleFaceDetect(getApplicationContext(), mMainHandler);

    shutterBtn.setOnClickListener(new BtnListeners());
    switchBtn.setOnClickListener(new BtnListeners());
    mMainHandler.sendEmptyMessageDelayed(EventUtil.CAMERA_HAS_STARTED_PREVIEW, 1500);
}
Two important methods start and stop detection respectively:
private void startGoogleFaceDetect() {
    Camera.Parameters params = CameraInterface.getInstance().getCameraParams();
    if (params.getMaxNumDetectedFaces() > 0) {
        if (faceView != null) {
            faceView.clearFaces();
            faceView.setVisibility(View.VISIBLE);
        }
        CameraInterface.getInstance().getCameraDevice().setFaceDetectionListener(googleFaceDetect);
        CameraInterface.getInstance().getCameraDevice().startFaceDetection();
    }
}

private void stopGoogleFaceDetect() {
    Camera.Parameters params = CameraInterface.getInstance().getCameraParams();
    if (params.getMaxNumDetectedFaces() > 0) {
        CameraInterface.getInstance().getCameraDevice().setFaceDetectionListener(null);
        CameraInterface.getInstance().getCameraDevice().stopFaceDetection();
        faceView.clearFaces();
    }
}
5. How does face detection coordinate with taking pictures and switching cameras?
First, the official Javadoc for startFaceDetection():
/**
 * Starts the face detection. This should be called after preview is started.
 * The camera will notify {@link FaceDetectionListener} of the detected
 * faces in the preview frame. The detected faces may be the same as the
 * previous ones. Applications should call {@link #stopFaceDetection} to
 * stop the face detection. This method is supported if {@link
 * Parameters#getMaxNumDetectedFaces()} returns a number larger than 0.
 * If the face detection has started, apps should not call this again.
 *
 * <p>When the face detection is running, {@link Parameters#setWhiteBalance(String)},
 * {@link Parameters#setFocusAreas(List)}, and {@link Parameters#setMeteringAreas(List)}
 * have no effect. The camera uses the detected faces to do auto-white balance,
 * auto exposure, and autofocus.
 *
 * <p>If the apps call {@link #autoFocus(AutoFocusCallback)}, the camera
 * will stop sending face callbacks. The last face callback indicates the
 * areas used to do autofocus. After focus completes, face detection will
 * resume sending face callbacks. If the apps call {@link
 * #cancelAutoFocus()}, the face callbacks will also resume.</p>
 *
 * <p>After calling {@link #takePicture(Camera.ShutterCallback, Camera.PictureCallback,
 * Camera.PictureCallback)} or {@link #stopPreview()}, and then resuming
 * preview with {@link #startPreview()}, the apps should call this method
 * again to resume face detection.</p>
 *
 * @throws IllegalArgumentException if the face detection is unsupported.
 * @throws RuntimeException if the method fails or the face detection is
 *         already running.
 * @see FaceDetectionListener
 * @see #stopFaceDetection()
 * @see Parameters#getMaxNumDetectedFaces()
 */
I am sure everyone can follow it, so I will not translate it sentence by sentence. The key point: after calling takePicture() or stopPreview(), face detection must be started again to resume. There is no need to stop it manually before taking a picture; in my tests, stopping it manually actually causes problems. Also, after takePicture() (internally the camera does a stopPreview and a startPreview), startFaceDetection() cannot be called immediately; with no delay it simply has no effect, so a delay has to be added.
private void takePicture() {
    CameraInterface.getInstance().doTakePicture();
    mMainHandler.sendEmptyMessageDelayed(EventUtil.CAMERA_HAS_STARTED_PREVIEW, 1500);
}
The second issue is that after switching cameras the Camera instance changes, so stopFaceDetection() must be called, preceded by setFaceDetectionListener(null) to clear the listener. Once the other camera is open and previewing again, start detection once more.
private void switchCamera() {
    stopGoogleFaceDetect();
    int newId = (CameraInterface.getInstance().getCameraId() + 1) % 2;
    CameraInterface.getInstance().doStopCamera();
    CameraInterface.getInstance().doOpenCamera(null, newId);
    CameraInterface.getInstance().doStartPreview(surfaceView.getSurfaceHolder(), previewRate);
    startGoogleFaceDetect();
}
The rest of the code changes are minor, so I will not paste everything; see the full source if interested. Screenshots follow:
Below is the preview screen; the shutter and switch icons have been replaced with the stock Android 4.4 ones, since the originals were really ugly.
Below is the detection result with the camera pointed at a TV show:
And one more, pointed at a picture on a computer screen:
Many people dismiss Google's built-in detection as too weak, but judging from these tests it is actually quite good. Once the OpenCV demo is out I will compare the two on the same preview. What most people are really complaining about is that Google only provides detection, not recognition or authentication. Then again, if it provided everything, there would be nothing left for me to do. It is safe to predict that what should come will come before long.
One more note: some phones, even on Android 4.0 or later, still do not support this face detection API. Some support it fine on the back camera but not on the front one; on the ZTE Geek, for example, switching to the front camera immediately crashes with "Camera Server died, ICamera died". For problems like that, blame the phone vendor, not my code.
-------------------- This article is original work; when reposting, please credit the author: yanzi1225627