
How to do Face Recognition in Android with Google Mobile Vision API


Feb 20, 2021

Introduction

With the release of Google Play services 7.8+, we can use the new Mobile Vision APIs, which provide face detection APIs that identify human faces in images and video faster and more accurately, with several advantages.

Advantages

  • Understanding faces at different orientations
  • Detecting facial features
  • Understanding facial expressions
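
As a quick preview of what these advantages look like in code, the sketch below reads orientation, landmarks, and an expression probability from a detected face. It assumes face is a com.google.android.gms.vision.face.Face returned by a detector built with ALL_LANDMARKS and ALL_CLASSIFICATIONS enabled (we set up such a detector later in this post); Landmark comes from com.google.android.gms.vision.face and PointF from android.graphics:

    float rotY = face.getEulerY();  // head rotation around the vertical axis, in degrees
    float rotZ = face.getEulerZ();  // in-plane tilt of the head, in degrees
    // Facial features: eyes, nose base, mouth corners, and so on
    for (Landmark landmark : face.getLandmarks()) {
        PointF position = landmark.getPosition();
        Log.d("FaceDemo", "Landmark " + landmark.getType() + " at " + position.x + "," + position.y);
    }
    // Facial expression: probability between 0.0 and 1.0 that the face is smiling
    float smilingProbability = face.getIsSmilingProbability();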

    Face Detection vs Facial Recognition

• Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary images.
• A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image.
• In general, face detection extracts people’s faces from images, while face recognition tries to find out who they are.

    Creating an App that detects faces

• Open Android Studio, select File–>New–>New Project–>Empty Activity, and click Next.
[Screenshot: New Project wizard]
• Then enter a proper project name, select Java as the language, and click Finish. The app will open when the build is successful.
[Screenshot: project name and language settings]
• We can see that our activity_main layout contains a single TextView node. Delete it and replace the layout content with the following. Note that the layout_alignParent attributes only work inside a RelativeLayout, so the two views are wrapped in one:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_alignParentStart="true"
        android:text="Process" />

    <ImageView
        android:id="@+id/imgview"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</RelativeLayout>

• We should also edit our AndroidManifest.xml file at this point, adding the following element between <application>…</application>. This tells Google Play services to download the face detection library to the device at install time:

<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face" />

• Add the following dependency to the module-level build.gradle (:app) file:

dependencies {
    implementation 'com.google.android.gms:play-services-vision:20.1.0'
}

• Download the Google Play services SDK tool: in Android Studio, Tools–>SDK Manager–>SDK Tools.
[Screenshot: SDK Tools tab in the SDK Manager]
• The app skeleton has been created; now we are just going to process an image that is already bundled with the app.

    Implementation

This application has a single button that will load the image, detect any faces in it, and draw a red rectangle around each one. Let’s write the code to achieve this:

    Create Button Click Listener

In MainActivity.java, inside the onCreate method, add the following code:

// Wire up the Process button; the face detection work happens in onClick
Button btn = (Button) findViewById(R.id.button);
btn.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // image loading, detection, and drawing code goes here
        }
});

This sets up the onClick event handler, which fires when the user presses the button. When they do, we want to load the image, run face detection on it, and draw a red rectangle over any faces it finds.
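
For reference, the snippets in this section assume roughly the following imports in MainActivity.java (a sketch; Android Studio can add these for you automatically):

import android.app.AlertDialog;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import android.graphics.drawable.BitmapDrawable;
import android.util.SparseArray;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;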

    Load the Image From Resource

• Now we are going to load the image so that we can later draw a red rectangle over any detected faces.
• We need to make sure that the bitmap is mutable, which is why we pass BitmapFactory.Options with inMutable set to true.
• First we get a handle on the ImageView control for later use. Then we use BitmapFactory to load the bitmap.
• This assumes an image named test1 has been added to res/drawable, making it accessible as R.drawable.test1. If you have used a different name for your image, replace test1 with that name.

ImageView myImageView = (ImageView) findViewById(R.id.imgview);
// Decode the drawable resource into a mutable bitmap so it can be drawn on
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap myBitmap = BitmapFactory.decodeResource(
        getApplicationContext().getResources(),
        R.drawable.test1,
        options);

    Create a Paint Object

• The Paint object is used for drawing on the image.
• Here we set a stroke width of 5 pixels, a red color, and the STROKE style.
• The STROKE style means only the outline of each rectangle is drawn, not a filled shape.

    Paint myRectPaint = new Paint();
    myRectPaint.setStrokeWidth(5);
    myRectPaint.setColor(Color.RED);
    myRectPaint.setStyle(Paint.Style.STROKE);

    Create a Canvas Object

• Here we set up a temporary bitmap with the same dimensions as the original.
• From the temp bitmap we create a new canvas and draw the original bitmap onto it.

    Bitmap tempBitmap = Bitmap.createBitmap(myBitmap.getWidth(), myBitmap.getHeight(), Bitmap.Config.RGB_565);
    Canvas tempCanvas = new Canvas(tempBitmap);
    tempCanvas.drawBitmap(myBitmap, 0, 0, null);

    Create the Face Detector

• Here we create a new FaceDetector object using its builder.
• The meta-data entry we added to AndroidManifest.xml earlier ensures that Google Play services downloads the face detection library, so it is available at runtime.
• The first time our face detector runs, Google Play services may not have finished downloading that library yet, so we need to check that the detector is operational before using it.

FaceDetector faceDetector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(false)
        .build();
if (!faceDetector.isOperational()) {
    new AlertDialog.Builder(v.getContext())
            .setMessage("Could not set up the face detector!")
            .show();
    return;
}

• Here the app detects faces in a single still image, so no tracking is necessary. If you are detecting faces in video or on a live camera preview, you should enable tracking by calling setTrackingEnabled(true) on the builder.
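
The builder also exposes options for the landmark and expression features mentioned in the Advantages section. A sketch of a more fully configured detector (the extra data slows detection down, so enable only what you need):

FaceDetector faceDetector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)              // eyes, nose base, mouth corners, etc.
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)  // smiling and eyes-open probabilities
        .setMode(FaceDetector.ACCURATE_MODE)                      // slower but more precise than FAST_MODE
        .build();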

    Detect the Faces

• Now, to detect the faces, create a Frame using the bitmap.
• Then call the detect method on the face detector, passing in the frame, to get a SparseArray of Face objects.

    Frame frame = new Frame.Builder().setBitmap(myBitmap).build();
    SparseArray<Face> faces = faceDetector.detect(frame);
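
Depending on the image, the detector may find no faces at all, and the app would then silently draw nothing. A minimal guard you may want to add at this point (the message text is just an example):

if (faces.size() == 0) {
    new AlertDialog.Builder(v.getContext())
            .setMessage("No faces were detected in this image.")
            .show();
    return;
}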

    Draw Rectangles on the Faces

    • Now we have a sparse array of faces.
    • We can iterate through this array to get the coordinates of the bounding rectangle for the face.
• The API returns the X and Y coordinates of the top-left corner, as well as the width and height.
• The rectangle requires the X and Y of the top-left and bottom-right corners.
• So we have to calculate the bottom-right corner from the top-left corner, the width, and the height.

for (int i = 0; i < faces.size(); i++) {
    Face thisFace = faces.valueAt(i);
    // Top-left corner of the face's bounding box
    float x1 = thisFace.getPosition().x;
    float y1 = thisFace.getPosition().y;
    // Bottom-right corner, derived from the width and height
    float x2 = x1 + thisFace.getWidth();
    float y2 = y1 + thisFace.getHeight();
    tempCanvas.drawRoundRect(new RectF(x1, y1, x2, y2), 2, 2, myRectPaint);
}
myImageView.setImageDrawable(new BitmapDrawable(getResources(), tempBitmap));
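
Finally, once you are done with the detector, it is good practice to release it so that its underlying native resources are freed:

faceDetector.release();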

    Conclusion

Now all you have to do is run the app. For example, if you use the test1 image from earlier, you’ll see that the man’s face is detected.

[Screenshot: the detected face outlined with a red rectangle]

    WRITTEN BY

    Manikandan M


    REVIEWED BY

    Naveen Lingam
