
How to do Face Recognition in Android with Google Mobile Vision API


Feb 20, 2021


With the release of Google Play services 7.8, Google introduced new Mobile Vision APIs, including face detection APIs that identify human faces in images and video faster and more accurately, with several advantages:


  • Understanding faces at different orientations
  • Detecting facial features
  • Understanding facial expressions

    Face Detection vs Facial Recognition

    • Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary images.
    • A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image.
    • In general, face detection extracts people’s faces from images, while face recognition tries to determine who they are.

    Creating an App that detects faces

    • Open Android Studio, select File–>New–>New Project–>Empty Activity, and click Next.
    • Then enter a proper project name, select a language (we use Java here), and click Finish. The app will open when the build succeeds.
    • We can see that our activity_main layout contains a single node. Delete this and replace it with an ImageView to display the photo and a Button to trigger detection:

            android:layout_alignParentStart="true" />
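The layout snippet above is truncated; a minimal activity_main.xml along these lines would work (the ids imageView and button are our assumed names, matching the Java code later in the post):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android=""

    <!-- Shows the photo and, after detection, the rectangles drawn on it -->
    <ImageView
        android:layout_alignParentTop="true"
        android:layout_alignParentStart="true" />

    <!-- Triggers loading the image and running face detection -->
    <Button
        android:text="Detect Faces"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true" />

</RelativeLayout>
```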

    • We should edit our AndroidManifest.xml file at this point with the following line between <application>…</application>:

        <meta-data android:name="" android:value="face" />

    • Add the following dependency to the build.gradle (:app) file:

    dependencies {
        // use the latest available version of play-services-vision
        implementation ''
    }

    • Download Google Play Services SDK tool: In Android Studio, Tools–>SDK Manager–>SDK Tools
    • The app has now been created; next we are going to process an image that is already bundled with the app.


    This application has a single button that will load the image, detect any faces in it, and draw a red rectangle around them. Let’s write the code to achieve this:

    Create Button Click Listener

    In your onCreate method, add the following code:

    Button btn = (Button) findViewById(; // "button" id assumed from the layout
    btn.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // load the image, detect faces, and draw rectangles (steps below)
            }
    });

    This sets up the event handler (onClick) that fires when the user presses the button. When they do, we want to load the image, process it for faces, and draw a red rectangle over any faces it finds.

    Load the Image From Resource

    • Since we are going to draw on the image (a red rectangle over any detected faces), the bitmap we load must be mutable.
    • First we get a handle on the ImageView control for later use. Then we use BitmapFactory to load the bitmap, setting inMutable to true in the options.
    • The image is accessible in the resources as R.drawable.test1. If you used a different name for your image, replace test1 with that name.

    ImageView myImageView = (ImageView) findViewById(; // "imageView" id assumed from the layout
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true; // we will draw on this bitmap later
    Bitmap myBitmap = BitmapFactory.decodeResource(getApplicationContext().getResources(), R.drawable.test1, options);

    Create a Paint Object

    • The Paint object is used for drawing on the image.
    • Here we set a stroke width of 5 pixels, a red color, and a stroke style.
    • The stroke style means only the outline of the rectangle is drawn, not a filled shape.

    Paint myRectPaint = new Paint();
    myRectPaint.setStrokeWidth(5);
    myRectPaint.setColor(Color.RED);
    myRectPaint.setStyle(Paint.Style.STROKE);

    Create a Canvas Object

    • Here we set up a temporary bitmap with the same dimensions as the original.
    • From the temp bitmap we create a new Canvas and draw the original bitmap onto it.

    Bitmap tempBitmap = Bitmap.createBitmap(myBitmap.getWidth(), myBitmap.getHeight(), Bitmap.Config.RGB_565);
    Canvas tempCanvas = new Canvas(tempBitmap);
    tempCanvas.drawBitmap(myBitmap, 0, 0, null);

    Create the Face Detector

    • Here we create a new FaceDetector object using its builder.
    • The meta-data entry we added to AndroidManifest.xml earlier tells Google Play services to download the face detection libraries so they are available at runtime.
    • It’s possible that, the first time our face detector runs, Google Play services won’t be ready to process faces yet. So we need to check that our detector is operational before we use it.

    FaceDetector faceDetector = new FaceDetector.Builder(getApplicationContext())
            .setTrackingEnabled(false)
            .build();
    if (!faceDetector.isOperational()) {
        new AlertDialog.Builder(v.getContext()).setMessage("Could not set up the face detector!").show();
        return;
    }

    • Here the app is detecting faces in a still frame, so no tracking is necessary. If we were detecting faces in video or in a live preview from the camera, we should set setTrackingEnabled(true) on the builder.
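For the video case, the builder configuration would change along these lines (a sketch, not from the original post; FaceDetector.FAST_MODE is a real mode constant, but choose the mode that fits your use case):

```java
// Tracking enabled: the detector assigns persistent IDs to faces across frames
FaceDetector videoDetector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(true)
        .setMode(FaceDetector.FAST_MODE) // favour speed over accuracy for live preview
        .build();
```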

    Detect the Faces

    • Now to detect the faces create a frame using the bitmap.
    • Then call the detect method on the face detector, using the frame, to get a sparse array of face objects.

    Frame frame = new Frame.Builder().setBitmap(myBitmap).build();
    SparseArray<Face> faces = faceDetector.detect(frame);

    Draw Rectangles on the Faces

    • Now we have a sparse array of faces.
    • We can iterate through this array to get the coordinates of the bounding rectangle for the face.
    • The API returns X, Y coordinates of the top left corner, as well as the width and height.
    • The Rectangle requires X, Y of the top left and bottom right corners.
    • So we have to calculate the bottom right corner from the top left, the width, and the height:
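That arithmetic is easy to sanity-check in plain Java, with made-up values standing in for a detected face:

```java
public class CornerMath {
    public static void main(String[] args) {
        // hypothetical bounding box: top-left (40, 60), width 120, height 150
        float x1 = 40f, y1 = 60f;
        float width = 120f, height = 150f;
        // bottom-right corner = top-left + size
        float x2 = x1 + width;
        float y2 = y1 + height;
        System.out.println(x2 + "," + y2); // prints "160.0,210.0"
    }
}
```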

    for (int i = 0; i < faces.size(); i++) {
      Face thisFace = faces.valueAt(i);
      float x1 = thisFace.getPosition().x;
      float y1 = thisFace.getPosition().y;
      float x2 = x1 + thisFace.getWidth();
      float y2 = y1 + thisFace.getHeight();
      tempCanvas.drawRoundRect(new RectF(x1, y1, x2, y2), 2, 2, myRectPaint);
    }
    myImageView.setImageDrawable(new BitmapDrawable(getResources(), tempBitmap));


    Now all you have to do is run the app. With the sample image loaded (R.drawable.test1 in our code), you’ll see that the face is detected and outlined in red.



    Manikandan M



    Naveen Lingam
