Google Pixel 6 Pro vs Google Pixel 6



Comparing images with a facial recognition algorithm has come a long way, and Google + Deep Learning is one of the most common technology stacks used to build machine learning solutions of this kind. In this post I explain the comparison of two images from a video project I carried out over a couple of days using Google’s x.ai studio and YouTube.

I took two images and present the comparison of one against the other in the images below. To make the comparison I used 50 pairs of pixel data from the dataset described above.

This comparison shows the results of a neural network trained with only one feature distinguishing the two images. In the next generation this is likely to become a common problem in a more comprehensive machine learning benchmark.
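To make that a bit more concrete, here is a minimal sketch of the kind of patch-by-patch comparison described above. The file names are hypothetical and this is not the exact pipeline from the video; it simply samples 50 corresponding patches from the two photographs and averages their pixel differences.

```python
import numpy as np
from PIL import Image

# Hypothetical file names; substitute the two photographs being compared.
img_a = np.asarray(Image.open("image_pixel6.jpg").convert("L"), dtype=np.float32)
img_b = np.asarray(Image.open("image_pixel6pro.jpg").convert("L"), dtype=np.float32)

# Compare on a common size so patch coordinates line up.
h = min(img_a.shape[0], img_b.shape[0])
w = min(img_a.shape[1], img_b.shape[1])
img_a, img_b = img_a[:h, :w], img_b[:h, :w]

rng = np.random.default_rng(0)
patch = 16      # patch side length in pixels
n_pairs = 50    # 50 pairs of pixel data, as in the post

diffs = []
for _ in range(n_pairs):
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    pa = img_a[y:y + patch, x:x + patch]
    pb = img_b[y:y + patch, x:x + patch]
    # Mean absolute difference between corresponding patches.
    diffs.append(np.mean(np.abs(pa - pb)))

print(f"mean patch difference over {n_pairs} pairs: {np.mean(diffs):.2f}")
```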

Comparing the results for the two images visually

Android smartphone APIs with face detection algorithms

An image of face detection using the face detection API on Android

An image of face detection using the face detection API on iOS

Datasets:

Fit.ai

TrueTypes on a smartphone device without face detection programs

Face detection with Google + Deep Learning in Android and iOS

Higher image quality at a better pixel size

Intriguing results at higher image resolutions

A thought process about the data

Difference between the two algorithms

We have 3 different datasets to apply our machine learning algorithm to, all designed to support a good prediction. The situation changes slightly once we build an algorithm that uses the data to make a prediction. In the dataset here we have 2 images: one with face detection running as a backend, and one with face detection as a foreground feature. On average we shouldn’t see much difference between the two types of face detection algorithm in terms of the following (a comparison sketch follows the list):

No distortions

Shape of faces

Overall human recognition
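To make that check concrete, here is a minimal sketch of how two face detection configurations could be compared over the same images. OpenCV’s bundled Haar cascade stands in for the detectors (the project itself used the Android and iOS APIs), and the image paths are hypothetical placeholders.

```python
import cv2

# OpenCV's bundled frontal-face cascade stands in for both detector variants.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path, scale_factor, min_neighbors):
    """Run the cascade with one configuration and return the face count."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(
        gray, scaleFactor=scale_factor, minNeighbors=min_neighbors)
    return len(faces)

# Hypothetical dataset files; replace with the actual image paths.
image_paths = ["sample_01.jpg", "sample_02.jpg"]

for path in image_paths:
    # Two configurations playing the role of the "backend" and
    # "foreground" face detection variants.
    backend_count = count_faces(path, scale_factor=1.1, min_neighbors=5)
    foreground_count = count_faces(path, scale_factor=1.05, min_neighbors=3)
    print(path, backend_count, foreground_count)
```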

Using different data points from the face detection algorithm than the ones in the link above.

The difference between face detection models could be due to their underlying architectures. An eye detection model depends heavily on the face detection model: it first needs a face region before it can look for eyes. It would therefore be harder for a model that depends on the face detection model to recognize different eye makeup.
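That dependence can be sketched with OpenCV’s Haar cascades (my own assumption about tooling, not the exact models used here): the eye detector only runs inside the region returned by the face detector, so any face detection error propagates straight to the eyes.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# Hypothetical input file.
gray = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    # Eyes are only searched for inside the detected face region,
    # so the eye model inherits any error from the face model.
    face_roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi, 1.1, 5)
    print(f"face at ({x}, {y}) -> {len(eyes)} eye(s) found")
```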

Topological mapping with different images in an app

Furthermore, the Android model has intermediate data points that only use face detection as a layer between the input image and the real photograph, and these contain no more than one element. Similarly, an app or a face detection bot could use no face detection layer at all when there is more than one object.

Face detection using a traditional model under TrueTypes in an iOS app

Topological mapping or pitch and yaw mapping

Moreover, the different dimensions of the face also make a big difference.
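The video does not show how the pitch and yaw mapping was computed, but a common approach, sketched below purely as an illustration, is to fit a generic 3D face model to a few 2D landmarks with OpenCV’s solvePnP and read pitch and yaw off the resulting rotation. The landmark coordinates here are made-up placeholders.

```python
import cv2
import numpy as np

# Generic 3D reference points of a face (nose tip, chin, eye corners,
# mouth corners), in an arbitrary model coordinate system.
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

# Hypothetical 2D landmarks (pixel coordinates) from a landmark detector.
image_points = np.array([
    (320.0, 240.0), (325.0, 380.0), (250.0, 200.0),
    (390.0, 200.0), (275.0, 320.0), (365.0, 320.0),
], dtype=np.float64)

# Simple pinhole camera approximation for a 640x480 image.
width, height = 640, 480
camera_matrix = np.array([
    [width, 0, width / 2],
    [0, width, height / 2],
    [0, 0, 1],
], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)
# RQDecomp3x3 returns Euler angles in degrees: (pitch, yaw, roll).
angles, *_ = cv2.RQDecomp3x3(rotation_matrix)
pitch, yaw, roll = angles
print(f"pitch={pitch:.1f} yaw={yaw:.1f} roll={roll:.1f}")
```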

Conclusions

What I have shown in the video is that the way we analyze an image does not produce a remarkable difference when the data is extracted from certain images. If we use faces as the model’s baseline, we can see the results even more clearly across the different datasets. Some major issues that this technique had are: (1) face detection methods might need to be customized to get the best accuracy;

(2) differences in the background angle and the eye illumination;

(3) differing correlation between the eye detection and face detection algorithms.

Additional experiments include:

Pre-trained training set with limited complexity

Coloring the bounding box from the face detection endpoint with fairly realistic facial shading (sketched below)

Non-rectangular masks over the detected faces to boost detection accuracy (also sketched below).
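Both of those ideas can be sketched quickly with OpenCV (again an assumption about tooling, not the pipeline used in the video): a semi-transparent shading over each detected bounding box, plus an elliptical, non-rectangular mask over the same region.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical input file.
img = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

overlay = img.copy()
mask = np.zeros(gray.shape, dtype=np.uint8)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    # Shaded (filled) bounding box on the overlay.
    cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 180, 255), thickness=-1)
    # Non-rectangular (elliptical) mask over the same face region.
    cv2.ellipse(mask, (x + w // 2, y + h // 2), (w // 2, h // 2),
                0, 0, 360, 255, thickness=-1)

# Blend the shading onto the original image at 30% opacity.
shaded = cv2.addWeighted(overlay, 0.3, img, 0.7, 0)
# Keep only the elliptical face regions.
masked = cv2.bitwise_and(img, img, mask=mask)

cv2.imwrite("shaded_boxes.jpg", shaded)
cv2.imwrite("elliptical_faces.jpg", masked)
```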

In this phase I have tried to give a behind-the-scenes visualization of the final result, and what you can see in the video is that it is a mix of different algorithms. This takes a bit more data point storage than is strictly needed.

On top of that we had bounding box images, which caused some tough optimization issues as we were testing machine learning models on different kinds of datasets for different algorithms.

The logo for eye detection and face detection on an Android device

Conclusion:

The most fascinating fact about face detection is that it is, after all, an estimation model. In the end, this project was a success, and I hope to bring back some really good results. You will see similar material on YouTube about making use of your own models.

This is an article I wrote with my team for Keras, with a view towards establishing a more efficient machine learning approach on an A/B basis. If you have any suggestions, we can use those too.
