I have just realised that the pics I used in my last post about face recognition could not have come from the sensor I was referring to (the one with its own built-in face recognition circuitry), because that sensor did not have a camera. It could not take any pictures: it could only receive a visual input, assess that input, and send a signal if it thought there were any faces present.
So this post is where those pictures should have been: they were taken by a camera, each picture the camera generated was assessed to see if there were any faces present, and green bounding boxes were then drawn round the part(s) of the picture considered to contain faces. A subtly different way of doing things.
This two-step approach requires 1) setting up a camera and getting it to take a picture, and then 2) assessing that picture for faces. Neither step is super-tricky, but both are at the fiddly end of the Python coding spectrum. To put it another way, I needed some help with both stages (thanks Dr Footleg!).
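Roughly speaking, step 1 boils down to something like the sketch below. I won't swear this is word for word what I ran: it assumes a Raspberry Pi camera driven through the picamera library, and the filename is just a placeholder.

```python
from time import sleep
from picamera import PiCamera  # assumes a Raspberry Pi camera + the picamera library

camera = PiCamera()
camera.resolution = (1024, 768)
camera.start_preview()
sleep(2)                        # give the sensor a moment to settle its exposure
camera.capture('snapshot.jpg')  # fire the camera: save one frame to disk
camera.stop_preview()
camera.close()
```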
So the above image is the very first picture that I managed to take automatically by using Python code to fire the camera. As you can see, it is upside down: I had yet to work out how to rotate the image. You can also see that it was late into the night and I am looking a bit incredulous: it is finally working!
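(For what it's worth, if the camera is driven through picamera as assumed in the sketch above, the upside-down problem turns out to be a one-liner, set before capturing:)

```python
camera.rotation = 180  # turn the captured image the right way up
```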
Later I managed to get better pictures, and then add the face recognition analysis (see previous pics!). Complicated? Yes, fairly, but by no means impossible. Reliable? Not totally. Worth doing? Well, you’ve got to start somewhere.
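And if you want somewhere to start with step 2, here is a rough sketch of the face analysis side. Again, I'm not claiming this is exactly my code: it assumes OpenCV's bundled Haar cascade detector and placeholder filenames, but it shows the shape of the thing: find faces in the picture, then draw green boxes round them.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('snapshot.jpg')               # the picture taken by the camera
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # the detector works on greyscale

# Look for faces; these tuning values are common defaults, not gospel
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a green bounding box round each region judged to contain a face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('snapshot_faces.jpg', img)
```

Haar cascades are quick enough to run on a small machine, but they do throw up false positives, which is part of why I'd answer "Reliable? Not totally" above.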