Introducing Google Glass’s inevitable face recognition API
Have you ever forgotten someone’s name? How about slightly more detailed information, such as their birthday? Maybe a few of you have even forgotten your and your significant other’s anniversary?
What if you could pull in all this data automatically just by looking at someone’s face? It’s a creepy prospect, but a seemingly inevitable reality thanks to Lambda Labs’ API for Google Glass.
Currently, Google has no rules against such usage, but there are rules against live streaming to a remote server. Users would therefore have to snap a photo first, send it on to the server, and pull back the resulting data, which introduces some delay.
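The snap-then-upload workflow described above might look something like the sketch below. Note that the endpoint URL, parameter names, and response fields are all hypothetical; Lambda Labs’ actual API details aren’t covered here.

```python
import base64
import json
from urllib import request

# Hypothetical endpoint -- Lambda Labs' real API surface is not documented here.
RECOGNIZE_URL = "https://api.example.com/recognize"

def encode_snapshot(image_bytes):
    """Base64-encode a captured photo for transport in a JSON body."""
    return base64.b64encode(image_bytes).decode("ascii")

def build_request(image_bytes):
    """Build the HTTP request for one snapped photo.

    Because live streaming to a remote server is disallowed, the app
    snaps a single frame and sends it on -- hence the extra delay.
    """
    body = json.dumps({"image": encode_snapshot(image_bytes)}).encode("utf-8")
    return request.Request(
        RECOGNIZE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_matches(response_text):
    """Pull recognised names out of a (hypothetical) JSON reply."""
    payload = json.loads(response_text)
    return [face.get("name", "unknown") for face in payload.get("faces", [])]
```

In practice the Glass app would capture the frame with the camera API, call `build_request`, and dispatch it off the UI thread, displaying whatever `parse_matches` returns once the round trip completes.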
Whilst we’re getting further and further into Terminator-style AR (if the military isn’t already working on it, some targeting system is surely in development), the API – currently in beta – appears to have some recognition issues.
You can test out the web demo here, but I’ve had mixed results, ranging from failing to detect faces wearing glasses to mistaking Arnold Schwarzenegger for Jennifer Aniston.
Part of the limitation is that Lambda Labs cannot access any random individual’s personal information; the API needs some source data to go by.
Google+ offers similar functionality for tagging photos, suggesting people to tag across groups of photos at once. Perhaps a future extension will hook into your “Circles” on the service to pull in information about the people you actually know.
Can you think of any innovative uses for facial recognition in your future Google Glass apps?