Three years ago, I wrote about a Google patent that examines searcher responses to search results, in a post on biometric parameters as a ranking signal for Google search results.

Biometric ranking based on machine learning from smartphone cameras

Since then, I've been keeping an eye out for patent applications from Google that try to better understand a user's facial expressions on the device, using the smartphone's camera. A patent application has now been filed for such a process. Many people may wonder whether they would be comfortable with the process it describes. The overview of this new patent application is one of the shortest I've seen, saying:

"Some computing devices (mobile phones, tablet computers, etc.) have graphical keyboards, handwriting recognition systems, voice-to-text systems, and other types of user interfaces ("UI") for composing electronic documents and messages.
Such user interfaces may offer not only text but also other limited forms of media content (e.g., emotional icons or so-called "pictograms", graphic images, voice input, and other types of media content) as a way to enter the text of a document or message."

The patent application is:

Graphical image search based on the emotional state of the user of the computing device
Inventors: Matthias Grundmann, Karthik Raveendran, Daniel Castro Chin
US Patent Application: 20190228031
Publication Date: July 25, 2019
Filing Date: January 28, 2019

Overview: A computing device is described that includes a camera configured to capture an image of the user of the computing device, a memory configured to store the user's image, at least one processor, and at least one module.
The at least one module is operable by the at least one processor to obtain the user's image from memory, determine a first emotion classification tag based on the image, and identify, from a database of pre-classified images, at least one graphic image associated with that first emotion classification tag.
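To make the abstract concrete, here is a minimal sketch of the pipeline it describes: an image is reduced to an emotion tag, and that tag keys a lookup into a database of pre-classified graphic images. Everything here is an assumption for illustration; the patent names no model, and `classify_emotion`, the "smile score" heuristic, and the `EMOJI_DB` entries are all invented stand-ins, not anything from the filing.

```python
# Hypothetical database of pre-classified graphic images, keyed by
# emotion classification tag (tags and image names are invented).
EMOJI_DB = {
    "happy": ["grinning_face", "smiling_face_with_smiling_eyes"],
    "sad": ["crying_face", "pensive_face"],
    "neutral": ["neutral_face"],
}

def classify_emotion(image_pixels):
    """Stand-in for the on-device emotion classifier.

    A real implementation would run a facial-expression model on the
    camera frame; here we just threshold a fake "smile score" (the mean
    pixel value) purely for illustration.
    """
    smile_score = sum(image_pixels) / len(image_pixels)
    if smile_score > 0.6:
        return "happy"
    if smile_score < 0.3:
        return "sad"
    return "neutral"

def suggest_graphics(image_pixels):
    """Return the emotion tag plus graphic images pre-classified with it."""
    tag = classify_emotion(image_pixels)
    return tag, EMOJI_DB.get(tag, [])

# Mock camera frame standing in for a smiling user.
tag, suggestions = suggest_graphics([0.9, 0.8, 0.7])
print(tag, suggestions)
```

The point of the sketch is the decoupling the claims rely on: the classifier only has to produce a tag, and the graphics are matched by tag rather than by analyzing the graphics themselves, because they were classified ahead of time.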