{"id":29275,"date":"2012-06-11T14:44:16","date_gmt":"2012-06-11T06:44:16","guid":{"rendered":"http:\/\/techielobang.com\/blog\/?p=29275"},"modified":"2012-06-11T14:44:16","modified_gmt":"2012-06-11T06:44:16","slug":"webcam-used-for-avatar-to-mimic-user-facial-expression-video","status":"publish","type":"post","link":"https:\/\/techielobang.com\/blog\/2012\/06\/11\/webcam-used-for-avatar-to-mimic-user-facial-expression-video\/","title":{"rendered":"Webcam Used for Avatar to Mimic User Facial Expression (Video)"},"content":{"rendered":"<p>With technology these days, I guess there is no need for a special camera to capture facial expressions and feed them into an avatar. Now, a Keio University group, led by Associate Professor Yasue Mitsukura, has developed a method for measuring which way a person is facing and how their expression changes.<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" data-attachment-id=\"29276\" data-permalink=\"https:\/\/techielobang.com\/blog\/2012\/06\/11\/webcam-used-for-avatar-to-mimic-user-facial-expression-video\/webcam-avatar\/\" data-orig-file=\"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?fit=600%2C342&amp;ssl=1\" data-orig-size=\"600,342\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"webcam-avatar\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?fit=600%2C342&amp;ssl=1\" class=\"aligncenter 
size-full wp-image-29276\" title=\"webcam-avatar\" src=\"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?resize=600%2C342\" alt=\"\" width=\"600\" height=\"342\" srcset=\"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?w=600&amp;ssl=1 600w, https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?resize=300%2C171&amp;ssl=1 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><\/p>\n<p><!--more-->Previously, I thought special markers were needed on the face for the avatar to track facial expressions. Now, it seems that a standard webcam can do the trick. Watch the video.<\/p>\n<p><iframe loading=\"lazy\" src=\"http:\/\/www.youtube.com\/embed\/CvievLytmrs\" frameborder=\"0\" width=\"600\" height=\"338\"><\/iframe><br \/>\n(<a href=\"http:\/\/www.diginfo.tv\/v\/12-0107-r-en.php\" target=\"_blank\">source<\/a>)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>With technology these days, I guess there is no need for a special camera to capture facial expressions and feed them into an avatar. 
Now, a Keio University group, led by..<\/p>\n","protected":false},"author":1,"featured_media":29276,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_links_to":"","_links_to_target":""},"categories":[2785,14,3],"tags":[1592,8263,8261,8262,4414],"class_list":["post-29275","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-design","category-techie","category-technology","tag-avatar","tag-facial-expression","tag-keio","tag-mimic","tag-webcam"],"aioseo_notices":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/techielobang.com\/blog\/wp-content\/uploads\/2012\/06\/webcam-avatar.jpg?fit=600%2C342&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/p8YKZ-7Cb","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/posts\/29275","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tech
ielobang.com\/blog\/wp-json\/wp\/v2\/comments?post=29275"}],"version-history":[{"count":2,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/posts\/29275\/revisions"}],"predecessor-version":[{"id":29278,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/posts\/29275\/revisions\/29278"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/media\/29276"}],"wp:attachment":[{"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/media?parent=29275"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/categories?post=29275"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techielobang.com\/blog\/wp-json\/wp\/v2\/tags?post=29275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}