
Chapter 4 User Study & Evaluation

4.3 Discussion & Limitations


The results of question 7 show that 7 of the 9 participants in our user study preferred the proposed interface for retrieving motions. Of the remaining 2 participants, one said that browsing the motion maps directly (using the interface of the comparison group) is more convenient for finding a motion when the library contains only a small number of motion clips.

The results of question 8 illustrate that the proposed interface is more effective for retrieving simple motions, such as gaits, than for complex ones. We consider that no participant answered "running" because the trajectory map of a running motion differs little from that of a gait motion. In other words, because static strokes fail to carry the speed information of the movement, a running map is sometimes mistaken by the system for a gait map.

The results of question 9 illustrate that, in most cases, the participants preferred to define an entire skeleton motion by drawing the head and hand movements. Hence, to optimize the system and improve the effectiveness of the algorithm, it is not necessary to include the feet movements of the skeleton on the trajectory map.


The functionality of the 'stroke width' slider seemed to be superfluous.

Figure 4.9: Similarity-matching failures occurred in (a), (c), (d), and (f).


Figure 4.10: The query sketches for each category of target motions.


Chapter 5

Conclusion & Future Work

In this thesis, we proposed Sketch2Motion, a sketch-based interface that helps users retrieve an expected motion from a motion data library, rather than picking a motion by browsing its motion map alone or searching only by keywords. In our interface, by sketching the traces of the 5 key joints, users can browse not only the 2 motion maps (in the absolute coordinate system and the relative coordinate system) but also the skeleton animation played back interactively in a sub-window. We utilized a SIFT descriptor to measure the similarity between the input strokes and the trajectory maps of the motions in the data library. In addition, shadow-like guidance was provided behind the query sketch to support the users in drawing valid traces of the movement.
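The descriptor-based matching above can be illustrated with a minimal sketch. Our system uses SIFT; the code below substitutes a simplified stand-in, a grid of gradient-orientation histograms compared by cosine similarity, so that it runs without any vision library. The function names, grid size, and bin count are illustrative assumptions, not the parameters of our implementation.

```python
import numpy as np

def orientation_histogram_descriptor(img, grid=4, bins=8):
    """Simplified SIFT-like descriptor (an assumption, not our exact code):
    gradient-orientation histograms over a grid x grid layout of cells."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # orientation in [0, 2pi)
    h, w = img.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            # magnitude-weighted orientation histogram of one cell
            hist, _ = np.histogram(ang[cell], bins=bins,
                                   range=(0.0, 2 * np.pi),
                                   weights=mag[cell])
            desc.append(hist)
    desc = np.concatenate(desc)                      # grid*grid*bins dims
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

def sketch_similarity(query, trajectory_map):
    """Cosine similarity between two descriptors (1.0 = identical)."""
    return float(np.dot(orientation_histogram_descriptor(query),
                        orientation_histogram_descriptor(trajectory_map)))
```

In the real system, the query image is the rasterized stroke canvas and each library image is a rendered trajectory map; the library motion with the highest similarity is returned.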

A user study was designed to verify the effectiveness of the proposed sketch-based interface. In the comparative study, the time cost of using our interface was at the same level as that of the file-by-file browsing interface; we consider this is because the data library we constructed was too small for our interface to outperform the other interface in terms of time cost. We are convinced that our interface is more convenient for a motion library of typical size, with hundreds of motion categories.

In the evaluation study, a questionnaire with 5-point Likert scales was conducted to verify each functionality of the proposed UI. According to the resulting scores, we consider the proposed shadow-like guidance and the real-time animation playback to be necessary when sketching to search for an expected motion. Both motion maps (absolute coordinate and relative coordinate) were scored comparatively low, which suggests that browsing the static motion maps alone, in either coordinate system, is not clear enough for understanding an overall motion sequence.

Browsing the skeleton animation in real time is therefore necessary.

To understand the users' habits when defining motions with strokes, we examined all the query sketches of the participants. The users shared similar habits: depicting a gait motion with a smooth curve, a running motion with a wavy line, and so on. We believe that each category of motion is related to a characteristic curve shape. Therefore, the proposed sketch-based interface is applicable not only to the motion categories mentioned in our user study but also to other types of motion.

Our interface is simplified so that common users can draw on it without difficulty.


The further goals of this research are introduced as follows:

• Optimizing the interface for complex motions. As mentioned in the discussion section, the proposed interface failed to perform well when retrieving complex and redundant motion sequences such as dance. It is necessary to redraw the trajectory maps, either by sliding along the motion sequence to split it into several short sequences or by reducing the number of joints projected onto the trajectory map.

• Overcoming the limitation that our interface fails to depict the speed of movement. The reason for this shortcoming is that we used the SIFT descriptor to measure the similarity between the input query sketch and the images in the dataset, which ignores the temporal information on both sides. It is necessary to try a new motion-trace arrangement method: for instance, measuring the length of a section of the trace and counting the number of keyframes that fall on that section to calculate the speed of the joint movement along the trace.

• The proposed interface only supports retrieving skeleton motions from the motion library, which is not enough for creating character animation. Figure 5.1 shows an example of converting an AMC/ASF file to a BVH file and playing it in Blender with a fixed skeleton structure, without retargeting to any character model. Many studies discuss methods to accomplish such retargeting tasks automatically; one example is K. Aberman's work [29]. However, it takes several hours to train a new character model even with their pre-trained deep-learning framework. In our future work, we intend to look for a more suitable method to use as a plug-in for our sketch-based interface.
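The first goal above, sliding along a long sequence to split it into several short sub-sequences so that each gets its own, less cluttered trajectory map, can be sketched as follows. The window and stride sizes are illustrative assumptions, not values from our system.

```python
import numpy as np

def split_motion(trajectory, window=60, stride=30):
    """Split a long joint trajectory (N frames x 2 projected coordinates)
    into overlapping fixed-length segments; the tail is always covered
    by one final window ending at the last frame."""
    n = len(trajectory)
    if n <= window:                       # short clip: keep as a single map
        return [trajectory]
    starts = list(range(0, n - window + 1, stride))
    if starts[-1] != n - window:          # make sure the last frames are covered
        starts.append(n - window)
    return [trajectory[s:s + window] for s in starts]
```

Each returned segment would then be rendered to its own trajectory map and matched against the query independently.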
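The keyframe-counting idea in the second goal can be prototyped as below: divide the trace polyline into sections, measure each section's arc length, and divide by the time span of the keyframes on it. `frames_per_point` (the keyframe index at which the joint reaches each trace point), the section count, and the frame rate are hypothetical names and parameters for illustration, not part of the current system.

```python
import numpy as np

def section_speeds(points, frames_per_point, n_sections=4, fps=120.0):
    """Estimate joint speed along a trace.
    points:            (N, 2) polyline of a projected joint trajectory.
    frames_per_point:  keyframe index at which the joint reaches points[i].
    Returns the mean speed (arc length / elapsed time) per section."""
    points = np.asarray(points, dtype=float)
    # length of each polyline segment between consecutive points
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    speeds = []
    for chunk in np.array_split(np.arange(len(seg_len)), n_sections):
        length = seg_len[chunk].sum()
        t0 = frames_per_point[chunk[0]]          # first keyframe in section
        t1 = frames_per_point[chunk[-1] + 1]     # keyframe at section end
        dt = max(t1 - t0, 1) / fps               # avoid division by zero
        speeds.append(float(length / dt))
    return speeds
```

A running trace would then show higher per-section speeds than a gait trace of the same shape, which is exactly the information the static SIFT comparison discards.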


Figure 5.1: An example of converting an AMC/ASF file to a BVH file and playing it in Blender.


Acknowledgements

During the preparation of this master's thesis, I received invaluable help from many people. Their comments and advice contributed to its accomplishment.

First of all, my profound gratitude goes to my supervisor, Professor Kazunori Miyata, and to Senior Lecturer Haoran Xie for their teaching, support, and advice on my research. Professor Kazunori Miyata accepted me as a research student and gave me the chance to study and do research alongside the seniors in the Miyata Lab. He inspired me whenever I lost direction in my research. His profound insight into my paper and his meticulous reading of it taught me lessons that remain engraved in my memory. He provided beneficial help and precious comments throughout the whole writing process, without which this thesis would not be what it is now.

Secondly, I would like to express my sincere gratitude to Senior Lecturer Haoran Xie for giving me the chance to participate in several cooperative research programs, which benefited me greatly. During my two and a half years at the Miyata Lab, JAIST, I had several chances to join group research with the other students in the laboratory. Fortunately, our work was accepted by several journals, and we proudly presented it to other researchers in the same field, which is a precious memory for me.

Besides, by the honor of a recommendation, I gained the opportunity to take part in a research program run cooperatively with the Igarashi Lab at The University of Tokyo; the project is still in progress. I am sincerely grateful to Assistant Professor Tsukasa Fukusato and the Ph.D. students Chunqi Zhao and Toby Chong Long Hin of the Igarashi Lab. The idea at the very beginning of this thesis would not have been born without their inspiration in each brainstorming discussion.

Moreover, I would like to express my sincere gratitude to all the participants in the user study of this thesis. From their valuable suggestions and feedback, we learned lessons and gained practical experience. This thesis could not have proceeded without their patient cooperation.

Last but not least, I would like to extend my deep gratitude to my family, who supported me and gave me the chance to study abroad in Japan, making my accomplishments possible.


Bibliography

[1] 2D Animation Software, Flash Animation | Adobe Animate. Adobe Animate, www.adobe.com/products/animate.html.

[2] Maya Software | Computer Animation & Modeling Software | Autodesk. Autodesk Maya, www.autodesk.com/products/maya/overview?support=ADVANCED&plc=MAYA&term=1-YEAR&quantity=1.

[3] Vicon | Award Winning Motion Capture Systems. Vicon, 13 Jan. 2021, www.vicon.com.

[4] Xsens. Home - Xsens 3D Motion Tracking. Xsens, www.xsens.com.

[5] Walker, Cordie. "K-VEST 6DoF Electro Magnetic System." Golf Science Lab, 21 Apr. 2017, golfsciencelab.com/3-types-motion-capture-systems-use/kvest-6d.

[6] Gypsy 7 Electro-Mechanical Motion Capture System. Gypsy 7, metamotion.com/gypsy/gypsy-motion-capture-system.html.

[7] Kinect - Windows App Development. Microsoft Kinect, developer.microsoft.com/en-us/windows/kinect.

[8] Intel® RealSense™ Technology. Intel, www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html.

[9] Cao, Zhe, et al. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, p. 1 (2019)

[10] Fang, Hao-Shu et al. RMPE: Regional Multi-person Pose Estimation. IEEE International Conference on Computer Vision, pp.2353-2362 (2017)

[11] Romeo, Marco et al. A Reusable Model for Emotional Biped Walk-Cycle Animation with Implicit Retargeting. Articulated Motion and Deformable Objects, vol.6169 (2010)

[12] Lowe, David G. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, vol.60, pp.91-110 (2004)

[13] Lee, Y.J., Zitnick, L. and Cohen, M. ShadowDraw: Real-Time User Guidance for Freehand Drawing, ACM Transactions on Graphics, vol.30, no.4 (2011)

[14] Igarashi, T., Matsuoka, S., and Tanaka, H. Teddy: A Sketching Interface for 3D Freeform Design, SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pp.409-416 (1999)

[15] Eitz, M. et al. Sketch-Based Shape Retrieval, ACM Transactions on Graphics, vol.31, no.4 (2012)

[16] Zou, C. et al. Sketch-based Shape Retrieval using Pyramid-of-Parts, arXiv preprint arXiv:1502.04232. (2015)

[17] Xing, J. et al. Energy-Brushes: Interactive Tools for Illustrating Stylized Elemental Dynamics, UIST '16: Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp.755-766 (2016)

[18] Hu, Z.Y. et al. Sketch2VF: Sketch-Based Flow Design with Conditional Generative Adversarial Network, Computer Animation and Virtual Worlds, vol.30, no.3-4 (2019)

[19] Choi, B. et al. SketchiMo: Sketch-based Motion Editing for Articulated Characters, ACM Transactions on Graphics, vol.35, no.4 (2016)

[20] Choi, M. G., et al. Retrieval and Visualization of Human Motion Data via Stick Figures. Computer Graphics Forum, vol.31, no.7, pp.2057-2065 (2012)

[21] Choi, M.G. et al. Dynamic Comics for Hierarchical Abstraction of 3D Animation Data, Computer Graphics Forum, vol.32, no.7 (2013)

[22] Turmukhambetov, D. et al. Interactive Sketch-Driven Image Synthesis, Computer Graphics Forum, vol.34, no.8, pp.130-142 (2015)

[23] He, Z., Xie, H. and Miyata, K. Interactive Projection System for Calligraphy Practice, 2020 Nicograph International (NicoInt), pp.55-61 (2020)

[24] Yasuda, H. et al. Motion Belts: Visualization of Human Motion Data on a Timeline, IEICE Transactions on Information and Systems, vol.E91-D, no.4, pp.1159-1167 (2008)

[25] Assa, J., Caspi, Y. and Cohen-Or, D. Action Synopsis: Pose Selection and Illustration, ACM Transactions on Graphics, vol.24, no.3, pp.667-676 (2005)

[26] Bouvier-Zappa, S., Ostromoukhov, V. and Poulin, P. Motion Cues for Illustration of Skeletal Motion Capture Data, NPAR '07: Proceedings of the 5th international symposium on Non-photorealistic animation and rendering, pp.133-140 (2007)

[27] Zamora, S. and Sherwood, T. Sketch-Based Recognition System for General Articulated Skeletal Figures, SBIM '10: Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium, pp.119-126 (2010)

[28] Gleicher, M. Retargetting motion to new characters. Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp.33-42 (1998)

[29] Aberman, Kfir et al. Skeleton-Aware Networks for Deep Motion Retargeting, ACM Transactions on Graphics, vol.39, no.4 (2020)
