Introduction
Recent advances in 3D scanning and modeling technologies have made it easier to acquire and create 3D models. Large collections of 3D models are becoming available on the Internet and are widely used in areas such as computer-aided design (CAD), 3D animation and movies, virtual reality, medicine, and archeology. As the number of different kinds of 3D models continues to grow, there has been increasing interest in helping people search for a 3D model in large model databases. According to the type of query a user provides, existing 3D model retrieval algorithms can be classified into three categories: text-based, 3D model example-based, and 2D sketch-based (Tangelder & Veltkamp, 2008). The most common way to retrieve 3D models is text-based retrieval, which has already been adopted in some large online commercial 3D model search engines, e.g., Google Warehouse. However, text-based retrieval does not work well in many cases because textual information is often insufficient to fully describe what a model actually is. 3D model example-based retrieval requires users to input an example model as a query, and a set of 3D models is retrieved from the database by comparing geometric and topological properties with those of the query model. The performance of example-based retrieval is better than that of text-based methods, but it is not practical or user-friendly because users usually do not have a suitable example model at hand. Recently, with the emergence of portable touch-screen devices such as smartphones and tablets, sketch-based user interfaces have become a common way for humans to interact with machines. Therefore, the most intuitive and user-friendly way to retrieve 3D models is to use a 2D hand-drawn sketch as the query. Users can describe their desired 3D models by quickly drawing a 2D sketch, which greatly simplifies the retrieval process.
For these reasons, sketch-based 3D model retrieval has greater practical value than text-based and example-based methods.
Although sketching is a good way to express people's search intention, there is a large gap between the appearance of hand-drawn sketches and real 3D models. A sketch is highly iconic and simplified, consisting of an ordered set of strokes and lines. Moreover, the same model can be drawn at varying levels of abstraction and deformation because different people have different sketching styles. Thus, it is difficult to directly compare a sketch with the corresponding 3D model. Several approaches have been proposed to address these problems (Chen et al., 2010; Eitz & Alexa, 2010; Funkhouser et al., 2002; Furuya et al., 2013; Li et al., 2017; Saavedra et al., 2011; Wang, Kang, & Li, 2015; Zhu, Xie, & Fang, 2016). However, most existing sketch-based 3D model retrieval algorithms only consider traditional feature descriptors from the image retrieval paradigm to recognize human sketches, such as geometric moments (Funkhouser et al., 2002), the light field descriptor (Chen et al., 2010), HOG (Dalal et al., 2005), and SIFT. But a sketch differs from a normal image in that it lacks visual cues such as color and texture. Therefore, these existing algorithms cannot achieve semantic understanding of the query sketch or capture the real search intention of users.
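To make the limitation concrete, the descriptor pipeline these methods rely on can be illustrated with a much-simplified, HOG-style orientation histogram (a hypothetical minimal sketch, not the Dalal-Triggs HOG of the cited work, which additionally uses overlapping block normalization): the image is divided into cells, and each cell is summarized by a gradient-orientation histogram. Since a line drawing carries only edge orientation and no color or texture, such a descriptor captures stroke geometry but nothing about the semantics of the depicted object.

```python
import numpy as np

def hog_like_descriptor(img, cell=8, bins=9):
    """Simplified HOG-style descriptor for a grayscale sketch image.

    Illustrative only: unlike full HOG, there is no overlapping
    block normalization; each cell histogram is L2-normalized alone.
    """
    # Image gradients via central finite differences.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # Magnitude-weighted histogram of orientations in this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0.0, np.pi), weights=m)
            n = np.linalg.norm(hist)
            feats.append(hist / n if n > 0 else hist)
    return np.concatenate(feats)

# A 64x64 sketch containing a single horizontal stroke.
sketch = np.zeros((64, 64))
sketch[20, 10:50] = 1.0

desc = hog_like_descriptor(sketch)
print(desc.shape)  # 8x8 cells, 9 bins each -> (576,)
```

Retrieval then reduces to ranking database views by a distance (e.g., Euclidean) between such vectors, which is exactly why two sketches of the same object drawn in different styles, or a sketch and a rendered 3D view, can end up far apart in feature space.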