A Virtual Journey through 2D and 3D Elaborations Recorded with Range-Based and Image-Based Method: The Experience of Vitelleschi Palace in Tarquinia

Mariella La Mantia
DOI: 10.4018/978-1-4666-8379-2.ch021

Abstract

The aim of this study is to define a virtual journey into the vast repertoire of images and representations produced as a result of the measuring operations carried out on the Vitelleschi Palace. The chapter provides detailed information about the history of the Palace, together with an in-depth investigation of its building phases, conducted through both the analysis of historical graphic documentation and the execution of a new architectural survey. In recent decades, the field of architectural survey has undergone profound transformations made possible by the introduction and establishment of new instruments, which have sped up acquisition times and increased the amount of data collected through the high automation of measuring operations. Centuries-old buildings, products of the many stratifications that occurred over time, are the examples that lend themselves best to these procedures of investigation. In this sense, the Vitelleschi Palace, an authentic architectural masterpiece whose facades bear witness to a period of transition between different architectural styles, is a landmark case.
Chapter Preview

Background

Due to the increasing use of the new integrated digital survey techniques, quantity and speed have become the parameters of comparison and qualitative assessment, in terms of reliability and accuracy, as well as of discernment in choosing the most appropriate detection technologies. This reflects a need for possession that is typical of today's society: to have more data in less time, and to use the time saved to obtain still more data in ever-increasing amounts, in an uninterrupted process. The ease of data acquisition with these instruments, which require neither particular operating precautions nor specific field experience from the operator, commonly leads to neglecting the complications that can arise when operations are executed uncritically, or at least without being filtered by significant experience, starting from the quite frequent collection of a huge amount of redundant data that becomes extremely complex and burdensome to manage and archive.

To this must be added, and we will return to it later, the basic misunderstanding by which the measurement produced is often equated with knowledge and critical interpretation of the investigated episode. There is no doubt that the quality and reliability of the results of a survey do not depend on the numerical amount of measurement data acquired, but on the critical and selective picking of data from that set, strictly functional to the objective of the survey, to the available human and instrumental resources, and to the geometrical and spatial characteristics of the investigated episode. The now widespread use of these technologies makes it possible to respond effectively to situations that are difficult to manage, that require rapid operation, or that concern episodes located in areas that are not physically accessible.

Besides the positive aspects already mentioned, related to speed of acquisition and operational autonomy, one must also consider, for example, the long range of the instrumentation, which allows the measurement of elements even at great distances; the ability to acquire metric values together with the chromatic values of the scanned surfaces; the handiness of the instruments, with the consequent ease of transportation and operation; as well as the considerable opportunities offered by the numerous data capture and data management software packages available.

Key Terms in this Chapter

Range Based: The term refers to those techniques of data acquisition by means of a laser scanner.

Pre-Processing: The pre-processing phase consists of three main operations: registration, elaboration and decimation. Registration is the union of two or more point clouds. Elaboration comprises a set of operations: geo-referencing of the cloud on topographic points (the roto-translation in space of each cloud into the global system), filtering (elimination of the points that have a high probability of not belonging to the object surface), selection (of only the points of the cloud covering the area of interest, without any spatial resampling of these) and display of the scanworld as raw data. Decimation is the operation of reduction (resampling) of the spatial density of the points.
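
As an illustration of the filtering and decimation operations, the following sketch applies both to a point cloud held as a NumPy array; the thresholds, the stride and the random data are purely illustrative and are not taken from the chapter.

```python
import numpy as np

def filter_points(points, center, max_distance):
    """Keep only points within max_distance of a reference center.

    A crude stand-in for the filtering step: production software uses
    statistical or surface-based outlier removal, but the intent is the
    same -- discard points unlikely to belong to the object surface.
    """
    distances = np.linalg.norm(points - center, axis=1)
    return points[distances <= max_distance]

def decimate_points(points, keep_every=10):
    """Reduce spatial density by keeping one point out of every keep_every.

    Real decimation usually resamples on a regular grid; a simple stride
    shows the idea: fewer points, same overall shape.
    """
    return points[::keep_every]

# A cloud of one million points with X, Y, Z coordinates (illustrative data).
cloud = np.random.rand(1_000_000, 3) * 10.0
cloud = filter_points(cloud, center=np.array([5.0, 5.0, 5.0]), max_distance=6.0)
cloud = decimate_points(cloud, keep_every=10)
print(cloud.shape)
```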

Data Capture: Refers to the methods of automatically identifying objects, collecting data about them, and entering that data directly into computer systems. It’s the process or means of obtaining external data, particularly through analysis of images, sounds or videos. To capture data, a transducer is employed which converts, in the case of architectural survey, the actual image or the coordinates of points in space into a digital file. The file is then stored and at a later time it can be analyzed by a computer, or compared with other files in a database.

Image Based: The term refers to those techniques of multi-image processing based on digital photogrammetric shooting.

Digital Photogrammetric Shooting with Multi-Image Elaboration Technique: An innovative three-dimensional survey technology that makes it possible to obtain a three-dimensional scan from photographs without the use of laser scanners, since the point cloud is acquired directly from the digital images. The final result of this technique is a 3D model obtained from the simultaneous use of three or more images of the studied object.

Survey: In the field of architecture, the term refers to the set of operations, measurements and analysis that are carried out in order to first understand and then document an architectural episode in its totality, that is its measurable dimensions, historical complexity, structural, constructive, formal and functional characteristics.

Registration: A calculation process that uses the coordinates of target points to determine the rotation and translation of the scanworlds with respect to one another; no scale factor is determined or applied.
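
One standard way to carry out this calculation, assuming matched target coordinates are available in two scanworlds, is the SVD-based least-squares estimation of a rotation and translation (with no scale factor); the sketch below is illustrative and all names and coordinates are invented.

```python
import numpy as np

def rigid_registration(source_targets, dest_targets):
    """Estimate rotation R and translation t (no scale) so that
    R @ source + t best matches dest, given matched target points.

    Both inputs are (N, 3) arrays of corresponding target coordinates;
    this is the classic SVD-based least-squares solution.
    """
    src_centroid = source_targets.mean(axis=0)
    dst_centroid = dest_targets.mean(axis=0)
    src = source_targets - src_centroid
    dst = dest_targets - dst_centroid

    # Cross-covariance matrix and its SVD give the optimal rotation.
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (determinant -1).
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = dst_centroid - R @ src_centroid
    return R, t

# Three targets measured in two scanworlds (coordinates are illustrative).
targets_scan_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 1.0]])
targets_scan_b = np.array([[1.0, 1.0, 0.5], [1.0, 2.0, 0.5], [-1.0, 1.0, 1.5]])
R, t = rigid_registration(targets_scan_a, targets_scan_b)
aligned = (R @ targets_scan_a.T).T + t  # now expressed in scanworld B coordinates
```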

Scanworld: Cyclone software defines a scanworld as a collection of scans that are all aligned to a common coordinate system through a calculation process called registration.

Post-Processing: The phase following the pre-processing, in which the point clouds can be further processed for 2D or 3D elaborations (mesh realization).

Point Cloud: A point cloud is a set of data points in some coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates, and often are intended to represent the external surface of an object. Point clouds may be created by 3D scanners or by multi-image processing based on digital photogrammetric shooting.
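
To make the structure concrete, a coloured point cloud can be held as two parallel arrays, one with the X, Y, Z coordinates and one with the chromatic (RGB) values of the scanned surface; the numbers below are invented for illustration.

```python
import numpy as np

# Each row is one scanned point: X, Y, Z coordinates in metres (invented values).
xyz = np.array([
    [12.402, 3.118, 0.050],
    [12.405, 3.121, 0.052],
    [12.409, 3.125, 0.055],
])

# Chromatic values are often stored alongside the coordinates,
# for example as 8-bit RGB triplets.
rgb = np.array([
    [182, 160, 131],
    [184, 161, 130],
    [181, 158, 129],
], dtype=np.uint8)

# A coloured point cloud is then simply the pairing of the two arrays.
point_cloud = {"xyz": xyz, "rgb": rgb}
print(point_cloud["xyz"].shape, point_cloud["rgb"].shape)
```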

Mesh: A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles (triangle mesh), quadrilaterals, or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes.
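
A minimal sketch of the vertex/face representation of a triangle mesh follows; the coordinates are invented, and the edges are derived from the faces rather than stored explicitly.

```python
import numpy as np

# Four vertices of a small planar patch (X, Y, Z; illustrative values).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
])

# Faces are stored as indices into the vertex array; two triangles
# here form a quadrilateral patch of the surface.
faces = np.array([
    [0, 1, 2],
    [0, 2, 3],
])

# Edges can be recovered from the faces rather than stored explicitly.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
print(len(vertices), len(faces), len(edges))  # 4 vertices, 2 triangles, 5 edges
```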

Laser Scanner: A laser scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital three-dimensional models.

Reverse Modeling: The activity that makes it possible to generate a mathematical model from the scan of the actual object. The first step is the partial or total survey of the object to be reconstructed; this step becomes particularly efficient through the use of optical techniques for contactless 3D digitization. The second step consists in the elaboration of the acquired data through operations such as reduction, filling of holes, smoothing and so on, until a suitably “clean” polygon file (e.g. in STL format) is obtained.
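
One possible sketch of the second step, using the open-source trimesh library and assuming the survey has already produced a triangulated scan (the file names and the number of smoothing iterations are illustrative):

```python
import trimesh

# Load a triangulated scan of the surveyed object
# (assumes the file contains a single mesh; the name is illustrative).
mesh = trimesh.load("scan_portion.ply")

# Fill small holes and smooth the surface: the "filling of holes,
# smoothing and so on" named above.
mesh.fill_holes()
trimesh.smoothing.filter_laplacian(mesh, iterations=5)

# Export the suitably "clean" polygon file in STL format.
mesh.export("scan_portion_clean.stl")
```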

Photo Modeling: A technique that makes it possible to calculate 3D metric models from 2D photographs using the principles of digital photogrammetry. The control points chosen directly on the images are projected into a virtual space by calculating the optical lines coming from every picture. The models created by this technique are scaled and textured and can be exported in different formats. The advantages of this technique are many, first of all the speed of acquisition and processing of the object represented in the pictures; however, a high number of pictures is necessary to document the object completely, otherwise the software cannot calculate a 3D metric model.
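
The intersection of the optical lines coming from two or more pictures can be sketched as a least-squares intersection of rays in space; the camera centres and directions below are invented for illustration and do not come from the chapter.

```python
import numpy as np

def intersect_optical_lines(origins, directions):
    """Least-squares intersection of optical lines (rays) in 3D.

    Each line is given by a camera centre (origin) and a direction
    towards the control point picked on that photograph.  The returned
    point minimises the sum of squared distances to all the lines.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for origin, direction in zip(origins, directions):
        d = direction / np.linalg.norm(direction)
        # Projector onto the plane orthogonal to the line direction.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)

# Two camera centres and the optical lines towards the same control point
# on the facade (all values are illustrative).
origins = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])]
directions = [np.array([1.0, 2.0, 0.5]), np.array([-1.0, 2.0, 0.5])]
print(intersect_optical_lines(origins, directions))  # -> approximately [2. 4. 1.]
```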
