
Marcin Grzegorzek

    Appearance based statistical object recognition including color and context modeling
    Sensor data understanding
    Time-of-flight and depth imaging
    Artificial Intelligence in Microscopic Image Analysis
    • Time-of-flight and depth imaging

      • 329 pages
      • 12 hours of reading time

      Cameras for 3D depth imaging, using either time-of-flight (ToF) or structured light sensors, have received a lot of attention recently and have been improved considerably over the last few years. The present techniques make full-range 3D data available at video frame rates, and thus pave the way for a much broader application of 3D vision systems. A series of workshops have closely followed the developments within ToF imaging over the years. Today, depth imaging workshops can be found at every major computer vision conference. The papers presented in this volume stem from a seminar on Time-of-Flight Imaging held at Schloss Dagstuhl in October 2012. They cover all aspects of ToF depth imaging, from sensors and basic foundations, to algorithms for low level processing, to important applications that exploit depth imaging. In addition, this book contains the proceedings of a workshop on Imaging New Modalities, which was held at the German Conference on Pattern Recognition in Saarbrücken, Germany, in September 2013. A state-of-the-art report on the Kinect sensor and its applications is followed by two reports on local and global ToF motion compensation and a novel depth capture system using a plenoptic multi-lens multi-focus camera sensor.

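The depth-sensing principle behind the ToF cameras described above can be sketched in a few lines: a continuous-wave sensor infers distance from the phase shift between emitted and received modulated light. The relation below is the standard textbook formula for CW ToF sensors; the parameter names are illustrative and not taken from the proceedings themselves.

```python
import math

# Standard continuous-wave ToF depth relation:
#   d = c * delta_phi / (4 * pi * f_mod)
# Parameter names here are illustrative, not from the book.
C = 299_792_458.0  # speed of light in m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth in metres from the measured phase shift at a given
    modulation frequency."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz: float) -> float:
    """Distance at which the phase wraps around (ambiguity range)."""
    return C / (2 * mod_freq_hz)

# A 20 MHz sensor, for instance, has an unambiguous range of about 7.5 m.
```

This ambiguity range is why many ToF systems combine several modulation frequencies, a topic the motion-compensation reports in the volume touch on.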
    • Sensor data understanding

      The rapid development of sensor technology has driven societal phenomena such as User Generated Content (UGC) and the Quantified Self (QS) movement. Machine learning algorithms benefit greatly from the availability of such huge volumes of digital data. For example, new technical solutions to challenges caused by demographic change (an ageing society) can be proposed in this way, especially in the context of healthcare systems in industrialised countries. The goal of this book is to present selected algorithms for Visual Scene Analysis (VSA, processing UGC) and for Human Data Interpretation (HDI, using data produced within the QS movement), and to expose a joint methodological basis between these two scientific directions. While VSA approaches have reached impressive robustness towards human-like interpretation of visual sensor data, HDI methods are still of limited semantic abstraction power. Using selected state-of-the-art examples, this book shows the maturity of approaches towards closing the semantic gap in both areas, VSA and HDI.
    • Appearance based statistical object recognition including color and context modeling

      This dissertation presents a system for appearance-based statistical classification and localization of 3-D objects in 2-D digital images. The initial chapters define the object recognition task, outline the mathematical foundations of the system, and review existing object recognition methods. The learning phase begins with image acquisition using a hand-held camera, where object poses for modeling are computed from the training image sequence via a structure-from-motion algorithm. Unlike shape-based methods, this approach avoids segmentation steps to extract object features. Instead, objects are represented by 2-D local feature vectors derived directly from image pixel values using the wavelet transform, applicable to both gray level and color images. These features are statistically modeled with normal distributions and stored as density functions in the object models, alongside context modeling during training. In the recognition phase, the system classifies and localizes objects within scenes featuring real heterogeneous backgrounds, where the number of objects is unknown. Feature vectors are computed as in the training phase, and a maximization algorithm evaluates the learned density functions against the extracted feature vectors, identifying the classes and poses of objects in the scene. Experiments conducted on a dataset of over 40,000 images demonstrate the system's strong performance in classification and localization.
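The pipeline that blurb describes (wavelet-based local features computed from pixel values, per-class normal densities, maximum-likelihood evaluation) can be illustrated with a toy sketch. Everything below, from the 2x2 Haar features to the diagonal-covariance Gaussians and the sample patches, is illustrative and far simpler than the dissertation's actual system.

```python
# Toy sketch of appearance-based statistical recognition:
# Haar wavelet features from pixel values, per-class normal densities,
# maximum-likelihood classification. All names and data are invented
# for illustration; the dissertation's feature design is more elaborate.
import math

def haar_features(patch):
    """2x2 Haar transform of a 2x2 patch (a, b / c, d):
    approximation plus three detail coefficients."""
    a, b, c, d = patch
    return [(a + b + c + d) / 4,  # approximation (local mean)
            (a - b + c - d) / 4,  # horizontal detail
            (a + b - c - d) / 4,  # vertical detail
            (a - b - c + d) / 4]  # diagonal detail

def fit_gaussians(samples):
    """Per-dimension mean/variance of a list of feature vectors
    (a diagonal-covariance normal density per class)."""
    n, dim = len(samples), len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(dim)]
    vars_ = [max(sum((s[i] - means[i]) ** 2 for s in samples) / n, 1e-6)
             for i in range(dim)]
    return means, vars_

def log_likelihood(x, model):
    means, vars_ = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, vars_))

def classify(patch, models):
    """Return the class whose learned density best explains the features."""
    x = haar_features(patch)
    return max(models, key=lambda c: log_likelihood(x, models[c]))

# Toy training data: two classes of 2x2 patches
models = {
    "bright": fit_gaussians([haar_features(p) for p in
                             [(200, 210, 205, 198), (190, 220, 210, 200)]]),
    "dark": fit_gaussians([haar_features(p) for p in
                           [(20, 30, 25, 18), (10, 40, 30, 20)]]),
}
```

Classifying an unseen patch then amounts to evaluating each class's density at its feature vector and taking the maximum, which mirrors the maximization step described in the blurb.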