59 results
2019

Automatic segmentation and measurement methods of living stomata of plants based on the CV model

Li, Kexin; Huang, Jianping; Song, Wenlong; Wang, Jingtao; Lv, Shuai; Wang, Xiuwei
Species -
Network -
Annotation -
Outputs -
Keywords Black poplar, CV model, Image processing, Living stomata, Pore measurement, Stomata segmentation
Abstract
Background: The stomata of plants mainly regulate gas exchange and water dispersion between the interior and external environments of plants and play a major role in the plants' health. The existing methods of stomata segmentation and measurement are mostly for specialized plants. The purpose of this research is to develop a generic method for the fully automated segmentation and measurement of the living stomata of different plants. The proposed method utilizes level set theory and image processing technology and can outperform the existing stomata segmentation and measurement methods based on threshold and skeleton in terms of its versatility. Results: The single stomata images of different plants were the input of the method, and a level set based on the Chan-Vese model was used for stomatal segmentation. This allowed the morphological features of the stomata to be measured. Contrary to existing methods, the proposed segmentation method does not need any prior information about the stomata and is independent of the plant types. The segmentation results of 692 living stomata of black poplars show that the average measurement accuracies of the major and minor axes, area, eccentricity and opening degree are 95.68%, 95.53%, 93.04%, 99.46% and 94.32%, respectively. A segmentation test on dayflower (Commelina benghalensis) stomata data available in the literature was completed. The results show that the proposed method can effectively segment the stomata images (181 stomata) of dayflowers using bright-field microscopy. The fitted slope of the manually and automatically measured aperture is 0.993, and the R² value is 0.9828, which slightly outperforms the segmentation results that are given in the literature. Conclusions: The proposed automated segmentation and measurement method for living stomata is superior to the existing methods based on threshold and skeletonization in terms of versatility. The method does not need any prior information about the stomata. It is an unconstrained segmentation method, which can accurately segment and measure the stomata for different types of plants (woody or herbaceous). The method can automatically discriminate whether the pore region is independent or not and perform pore region extraction. In addition, the segmentation accuracy of the method is positively correlated with the stomata's opening degree.
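
The segmentation step described in this abstract maps onto standard tools. The sketch below is a minimal illustration using scikit-image's Chan-Vese implementation with assumed parameter values; it is not the authors' code, only an indication of how a single-stoma image could be segmented and its morphology measured.

```python
# Minimal sketch: Chan-Vese level-set segmentation of a single-stoma image,
# followed by morphological measurement. Parameter values are illustrative.
from skimage import io, img_as_float
from skimage.segmentation import chan_vese
from skimage.measure import label, regionprops

def measure_stoma(path):
    gray = img_as_float(io.imread(path, as_gray=True))

    # Chan-Vese evolves a level set from region intensities alone,
    # so no prior stomatal shape model is needed.
    seg = chan_vese(gray, mu=0.25, lambda1=1.0, lambda2=1.0,
                    max_num_iter=200, extended_output=False)

    # Keep the largest connected region and read off its shape descriptors.
    regions = regionprops(label(seg))
    if not regions:
        return None
    stoma = max(regions, key=lambda r: r.area)
    return {
        "major_axis": stoma.major_axis_length,
        "minor_axis": stoma.minor_axis_length,
        "area": stoma.area,
        "eccentricity": stoma.eccentricity,
    }
```

Because the Chan-Vese functional depends only on region intensities, no prior shape information is required, which is the property the abstract highlights.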
2019

Deep convolutional neural networks based framework for estimation of stomata density and structure from microscopic images

Bhugra, Swati; Mishra, Deepak; Anupama, Anupama; Chaudhury, Santanu; Lall, Brejesh; Chugh, Archana; Chinnusamy, Viswanathan
Species Various species
Network Deep CNN
Annotation Bounding box
Outputs High-throughput phenotyping
Keywords -
Abstract
Analysis of stomata density and its configuration based on scanning electron microscopic (SEM) images is an effective way to characterize plant behavior... (truncated for brevity)
2019

Genetic Diversity in Stomatal Density among Soybeans Elucidated Using High-throughput Technique Based on an Algorithm for Object Detection

Sakoda, Kazuma; Watanabe, Tomoya; Sukemura, Shun; Kobayashi, Shunzo; Nagasaki, Yuichi; Tanaka, Yu; Shiraiwa, Tatsuhiko
Species -
Network -
Annotation -
Outputs -
Keywords Natural variation in plants, Photosynthesis, Stomata
Abstract
Stomatal density (SD) can be a promising target to improve leaf photosynthesis in soybeans (Glycine max (L.) Merr.). In conventional SD evaluation, manually counting the stomata can be time-consuming. We aimed to develop a high-throughput technique for evaluating the SD and elucidating the variation in the SD among various soybean accessions. The central leaflet of the first trifoliolate was sampled, and microscopic images of the leaflet replica were obtained for 90 soybean accessions. The Single Shot MultiBox Detector, an object detection algorithm based on deep learning, was introduced to develop an automatic detector of the stomata in the images. The developed detector successfully recognized the stomata in the microscopic images with high throughput. Using this technique, the value of R² reached 0.90 when the manually and automatically measured SDs were compared across 150 images. This technique revealed a variation in SD from 93 ± 3 to 166 ± 4 mm⁻² among the 90 accessions. Our detector can be a powerful tool for SD evaluation in large-scale populations of crop species, accelerating the identification of useful alleles related to the SD in future breeding programs.
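
For readers who want to reproduce the density and agreement figures, the sketch below shows the arithmetic under assumed inputs: bounding boxes from any object detector (SSD in the paper) and a calibrated field-of-view area in mm². The example numbers are made up; this is not the authors' code.

```python
# Sketch: stomatal density from detector output and R^2 agreement with
# manual counts. Inputs (boxes, field-of-view area) are assumptions.
import numpy as np
from scipy import stats

def stomatal_density(boxes, fov_area_mm2):
    """Stomatal density = number of detected stomata / imaged leaf area (mm^-2)."""
    return len(boxes) / fov_area_mm2

def agreement_r2(manual_sd, auto_sd):
    """R^2 of a linear fit between manually and automatically measured densities."""
    slope, intercept, r, p, se = stats.linregress(manual_sd, auto_sd)
    return r ** 2

# Example with made-up values (the paper reports R^2 = 0.90 over 150 images).
manual = np.array([95.0, 120.0, 140.0, 160.0])
auto = np.array([98.0, 118.0, 143.0, 155.0])
print(agreement_r2(manual, auto))
```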
2019

StomataCounter: a neural network for automatic stomata identification and counting

Fetter, Karl C.; Eberhardt, Sven; Barclay, Rich S.; Wing, Scott; Keller, Stephen R.
Species -
Network -
Annotation -
Outputs -
Keywords computer vision, convolutional deep learning, neural network, phenotyping, stomata
Abstract
Stomata regulate important physiological processes in plants and are often phenotyped by researchers in diverse fields of plant biology. Currently, there are no user-friendly, fully automated methods to perform the task of identifying and counting stomata, and stomata density is generally estimated by manually counting stomata. We introduce StomataCounter, an automated stomata counting system using a deep convolutional neural network to identify stomata in a variety of different microscopic images. We use a human-in-the-loop approach to train and refine a neural network on a taxonomically diverse collection of microscopic images. Our network achieves 98.1% identification accuracy on Ginkgo scanning electron microscopy micrographs, and 94.2% transfer accuracy when tested on untrained species. To facilitate adoption of the method, we provide it through a publicly available website at http://www.stomata.science/.
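
The abstract does not spell out how counts are derived from the network output, so the sketch below only illustrates one common post-processing choice: counting well-separated local maxima in an assumed per-pixel stomata-probability heatmap. The threshold and minimum peak distance are assumptions, not values from the paper.

```python
# Sketch: turn a CNN probability heatmap into a stomata count by finding
# well-separated peaks. Heatmap values are assumed to lie in [0, 1].
from skimage.feature import peak_local_max

def count_stomata(heatmap, threshold=0.5, min_distance=20):
    peaks = peak_local_max(heatmap, min_distance=min_distance,
                           threshold_abs=threshold)
    return len(peaks), peaks  # number of stomata and their (row, col) centres
```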
2018

Automatic Quantification of Stomata for High-Throughput Plant Phenotyping

Bhugra, Swati; Mishra, Deepak; Anupama, Anupama; Chaudhury, Santanu; Lall, Brejesh; Chugh, Archana
Species -
Network -
Annotation -
Outputs -
Keywords -
Abstract
Stomatal morphology is a key phenotypic trait for analysing plants' responses to various environmental stresses (e.g., drought and salinity). Stomata exhibit diverse characteristics with respect to orientation, size, shape and varying degrees of papillae occlusion. Thus, biologists currently rely on manual or semi-automatic approaches to accurately compute their morphological traits based on scanning electron microscopic (SEM) images of the leaf surface. In contrast to these subjective and low-throughput methods, we propose a novel automated framework for stomata quantification. It is realized as a hybrid approach in which the candidate stomata region is first detected by a convolutional neural network (CNN) and the occlusion is then handled with an inpainting algorithm. In addition, we propose a stomata-segmentation-based quantification framework to solve the problem of shape, scale and occlusion in an end-to-end manner. The performance of the proposed automated frameworks is evaluated by comparing the derived traits with manually computed morphological traits of stomata. With no prior information about stomatal size and location, the hybrid and end-to-end machine learning frameworks show correlations of 0.94 and 0.93, respectively, on rice stomata images. Furthermore, they successfully enable wheat stomata quantification, showing generalizability across cultivars.
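
The two ideas in the hybrid framework, occlusion removal and agreement with manual traits, can be sketched as follows. The occlusion mask and the OpenCV inpainting call are assumptions made for illustration; this is not the authors' implementation.

```python
# Sketch: fill papillae-occluded pixels inside a detected stoma crop by
# inpainting, then compare derived traits against manual measurements.
import cv2
from scipy.stats import pearsonr

def remove_occlusion(crop_bgr, occlusion_mask):
    """Fill occluded pixels (uint8 mask > 0) from the surrounding texture."""
    return cv2.inpaint(crop_bgr, occlusion_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

def trait_correlation(manual_traits, auto_traits):
    """Pearson correlation between manually and automatically computed traits."""
    r, _ = pearsonr(manual_traits, auto_traits)
    return r
```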
2018

DeepStomata: Facial Recognition Technology for Automated Stomatal Aperture Measurement

Toda, Yosuke; Toh, Shigeo; Bourdais, Gildas; Robatzek, Silke; Maclean, Dan; Kinoshita, Toshinori
Species Various species
Network CNN
Annotation Semantic segmentation
Outputs DeepStomata software
Keywords -
Abstract
Stomata are an attractive model for studying physiological responses of plants to various environmental stimuli... (truncated for brevity)
2018

StomataCounter: a deep learning method applied to automatic stomatal identification and counting

Fetter, Karl C.; Eberhardt, Sven; Barclay, Rich S.; Wing, Scott; Keller, Stephen R.
Species Various species
Network Deep Convolutional Neural Networks
Annotation Not specified
Outputs StomataCounter software tool
Keywords -
Abstract
StomataCounter is a deep learning method applied to automatic stomatal identification and counting, enhancing the efficiency and accuracy of stomatal analysis.
2017

Microscope image based fully automated stomata detection and pore measurement method for grapevines

Jayakody, Hiranya; Liu, Scarlett; Whitty, Mark; Petrie, Paul
Species -
Network -
Annotation -
Outputs -
Keywords Automatic stomata detection, Cascade object detection, Grapevines, Image processing, Machine learning, Skeletonization, Stomata, Stomatal morphology
Abstract
Background: Stomatal behavior in grapevines has been identified as a good indicator of the water stress level and overall health of the plant. Microscope images are often used to analyze stomatal behavior in plants. However, most of the current approaches involve manual measurement of stomatal features. The main aim of this research is to develop a fully automated stomata detection and pore measurement method for grapevines, taking microscope images as the input. The proposed approach, which employs machine learning and image processing techniques, can outperform available manual and semi-automatic methods used to identify and estimate stomatal morphological features. Results: First, a cascade object detection learning algorithm is developed to correctly identify multiple stomata in a large microscopic image. Once the regions of interest which contain stomata are identified and extracted, a combination of image processing techniques is applied to estimate the pore dimensions of the stomata. The stomata detection approach was compared with an existing fully automated template matching technique and a semi-automatic maximally stable extremal regions approach, with the proposed method clearly surpassing the performance of the existing techniques with a precision of 91.68% and an F1-score of 0.85. Next, the morphological features of the detected stomata were measured. Contrary to existing approaches, the proposed image segmentation and skeletonization method allows us to estimate the pore dimensions even in cases where the stomatal pore boundary is only partially visible in the microscope image. A test conducted using 1267 images of stomata showed that the segmentation and skeletonization approach was able to correctly identify the stoma opening 86.27% of the time. Further comparisons made with manually traced stoma openings indicated that the proposed method is able to estimate stomata morphological features with accuracies of 89.03% for area, 94.06% for major axis length, 93.31% for minor axis length and 99.43% for eccentricity. Conclusions: The proposed fully automated solution for stomata detection and measurement is able to produce results far superior to existing automatic and semi-automatic methods. This method not only produces a low number of false positives in the stomata detection stage, it can also accurately estimate the pore dimensions of partially incomplete stomata images. In addition, it can process thousands of stomata in minutes, eliminating the need for researchers to manually measure stomata, thereby accelerating the process of analyzing plant health.
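
A rough sketch of the segmentation-plus-skeletonization idea, built from scikit-image primitives with assumed thresholds and minimum object sizes; it is not the published pipeline, but it shows how a skeleton can still give an axis estimate when the pore boundary is only partially visible.

```python
# Sketch: segment the dark pore inside a detected stoma crop, then use the
# skeleton as a robust axis estimate. Threshold choice and min_size are
# assumptions for illustration only.
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, skeletonize
from skimage.measure import label, regionprops

def measure_pore(stoma_crop_gray):
    t = threshold_otsu(stoma_crop_gray)
    pore = stoma_crop_gray < t                      # pore appears darker
    pore = remove_small_objects(pore, min_size=50)  # drop speckle noise

    regions = regionprops(label(pore))
    if not regions:
        return None
    r = max(regions, key=lambda x: x.area)

    # The skeleton's pixel count approximates the pore's major axis even when
    # the boundary is incomplete.
    skel_length = int(skeletonize(pore).sum())

    return {
        "area": r.area,
        "major_axis": r.major_axis_length,
        "minor_axis": r.minor_axis_length,
        "eccentricity": r.eccentricity,
        "skeleton_length": skel_length,
    }
```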
-

Stomatal Feature Extraction of Lettuce Leaves Using Improved U-Net Network

Zhang, Xihai; Zhang, Ruwen; Cheng, Jin; Gong, Xinjing; Guo, Ruichao; Wang, Hao; Chen, Zerui; Zhu, Jiaxi; Xia, Juheng
Species -
Network -
Annotation -
Outputs -
Keywords CBAM, Deep learning, Feature extraction, Stomata, U-Net
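
No abstract is listed for this entry, so the following is only a generic sketch of a CBAM attention block of the kind typically inserted into U-Net encoder or decoder stages; it should not be read as the architecture used in the paper.

```python
# Generic CBAM (Convolutional Block Attention Module) sketch in PyTorch:
# channel attention from pooled descriptors, then spatial attention from a
# 7x7 convolution. Reduction ratio and kernel size are conventional defaults.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                      # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial attention
```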