Investigating unimodal isolated signer-independent sign language recognition
- Authors: Marais, Marc Jason
- Date: 2024-04-04
- Subjects: Convolutional neural network , Sign language recognition , Human activity recognition , Pattern recognition systems , Neural networks (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435343 , vital:73149
- Description: Sign language serves as the mode of communication for the Deaf and Hard of Hearing community, embodying a rich linguistic and cultural heritage. Recent Sign Language Recognition (SLR) system developments aim to facilitate seamless communication between the Deaf community and the broader society. However, most existing systems are limited by signer-dependent models, hindering their adaptability to diverse signing styles and signers and thus impeding their practical implementation in real-world scenarios. This research explores various unimodal approaches, both pose-based and vision-based, for isolated signer-independent SLR using RGB video input on the LSA64 and AUTSL datasets. The unimodal RGB-only input strategy provides a realistic SLR setting where alternative data sources are either unavailable or necessitate specialised equipment. Through systematic testing scenarios, isolated signer-independent SLR experiments are conducted on both datasets, primarily focusing on AUTSL – a signer-independent dataset. The vision-based R(2+1)D-18 model emerged as the top performer, achieving 90.64% accuracy on the unseen AUTSL test split, closely followed by the pose-based Spatio-Temporal Graph Convolutional Network (ST-GCN) model with an accuracy of 89.95%. Furthermore, these models achieved comparable accuracies at a significantly lower computational demand. Notably, the pose-based approach demonstrates robust generalisation to substantial background and signer variation. Moreover, the pose-based approach demands significantly less computational power and training time than vision-based approaches. The proposed unimodal pose-based and vision-based systems were both concluded to be effective at classifying sign classes in the LSA64 and AUTSL datasets. , Thesis (MSc) -- Faculty of Science, Ichthyology and Fisheries Science, 2024
- Full Text:
- Date Issued: 2024-04-04
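The record above names a vision-based R(2+1)D-18 spatio-temporal CNN as the top performer on AUTSL. As a minimal, hypothetical sketch of that kind of setup (not the thesis' exact training recipe), the snippet below loads a Kinetics-pretrained R(2+1)D-18 from torchvision and swaps its classification head; the 226-class output assumes AUTSL's sign vocabulary, and 64 would be the corresponding value for LSA64.

```python
# Hypothetical transfer-learning sketch for isolated SLR on RGB clips.
# Assumes AUTSL's 226 sign classes; this is illustrative, not the thesis' recipe.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

NUM_CLASSES = 226  # assumed AUTSL vocabulary; use 64 for LSA64

model = r2plus1d_18(weights="KINETICS400_V1")               # pretrained spatio-temporal backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)     # new sign-classification head

# Video input is a (batch, channels, frames, height, width) RGB tensor.
clip = torch.randn(2, 3, 16, 112, 112)
logits = model(clip)
print(logits.shape)  # torch.Size([2, 226])
```

For the pose-based branch described in the abstract, the analogous step would feed extracted skeleton keypoint sequences into an ST-GCN instead of raw RGB clips.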
Selected medicinal plants leaves identification: a computer vision approach
- Authors: Deyi, Avuya
- Date: 2023-10-13
- Subjects: Deep learning (Machine learning) , Machine learning , Convolutional neural network , Computer vision in medicine , Medicinal plants
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/424552 , vital:72163
- Description: Identifying and classifying medicinal plants are valuable and essential skills in drug manufacturing because several active pharmaceutical ingredients (APIs) are sourced from medicinal plants. For many years, identifying and classifying medicinal plants have been done exclusively by experts in the domain, such as botanists and herbarium curators. Recently, powerful computer vision technologies, using machine learning and deep convolutional neural networks, have been developed for classifying or identifying objects in images. A convolutional neural network is a deep learning architecture that outperforms previous approaches in image classification and object detection thanks to its efficient feature extraction from images. In this thesis, we investigate different convolutional neural networks and machine learning algorithms for identifying and classifying leaves of three species of the genus Brachylaena. The three species considered are Brachylaena discolor, Brachylaena ilicifolia and Brachylaena elliptica. All three species are used medicinally by people in South Africa to treat diseases such as diabetes. From 1259 labelled images of these plant species (at least 400 per species) split into training, evaluation and test sets, we trained and evaluated different deep convolutional neural networks and machine learning models. The VGG model achieved the best results, with 98.26% accuracy from cross-validation. , Thesis (MSc) -- Faculty of Science, Mathematics, 2023
- Full Text:
- Date Issued: 2023-10-13
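The record above reports VGG as the best-performing model on the three Brachylaena species. Below is a minimal, hypothetical transfer-learning sketch in that spirit: an ImageNet-pretrained VGG-16 from torchvision with its final layer replaced for three leaf classes. The exact VGG variant, preprocessing, and cross-validation protocol used in the thesis may differ.

```python
# Hypothetical sketch: fine-tuning VGG-16 for three Brachylaena leaf classes.
# Illustrative only; class count follows the abstract, the rest is assumed.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 3  # B. discolor, B. ilicifolia, B. elliptica

model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Typical preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

batch = torch.randn(4, 3, 224, 224)   # stand-in for a batch of preprocessed leaf images
print(model(batch).shape)             # torch.Size([4, 3])
```

Cross-validated accuracy, as quoted in the abstract, would then be estimated by repeating this fine-tuning over the dataset folds.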