1. Introduction
Autism spectrum disorder (ASD) is a common, highly heritable and heterogeneous neurodevelopmental disorder (Lord et al., 2020). Most individuals with ASD will not work full time or live independently, which imposes a huge financial burden (Lord et al., 2018). ASD is primarily treated through education and behavioral services, with medication as an important adjunct. Early detection of autistic patients, especially children, helps to make the right treatment plan at the right time. ASD can be diagnosed by various professionals (paediatricians, psychiatrists, or psychologists), ideally with input from multiple disciplines (Lord et al., 2020). In many individuals, symptoms emerge during the second and third year of life; in others, onset might not be noticed until the child reaches school age or later (Lord et al., 2018). Although it is possible to identify some children with autism before parents or professionals have identified concerns, diagnosis is missed in many children (Øien et al., 2018). Even though the majority of children with ASD in northern Europe and North America are diagnosed by early school age, there remain others who have never had a diagnosis (Lord et al., 2020). This delays education and intervention for these children. Therefore, a simple and effective way to screen for autism before professional examination would be very helpful in reducing missed detections.
Facial features have been studied as a basis for the diagnosis of autism. Aldridge's team found that, in the ASD group: (1) the linear distances connecting the glabella and nasion to the inner canthi, and those connecting the nasion with landmarks on the nose and philtrum, were significantly reduced; (2) the linear distances connecting the landmarks on the mouth with the inferior nasal region, and those connecting the inner and outer canthi and the lateral upper face with the eyes and the contralateral side of the mouth, were significantly increased (Aldridge et al., 2011). Significant facial asymmetry has been found in boys with ASD, notably depth-wise in the supra- and periorbital regions anterior to the frontal pole of the right hemisphere of the brain (Hammond et al., 2008). Other research teams found that: (1) children with high functioning autism (HFA) lack richness and variability in their facial expression patterns (Guha et al., 2018); (2) children with ASD more frequently displayed neutral expressions compared to children without ASD, who displayed all other expressions more often, a difference driven by non-ASD children more often displaying raised eyebrows and an open mouth, characteristic of engagement/interest (Carpenter et al., 2021). These studies provide evidence for detecting autistic children through facial features and expressions.
It is difficult for humans to observe facial differences between children with autism and non-autistic children, so it is hard for human experts to screen for autism by inspecting children's facial photographs. The method proposed in this research can quickly screen out suspected autistic children by using a convolutional neural network to recognize children's facial images; experts can then use scales or other methods to further diagnose these suspected autistic children. The advantage of doing so is that it can quickly screen children suspected of autism, avoid sending non-autistic children to a doctor for examination, reduce the burden on doctors, save medical resources, and reduce missed diagnoses in children with autism.
With the progress and popularization of mobile technology, it is convenient to use a mobile phone to photograph and analyze a child's face and then conclude whether the child is autistic. MobileNet series models are suitable for running on mobile devices. The AUC is calculated over all possible cut-off values and is more robust than accuracy, so it is more reasonable to use the AUC to measure the quality of a model. In existing research that uses MobileNet series models to classify the Kaggle images of autistic children, there is still room for improvement in the AUC. Moreover, the image sizes used in these studies are large, requiring more memory and faster processors, so they are not well suited to processing on a mobile phone. To overcome these shortcomings, a facial image classification algorithm for autistic children based on two-phase transfer learning is proposed. The contributions of this research are as follows:
i. An algorithm framework of facial image classification for autistic children is proposed, which combines the two-phase transfer learning mode and the multi-classifier integration mode to improve the accuracy and the AUC of the classifiers.
ii. A calculation method is proposed to integrate the classification results of MobileNetV2 and MobileNetV3-Large into a final classification result.
iii. The reasons why the two-phase transfer learning and multi-classifier integration can improve the classification performance of the model are given.
The rest of this paper is organized as follows. Section 2 reviews related works. Section 3 describes the materials and methods used in this research: the children's facial image dataset, two deep learning models (MobileNetV2 and MobileNetV3-Large), two transfer learning modes (the two-phase transfer learning mode and the multi-classifier integration mode), and the proposed deep transfer learning method and its implementation framework. Section 4 presents and discusses the results of the two-phase transfer learning experiment and the multi-classifier integration experiment. Section 5 concludes the paper.
2. Related works
The diagnostic methods of autism can be classified into two groups. One group can be summarized as observational, subjective and sometimes qualitative, like questionnaires, observation scales, interviews and developmental tests. The other group can be summarized as direct, objective and mostly quantitative, consisting of methods that are technology based and measure basic cognitive and neurological processes and structures (Bölte et al., 2016). In recent years, machine learning algorithms have been more and more widely used in the biomedical field, and their application has attracted particular attention in psychiatry, for example as a diagnostic tool for ASD (Moon et al., 2019). Machine learning algorithms belong to the second group of methods. ASD can be diagnosed using machine learning by analyzing genes (Gunning and Pavlidis, 2021; Lin et al., 2021), the brain (Yahata et al., 2016; Eslami et al., 2019; Payabvash et al., 2019; Conti et al., 2020; Jiao et al., 2020; Doi et al., 2021; ElNakieb et al., 2021; Garbulowski et al., 2021; Gui et al., 2021; Leming et al., 2021; Liu et al., 2021; Nunes et al., 2021; Shi et al., 2021; Takahashi et al., 2021; Ali et al., 2022; Alves et al., 2023; ElNakieb et al., 2023; Martinez and Chen, 2023), the retina (Lai et al., 2020), eye activity (Vabalas et al., 2020; Cilia et al., 2021; Liu et al., 2021; Kanhirakadavath and Chandran, 2022), facial activity (Carpenter et al., 2021), human behavior (Tariq et al., 2018; Drimalla et al., 2020) or movement (Alcañiz Raya et al., 2020). Quite a few researchers have reviewed the application of machine learning in autism detection. Moon et al. systematically reviewed, analyzed and summarized the available evidence on the accuracy of machine learning algorithms in diagnosing autism (Moon et al., 2019). Jacob et al. (2019) reviewed machine learning methods to simplify the diagnosis of ASD, distinguish similarities and differences with comorbid diagnoses, track developmental outcomes and change, and discussed supervised machine learning models for classification and unsupervised methods for identifying new dimensions and subgroups. Siddiqui et al. (2021) summarized recent research and technology using machine learning based strategies to screen for ASD in infants and children under 18 months old, identified gaps that can be addressed in the future, and suggested that the application of machine learning and artificial intelligence in infant autism screening is still in its infancy.
In recent years, deep learning has become a new research direction in machine learning, and the Convolutional Neural Network (CNN) technology of deep learning has developed rapidly. Its advantage is that it avoids the complex feature extraction and parameter setting of traditional methods. Many medical image problems can be solved by modifying the basic structure of a CNN without designing a model from scratch. Deep learning requires a large number of samples, while transfer learning requires only a small number: a model trained on a large dataset can be transferred to tasks with few samples and fine-tuned by retraining to obtain an accurate model. Therefore, transfer learning has become a new research hot spot in machine learning after deep learning. Deep learning and transfer learning have been applied to the detection of autism through facial features. Akter et al. obtained the facial image dataset of autistic children from Kaggle and used several machine learning classifiers as baselines as well as six pre-trained CNNs. Their experimental data show that among all baseline classifiers, MobileNetV1 has the highest accuracy and AUC, both 90.67%; the accuracy and AUC of MobileNetV2 are both 64.67%, much lower than those of MobileNetV1 (Akter et al., 2021). Lu and Perkowski (2021) proposed a practical solution for ASD screening using facial images, applying VGG16 to a unique dataset of clinically diagnosed ASD children that they collected; VGG16 produced a classification accuracy of 95% and an F1 score of 0.95. In their dataset, the facial images of autistic children came from an autism rehabilitation center in China, and the facial images of non-autistic children came from several kindergartens and primary schools in China. Hosseini et al. introduced a deep learning model, MobileNet, which classifies children as healthy or potentially autistic according to their facial images, with an accuracy of 94.6% (Hosseini et al., 2022), but the AUC of the model was not given; their dataset is also from Kaggle. Mujeeb Rahman and Subashini (2022) used five pre-trained CNN models as feature extractors and a Deep Neural Network (DNN) model as a classifier to identify children with autism, and their experimental results show that Xception is superior to the other models, with an accuracy of 90% and an AUC of 96.63%; a public dataset from Kaggle was used to train the models. Alsaade and Alzahrani (2022) used a simple web application based on a deep learning system, namely a CNN with transfer learning and the Flask framework, to detect ASD children. Xception, VGG19 and NASNetMobile were the pre-trained models for the classification tasks, and the Xception model achieved the highest accuracy, 91%; the research did not provide the AUC for each model and also used the public children's facial images from Kaggle.
3. Materials and methods
A deep transfer learning method is proposed to detect children's autism. This method uses the two-phase transfer learning mode and the multi-classifier integration mode to improve performance, and classifies children's facial images into two categories: autism and normal.
3.1. Dataset
The dataset used in this research is from Kaggle and provided by Piosenka (2021); most images were downloaded from autism-related websites and Facebook pages. The dataset consists of facial images of autistic and non-autistic children. The images are mainly of children in Europe and the United States, with fewer images of children from other regions. Both the autistic and non-autistic images include boys and girls, and the number of images of non-autistic children equals that of autistic children. The sizes of the images in the dataset differ, so they need to be converted to a unified size when training a model.
In this research, the task of the intermediate domain of the two-phase transfer learning is facial expression recognition of normal people, and the size of the images used in that task is 44 × 44. In order to reuse the model parameters trained on the intermediate domain, the children's facial images in the target-domain task of identifying autistic children are compressed to 44 × 44. There are 2,940 images in the children's facial image dataset, divided into a training set and a test set. The training set has 2,340 images (79.5% of the whole dataset), in which normal and autistic images each account for 50% (1,170 images each). The test set has 600 images (20.5% of the whole dataset), in which normal and autistic images each account for 50% (300 images each).
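As an illustration, the resizing and the balanced train/test split described above could be implemented as in the following PyTorch sketch; the directory layout, folder names, and batch size are assumptions, not the authors' actual setup:

```python
# A minimal sketch of the dataset preparation, assuming the images are stored
# in per-class subfolders under hypothetical "train/" and "test/" directories.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((44, 44)),   # compress every image to the unified 44 x 44 size
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("autism_faces/train", transform=preprocess)  # 2,340 images
test_set = datasets.ImageFolder("autism_faces/test", transform=preprocess)    # 600 images

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)
```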
3.2. The deep learning models
The MobileNet series includes MobileNetV1, MobileNetV2 and MobileNetV3. MobileNetV2 is an improvement on MobileNetV1 and also uses the depthwise separable convolution. This convolution not only reduces the computational complexity of a model but also greatly reduces its size. The improvements of MobileNetV2 over MobileNetV1 are all aimed at obtaining more features: (1) a 1×1 expansion convolution layer is added before the depthwise convolution to increase the number of channels and thereby obtain more features; (2) a 1×1 projection convolution layer is added after the depthwise convolution to reduce the dimension and thereby reduce the amount of calculation; (3) a linear activation function is used instead of the ReLU function to prevent feature loss (Sandler et al., 2018). These changes form the inverted residual bottleneck structure. Therefore, although the structure of MobileNetV2 is more complex than that of MobileNetV1, it obtains features more efficiently.
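For clarity, the depthwise separable convolution that underpins the MobileNet family can be sketched as follows (a generic PyTorch illustration, not the exact MobileNetV2 block):

```python
# Depthwise separable convolution: a per-channel 3x3 depthwise convolution
# followed by a 1x1 pointwise convolution, which greatly reduces computation
# and model size compared with a standard convolution.
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                  padding=1, groups=in_ch, bias=False),  # depthwise: one filter per channel
        nn.BatchNorm2d(in_ch),
        nn.ReLU6(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # pointwise: mix channels
        nn.BatchNorm2d(out_ch),
    )
```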
MobileNetV3 is an improvement on MobileNetV2, aiming to improve calculation speed without losing features. The improvements are as follows: (1) squeeze-and-excitation is introduced into the bottleneck residual block of MobileNetV2, so that the model can automatically learn the importance of different channel features; (2) the last several layers of MobileNetV2 are improved by deleting the original linear bottleneck to simplify the model structure, and the number of convolution kernels in the first convolution layer is reduced from 32 to 16 to improve calculation speed; (3) the Swish and Sigmoid activation functions are replaced with hard Swish and hard Sigmoid, which simplifies calculation without reducing accuracy. MobileNetV3 is defined as two models, MobileNetV3-Large and MobileNetV3-Small, for high-resource and low-resource use cases, respectively (Howard et al., 2019). MobileNetV3-Large is used in this research.
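The hard activation functions in improvement (3) are piecewise-linear approximations; following their definitions in Howard et al. (2019), they can be written as:

```python
# Hard approximations used by MobileNetV3 (Howard et al., 2019):
#   hard_sigmoid(x) = ReLU6(x + 3) / 6
#   hard_swish(x)   = x * hard_sigmoid(x)
# Both avoid the exponential in sigmoid/swish, simplifying computation.
import torch.nn.functional as F

def hard_sigmoid(x):
    return F.relu6(x + 3.0) / 6.0

def hard_swish(x):
    return x * hard_sigmoid(x)
```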
3.3. The transfer learning mode
3.3.1. The two-phase transfer learning mode
The application research of transfer learning can be grouped into the one-phase transfer learning mode and the two-phase transfer learning mode. The former has a source domain and a target domain and transfers knowledge once; the latter has a source domain, an intermediate domain and a target domain, and transfers knowledge twice: first from the source domain to the intermediate domain, and then from the intermediate domain to the target domain. The two-phase transfer learning mode is shown in Figure 1A. In this figure, the transfer learning strategy refers to how the model is adjusted, including structural adjustment and parameter adjustment. Structural adjustment refers to modifying the structure of the model, such as deleting or adding layers. Parameter adjustment refers to retraining the model on the target-domain data to adjust its parameters, including those of the convolution layers and the FC layer (Li and Song, 2022). The two-phase transfer learning mode includes three learning stages, corresponding to the three domains. A model is trained in the three domains in turn, and its training in each domain builds on the training results of the previous domain, so the knowledge learned by the model accumulates continuously. In terms of knowledge accumulation, the model obtains more knowledge through the two-phase transfer learning than through the one-phase transfer learning. Several medical image classification studies based on the two-phase transfer learning are consistent with this conclusion, such as those of Li et al. (2021), Chu et al. (2018), and Zhang et al. (2020). Chu et al. used the two-phase transfer learning to study the classification of breast cancer. The transferred model was VGG16; the source domain was the classification task on ImageNet, the intermediate domain was the classification task on a digital breast X-ray image database, and the target domain was the classification task on a breast MRI tumor dataset. The experimental results reveal that the two-phase transfer learning model is more effective than the one-phase transfer learning model (Chu et al., 2018). Zhang et al. studied COVID-19 detection based on chest X-ray images. The transferred model was ResNet34; the source domain was the classification task on ImageNet, the intermediate domain was the classification of chest X-ray images (pneumonia vs. normal), and the target domain was the classification of a COVID-19 image dataset (COVID-19, other pneumonia, and normal) (Zhang et al., 2020). The experimental data reveal that the two-phase transfer learning strategy improves deep learning models based on X-ray images (Zhang et al., 2020).
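To make the two-phase schedule concrete, the following PyTorch sketch shows the two transfers (the `train` helper and the data loaders are hypothetical placeholders; as discussed later in Section 4.3, only the structure is carried into the intermediate domain, while structure and parameters are carried into the target domain):

```python
import torch.nn as nn
from torchvision import models

# Phase 1: source (ImageNet) -> intermediate domain (facial expression recognition).
# Only the feature-extractor structure is transferred, so no pretrained weights
# are loaded; a new 7-node FC head replaces the 1,000-node ImageNet head.
model = models.mobilenet_v2(weights=None)
model.classifier[-1] = nn.Linear(model.last_channel, 7)
train(model, expression_loader)        # hypothetical helper: train on CK+ expressions

# Phase 2: intermediate domain -> target domain (autism vs. normal).
# Structure AND parameters of the feature extractor are reused; only the
# FC head is rebuilt, this time with 2 nodes.
model.classifier[-1] = nn.Linear(model.last_channel, 2)
train(model, autism_loader)            # fine-tune all parameters on children's faces
```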
Figure 1. The transfer learning modes. (A) The two-phase transfer learning mode and (B) the multi-classifier integration mode (Li and Song, 2022).
3.3.2. The multi-classifier integration mode
In the multi-classifier integration mode, several classifiers give different results for the same sample, and these results are then combined by some method into a final result. The advantage of the multi-classifier integration mode is that it fully considers the classification result of every classifier, so it is more reliable than a single classifier. The multi-classifier integration mode is shown in Figure 1B. Several medical image classification studies based on the multi-classifier integration mode are consistent with this conclusion, such as research on skin diseases (Mahbod et al., 2020) and brain tumors (Hao et al., 2021). Mahbod's team proposed a skin lesion classification method that adopts three-level averaging: the first level is a 5-fold average, the second level averages over six input image sizes, and the last level averages three classifiers (EfficientNetB0, EfficientNetB1, and SeResNeXt-50). The final average probability thus comes from 90 (5 × 6 × 3) models (Mahbod et al., 2020). Hao's team studied the classification of brain tumors, fine-tuning AlexNet with three different learning rates on a labeled dataset to obtain three CNNs. First, the three CNNs give three prediction probabilities for each unlabeled sample; then the individual entropy and pairwise KL divergence of every sample are calculated. The uncertainty score is the sum of the entropy and KL divergences of each sample, and the uncertainty scores are used to help label unlabeled samples (Hao et al., 2021).
The multi-classifier integration is a kind of ensemble learning. The advantage of ensemble learning is that it obtains better performance than a single classifier by integrating multiple "good but different" classifiers: the individual classifiers should not be too weak, and they should be diverse, that is, they should differ from one another. Data experiments show that the sensitivity of MobileNetV3-Large is significantly higher than its specificity (see Tables 2, 3 in Section 4.1), while the sensitivity of MobileNetV2 is significantly lower than its specificity (see Tables 7, 8 in Section 4.1). The recognition rates of the two models for the two categories therefore differ: the former is better at recognizing non-autistic children, and the latter is better at recognizing autistic children. Therefore, integrating the results of the two classifiers can improve classification performance.
Table 1. The running time of the integrated classifier.
Table 2. The performance metrics of MobileNetV3-Large in the one-phase transfer learning.
Table 3. The performance metrics of MobileNetV3-Large in the two-phase transfer learning.
3.4. The proposed deep transfer learning method and its implementation framework
In this research, a deep transfer learning method based on the two-phase transfer learning mode and the multi-classifier integration mode is proposed. Because MobileNetV2 and MobileNetV3-Large have achieved high performance in image classification on ImageNet, they have good structures for identifying 1,000 image categories and are suitable as transferred models; therefore, they are used as the classification models. First, the two-phase transfer learning mode is used to improve the classification performance of these two models, and then the multi-classifier integration mode is used to integrate them to further improve classification performance.
The implementation framework of the proposed deep transfer learning method is shown in Figure 2. In this figure, a dotted line divides the graph into upper and lower parts. The upper part demonstrates the two-phase transfer learning process, and the lower part demonstrates the multi-classifier integration process. The three dashed boxes in the upper part represent the source domain, intermediate domain and target domain of the two-phase transfer learning, with two models in each box. The FC layer of each model in the intermediate domain is newly constructed rather than carried over from the source domain, and the FC layer of each model in the target domain is likewise newly constructed, different from those of the intermediate and source domains. The FC layer has 1,000 nodes in the source domain, 7 nodes in the intermediate domain, and 2 nodes in the target domain.
Figure 2. Implementation framework of transfer learning using the two-phase transfer mode and the multi-classifier integration mode ('c' refers to the convolution layer; 'bottleneck' refers to the basic building block of MobileNetV2, i.e., the bottleneck residual block; 'bneck' refers to the basic building block of MobileNetV3-Large; 'AP' refers to avgpool; 'p' refers to pool; 'FC' refers to the FC layer).
The task of normal facial expression recognition is similar to the task of detecting autistic children through facial expression: both are facial expression recognition, but the former recognizes the expressions of ordinary people while the latter recognizes the expressions of autistic children. Therefore, facial expression recognition can serve as the intermediate task of the two-phase transfer learning. In the first phase, the source domain is the classification task of 1,000 classes of images in ImageNet, and the target domain is the expression recognition task on the CK+ facial expression dataset. The transferred knowledge is the structure of the feature extraction layers (the layers before the FC layer) of MobileNetV2 and MobileNetV3-Large. Since the source domain classifies 1,000 categories while the target domain classifies only 7, the classification layer is reconstructed in the target domain as an FC layer with 7 nodes for the 7 categories of facial expressions. The source domain of the second-phase transfer is the target domain of the first phase, and its target domain is the two-category classification task of non-autistic and autistic children. The transferred knowledge is the structure and parameters of the feature extraction layers of the models trained on the facial expression dataset; the purpose is to improve the performance of the models. Facial expression recognition is a seven-category task while the target domain is a two-category task, so the classification layer is reconstructed in the target domain as an FC layer with 2 nodes.
In order to effectively integrate the classification results of MobileNetV2 and MobileNetV3-Large in the multi-classifier integration mode, a calculation method is proposed that uses the classification probabilities of every classifier to calculate the final probability and determine the category label. Suppose there are $n$ classifiers and $m$ categories, and the classification probability of classifier $C_i$ for a sample is $\{P_{i1}, P_{i2}, \ldots, P_{im}\}$; then the classification probability of the integrated classifiers is as shown in formula (1):
$$\left\{\frac{1}{n}\sum_{i=1}^{n}P_{i1},\ \frac{1}{n}\sum_{i=1}^{n}P_{i2},\ \cdots,\ \frac{1}{n}\sum_{i=1}^{n}P_{im}\right\}\tag{1}$$
If $\frac{1}{n}\sum_{i=1}^{n}P_{ir}=\max\left\{\frac{1}{n}\sum_{i=1}^{n}P_{i1},\ \frac{1}{n}\sum_{i=1}^{n}P_{i2},\ \cdots,\ \frac{1}{n}\sum_{i=1}^{n}P_{im}\right\}$, the sample label is $r$. For the two-category classification task using two models as classifiers, $n=2$ and $m=2$, and the classification probability of the integrated classifier is as shown in formula (2):
$$\left\{\frac{1}{2}\sum_{i=1}^{2}P_{i1},\ \frac{1}{2}\sum_{i=1}^{2}P_{i2}\right\}\tag{2}$$
If $\frac{1}{2}\sum_{i=1}^{2}P_{i1}=\max\left\{\frac{1}{2}\sum_{i=1}^{2}P_{i1},\ \frac{1}{2}\sum_{i=1}^{2}P_{i2}\right\}$, the category label is 0; otherwise it is 1.
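In code, formulas (1) and (2) amount to averaging the classifiers' softmax probabilities and taking the argmax; a sketch follows, assuming each model outputs logits of shape (batch, m):

```python
import torch
import torch.nn.functional as F

def integrate(classifiers, x):
    # Formula (1): average the class probabilities of the n classifiers.
    probs = [F.softmax(model(x), dim=1) for model in classifiers]
    mean_probs = torch.stack(probs).mean(dim=0)   # shape (batch, m)
    # The predicted label r is the category with the maximum averaged probability.
    labels = mean_probs.argmax(dim=1)
    return mean_probs, labels

# Usage with n = 2, m = 2 as in formula (2):
# mean_probs, labels = integrate([mobilenet_v2, mobilenet_v3_large], images)
```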
Since the size of the images used in this research is 44 × 44, while the standard input size of MobileNetV2 and MobileNetV3-Large is 224 × 224, it is necessary to modify the input and output of each layer of the two models. The structure of the modified MobileNetV2 is shown in Table 4, and that of the modified MobileNetV3-Large in Table 5. In Table 4, the value of K is 7 in the first phase of transfer learning, i.e., seven kinds of expression classification are performed on the facial images; in the second phase, K is 2, i.e., the children's facial images are classified as normal or autism.
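Tables 4 and 5 list the per-layer modifications; as a quick plausibility check, note that torchvision's stock MobileNets already accept 44 × 44 inputs because they end in adaptive average pooling (a sketch, not the modified architecture itself):

```python
# Sanity check: a 44 x 44 input flows through an unmodified torchvision
# MobileNetV2 because the final pooling is adaptive; the paper's Tables 4-5
# instead adjust each layer's input/output explicitly.
import torch
from torchvision import models

model = models.mobilenet_v2(weights=None, num_classes=2)
dummy = torch.randn(1, 3, 44, 44)     # one dummy 44 x 44 RGB image
print(model(dummy).shape)             # torch.Size([1, 2])
```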
Table 4. Modified MobileNetV2 ('n' is the number of repetitions of the bottleneck, 's' is the stride, and 't' is the expansion factor).
Table 5. Modified MobileNetV3-Large [for the meanings of SE, NL, HS, and RE, refer to 'Table 4. Specification for MobileNetV3-Large' in Howard et al. (2019)].
4. Results and discussion
Two experiments were designed. The first was the one-phase versus two-phase transfer learning experiment, whose purpose was to test whether the two-phase transfer learning outperforms the one-phase transfer learning. The second was the multi-classifier integration experiment, which aimed to test whether the integrated classifier outperforms any single classifier in the integration. The computer configuration used in this research is as follows:
1. CPU: AMD Ryzen 7 1700 Eight-Core Processor 3.00 GHz;
2. GPU: NVIDIA GeForce GTX 1050 Ti;
3. RAM: 32.0 GB.
4.1. The experimental results and analysis of the one-phase transfer learning and the two-phase transfer learning
For the one-phase transfer learning, MobileNetV2 and MobileNetV3-Large were trained directly on the children's facial image dataset. For the two-phase transfer learning, the two models were first trained on the CK+ facial expression dataset, and then the structure and parameters of the pre-trained models were transferred to the children's facial dataset for retraining to fine-tune the parameters. The FC layers of the two models were modified in both transfer learning modes. Each model was trained 10 times with the one-phase transfer learning and 10 times with the two-phase transfer learning.
The performance metrics and their average values over the 10 training runs of the one-phase transfer learning of MobileNetV2 are shown in Table 7, and those of the two-phase transfer learning in Table 8. Table 9 compares the average metrics of the two transfer learning modes. The average accuracy in Tables 7, 8 exceeds 0.85, and the average AUC exceeds 0.92, which shows that both transfer learning modes of MobileNetV2 perform excellently. The data in Table 9 show that, compared with the one-phase transfer learning, the accuracy of the two-phase transfer learning of MobileNetV2 is increased by 2.68%, the AUC by 1.44%, the sensitivity by 5.07%, the specificity by 0.05%, the G_Mean by 2.67%, and the F_Measure by 2.96%, while the error rate is reduced by 2.68%. Therefore, the two-phase transfer learning of MobileNetV2 is better than the one-phase transfer learning.
Table 6. The comparison between the one-phase transfer learning and the two-phase transfer learning of MobileNetV3-Large.
Table 7. The performance metrics of MobileNetV2 in the one-phase transfer learning.
Table 8. The performance metrics of MobileNetV2 in the two-phase transfer learning.
Table 9. The comparison between the one-phase transfer learning and the two-phase transfer learning of MobileNetV2.
In this research, 0 represents autism and 1 represents normal. Sensitivity indicates the accuracy on positives (1), while specificity indicates the accuracy on negatives (0). Comparing the sensitivity and specificity in Tables 7, 8 shows that MobileNetV2 presents greater specificity than sensitivity in both the one-phase and two-phase transfer learning; in other words, the model's recognition rate for autistic children is greater than that for non-autistic children. In Table 9, the difference between sensitivity and specificity is 12.04% for the one-phase transfer learning and 7.02% for the two-phase transfer learning, so the two-phase transfer learning reduces the difference between the two metrics. After the two-phase transfer learning, the sensitivity is significantly improved (5.07%) while the specificity is only slightly improved (0.05%); that is, the recognition rate for non-autistic children improved significantly, while that for autistic children improved little.
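For reference, the reported metrics follow directly from the confusion matrix under this labeling convention; a scikit-learn sketch (assuming F_Measure denotes the F1 score of the positive class):

```python
# Metrics used in Tables 7-9, with 0 = autism (negative) and 1 = normal (positive).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def metrics(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)              # recognition rate for normal (1)
    specificity = tn / (tn + fp)              # recognition rate for autism (0)
    accuracy = (tp + tn) / (tn + fp + fn + tp)
    g_mean = np.sqrt(sensitivity * specificity)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    auc = roc_auc_score(y_true, y_score)      # y_score: predicted probability of class 1
    return accuracy, auc, sensitivity, specificity, g_mean, f_measure
```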
For two-category classification problems, accuracy and AUC are the two most important metrics of classification performance. For the 10 training runs of MobileNetV2, the accuracy of the one-phase and two-phase transfer learning is shown in the figure on the left of Figure 3, and the ROC curves on the right. In the figure on the left, the accuracy line of the two-phase transfer learning lies above that of the one-phase transfer learning, indicating that the two-phase transfer learning is better. The figure on the right shows the ROC curve of each run, with red lines representing the two-phase transfer learning (10 lines in total) and green lines representing the one-phase transfer learning (also 10). The area enclosed by each red line is larger than that of each green line, which indicates that the two-phase transfer learning is more effective; however, the differences between the areas are not particularly large, which shows that the two-phase transfer learning improves MobileNetV2 only slightly. In addition, in the left figure, the accuracy of the one-phase transfer learning varies widely and its line oscillates, while the accuracy of the two-phase transfer learning varies little and its line is relatively gentle. This indicates that the model is unstable in the one-phase transfer learning but tends to be stable in the two-phase transfer learning.
Figure 3. Comparison of accuracy and ROC for MobileNetV2 between the one-phase and two-phase transfer learning.
For the one-phase transfer learning of MobileNetV3-Large, the metrics of each training run and the average of the 10 runs are in Table 2, and those of the two-phase transfer learning are in Table 3. Table 6 compares the average metrics of the two transfer learning modes. The average accuracy in Table 2 is 0.5532 and the average AUC is 0.5030; both values are too small for the model to be a useful classifier, so the effect of the one-phase transfer learning on this model is poor. The average accuracy in Table 3 is greater than 0.85 and the average AUC is greater than 0.92, which indicates that the model is an excellent classifier and the two-phase transfer learning is effective. In Table 6, compared with the one-phase transfer learning, the accuracy of the two-phase transfer learning of MobileNetV3-Large increases by 30.32%, the AUC by 42.87%, the sensitivity by 21.47%, the specificity by 39.17%, the G_Mean by 45.66%, and the F_Measure by 28.67%, while the error rate decreases by 30.32%. These comparisons confirm that the two-phase transfer learning of MobileNetV3-Large is not only better than the one-phase transfer learning but also greatly improves performance.
In Table 2, the sensitivity is greater than the specificity in 6 of the training runs, and the specificity is greater in the other 4; in Table 3, the sensitivity is greater than the specificity in every run. These data demonstrate that after the one-phase transfer learning the relative recognition rates of MobileNetV3-Large for non-autistic and autistic children are unstable, whereas after the two-phase transfer learning they are stable. On average, the sensitivity is greater than the specificity in both transfer learning modes, indicating that the model's recognition rate for non-autistic children is greater than that for autistic children. In Table 6, the sensitivity is 29.3% greater than the specificity for the one-phase transfer learning and 11.6% greater for the two-phase transfer learning; the two-phase transfer learning thus reduces the difference between the two metrics. Compared with the one-phase transfer learning, the improvement in specificity (39.17%) is significantly greater than that in sensitivity (21.47%), which indicates that the recognition rate for autistic children improved more than that for non-autistic children; this makes the recognition rates for the two categories more balanced, so the classification performance of the model improves.
For MobileNetV3-Large, the accuracy of the 10 training runs for the one-phase and two-phase transfer learning is shown in the figure on the left of Figure 4, and the ROC curves on the right. In the figure on the left, the accuracy line of the two-phase transfer learning lies above that of the one-phase transfer learning, with a large gap between them, which shows that the two-phase transfer learning significantly improves the accuracy of the model. In addition, the upper broken line is gentle while the lower one oscillates, indicating that the two-phase transfer learning also stabilizes the accuracy. The figure on the right displays the ROC curve of each run, with ten red lines representing the two-phase transfer learning and ten green lines representing the one-phase transfer learning. The area enclosed by each red line is much larger than that of each green line, which indicates that the two-phase transfer learning is more effective than the one-phase transfer learning.
Figure 4. Comparison of accuracy and ROC for MobileNetV3-Large between the one-phase and two-phase transfer learning.
4.2. The experimental results and analysis of the multi-classifier integration
MobileNetV2 and MobileNetV3-Large were trained many times, and the models with the highest accuracy and AUC were selected. The two selected models were then tested on the test set to obtain their performance metrics, and the performance metrics of the integrated classifier were calculated using the multi-classifier integration method proposed in this research. The performance metrics of MobileNetV2, MobileNetV3-Large and the integrated classifier are shown in Table 10. In this table, compared with MobileNetV2, the accuracy of the integrated classifier is improved by 2.17%, the AUC by 1.3%, the sensitivity by 6%, the G_Mean by 2.17%, and the F_Measure by 2.57%. Compared with MobileNetV3-Large, the accuracy of the integrated classifier increases by 2.83%, the AUC by 1.67%, the sensitivity by 0.66%, the specificity by 5%, the G_Mean by 2.9%, and the F_Measure by 2.53%, while the error rate decreases by 2.83%. These data show that the integrated classifier performs better than either single classifier, and also confirm that the proposed calculation method of the multi-classifier integration is effective. Moreover, the difference between sensitivity and specificity is 4% for MobileNetV2, 8% for MobileNetV3-Large, and 3.66% for the integrated classifier; the integrated classifier thus narrows the gap between sensitivity and specificity relative to the two participating classifiers.
Table 10. The comparison of MobileNetV2, MobileNetV3-Large and the integrated classifier.
To measure how long the integrated classifier takes to judge a picture, the 600 pictures in the test set were input to it, and the results are shown in Table 1. In this table, the numbers in the first row are the run indices and the numbers in the second row are the running times in seconds. The average running time over the 20 runs is taken as the time for the integrated classifier to judge 600 pictures; dividing this time by 600 gives the average time to judge one picture, about 0.5333 s.
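The timing procedure can be sketched as follows (the `integrated_classifier` call is a hypothetical stand-in for the ensemble described above):

```python
# Run the 600-image test set through the integrated classifier 20 times,
# average the total time, then divide by 600 to get the per-image time.
import time

run_times = []
for _ in range(20):
    start = time.perf_counter()
    for image, _ in test_set:            # the 600 test images
        integrated_classifier(image)     # hypothetical ensemble inference
    run_times.append(time.perf_counter() - start)

per_image = sum(run_times) / len(run_times) / 600
print(f"average time per picture: {per_image:.4f} s")   # about 0.5333 s reported
```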
With the hardware configuration used in this research, it takes only about half a second for the integrated classifier to judge a picture, which is relatively fast. If the integrated classifier is deployed on a mobile phone, the time to judge a picture will be determined by the phone's hardware configuration.
4.3. The comparative analysis of the two-phase transfer learning and the one-phase transfer learning
In the one-phase transfer learning of this research, the convolution-layer structure of MobileNetV2 and MobileNetV3-Large trained on ImageNet is transferred to the task of recognizing autistic children, and the models are trained on the children's facial image dataset to adjust their parameters and thereby improve performance. In the two-phase transfer learning, the convolution-layer structure of the two models trained on ImageNet is first transferred to the facial expression recognition task and trained on the facial expression dataset to adjust the parameters; then the convolution layers and parameters of the pre-trained models are transferred to the task of recognizing autistic children, and the models are trained on the children's facial image dataset to adjust the parameters, thus improving performance. The convolution layers of a convolutional neural network extract the features of an image, and these features are reflected in the parameters of the convolution layers. ImageNet contains pictures of mammals, birds, fish, reptiles, amphibians, vehicles, furniture, musical instruments, geological structures, tools, flowers, fruits, etc., and the classification task on this dataset divides the images into 1,000 categories. In the one-phase transfer learning and the first phase of the two-phase transfer learning, only the structure of the convolution layers is transferred, without the parameters, because the classification tasks of the source domain and the target domain differ greatly and the convolution-layer parameters trained on the source domain may not be applicable to the target domain. The facial features extracted in the facial expression recognition task are similar to the children's facial features extracted in the task of recognizing children with autism, so the features learned from facial expression recognition can be transferred to that task. In the second phase of the two-phase transfer, the structure and parameters of the convolution layers of the models trained on the facial expression dataset in the intermediate domain are transferred to the target domain, because the source-domain task and the target-domain task of identifying children with autism are similar, so the parameters trained on the source domain can be used in the target domain. In short, the one-phase transfer learning only transfers the structure of a model, while the two-phase transfer learning transfers both the structure and the parameters, so more knowledge is transferred. In addition, in the second phase of the two-phase transfer learning, the samples in the source domain are facial images and those in the target domain are children's facial images; training the models on two similar datasets effectively increases the sample size and thus improves model performance. Compared with the one-phase transfer learning, the two-phase transfer learning transfers more knowledge and increases the effective sample size, so its effect is better.
4.4. The comparative analysis of the proposed method and other methods
The proposed method has advantages over existing methods in three respects. First, the image size used in this research is 44 × 44, while existing research uses images of 224 × 224 or 299 × 299; the smaller the image, the less memory it occupies and the faster the model processes it. Second, existing methods use the one-phase transfer learning, which transfers a model structure that performs well on ImageNet to the task of recognizing autistic children, whereas the proposed method uses the two-phase transfer learning, which is more advantageous (see Sections 3.3 and 4.3 for the reasons). Third, existing methods use a single classifier, while the proposed method also uses the multi-classifier integration method, a form of ensemble learning that integrates two classifiers, and the integrated classifier performs better than a single classifier. The advantages of the multi-classifier integration method are explained as follows.
Because the recognition rate of MobileNetV2 for autistic children is greater than that for non-autistic children, while the recognition rate of MobileNetV3-Large for non-autistic children is greater than that for autistic children, integrating the two models combines the advantages of both and improves overall classification performance. In this research, the average probability for autistic children given by MobileNetV2 and MobileNetV3-Large is calculated, and likewise the average probability for non-autistic children; the larger of the two average probabilities is taken as the prediction probability of the integrated classifier, and its category is the predicted label. This method uses two classifiers to judge the same sample at the same time, so it obtains better performance than a single classifier.
4.5. Existing shortcomings
The proposed method has only been evaluated on a specific partition of the dataset, so further experiments are needed to determine its performance under different dataset partitions. In addition, the method was tested only on the one public dataset from the Kaggle platform, because other comparable datasets are difficult to obtain; further experiments are therefore needed to determine its effectiveness on other datasets.
In addition, children can only be classified into two categories, normal and autism, in this research, not into more detailed categories such as mild, moderate and severe autism. This is because the children's facial image dataset has only two labels: normal and autism. If more than two categories are needed, the images in the dataset would have to be relabeled with finer categories, which is very difficult: autism experts cannot give an exact answer from a child's facial image alone, so relabeling the dataset is very hard. Moreover, most images in the dataset are facial images of European and American children. Whether the trained model is applicable to autistic children of other ethnicities, such as Asian autistic children, remains to be studied, because the facial features of different ethnic groups differ significantly. Only after numerous facial images of children with different degrees of autism are obtained can autistic children be classified into multiple categories.
4.6. Future work
In the future, we plan to develop a mobile app for identifying children with autism. Users can capture a set of facial images of a child and input them into the app; the app determines the probability of autism for each image and then provides the average probability over all images. In addition, we plan to perform statistical tests and sensitivity analyses to assess the robustness and stability of our algorithms. We also plan to collaborate with kindergartens, rehabilitation centers for children with autism, and child psychology clinics in hospitals. A kindergarten teacher who observes a child with abnormal behavior can use the app to check whether the child may have autism; if the result suggests autism, the child's parents can be reminded to take the child to a hospital child psychology clinic for professional examination. By collaborating with rehabilitation centers and child psychology clinics, we will obtain many facial images of children with autism and retrain the models of the proposed method to improve its performance.
5. Conclusion
At present, human experts mainly use scales to detect children's autism, sometimes assisted by medical equipment such as CT; the detection process is relatively complex and time-consuming. With the proposed method, detecting autism through children's facial images is quick and easy to operate. It can rapidly screen out suspected autistic children, help experts rule out non-autistic children, and reduce the omission of autistic children.
In this research, autistic children are detected by classifying children into two categories: autism and normal. MobileNetV2 and MobileNetV3-Large were taken as the transfer learning models because they are suitable for mobile phones, and the one-phase and two-phase transfer learning experiments were carried out on them. The experiments confirm that the performance of MobileNetV2 is much better than that of MobileNetV3-Large after the one-phase transfer learning, and that the performance of both models improves after the two-phase transfer learning. The performance of MobileNetV3-Large improves greatly, while that of MobileNetV2 improves only slightly, but MobileNetV2 still performs better than MobileNetV3-Large. Then, the multi-classifier integration mode is applied to the two models after the two-phase transfer learning, and a calculation method of the multi-classifier integration is proposed to obtain the final classification results. The experimental results indicate that the integrated classifier performs better than either of the two models.
The two-phase transfer learning improves the accuracy and the AUC over the one-phase transfer learning, and the integrated classifier improves the two metrics further. The differences between this research and existing studies lie in the transfer learning methods and the input image size. The one-phase transfer learning has been used in existing studies, while the two-phase transfer learning and the multi-classifier integration methods are used in this research. The input image size of the two models in this research is 44 × 44, while that of existing research is 224 × 224 or 299 × 299. With a much smaller input image, the training time of the model is shorter and the speed is faster, while the accuracy reaches 0.905 and the AUC reaches 0.9632, which is 3.51% greater than the AUC (0.9281) of the existing research.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
Author contributions
YL: conceptualization, writing—original draft, and software. W-CH: data curation, writing—review and editing. P-HS: supervision, validation, writing—review and editing. All authors contributed to the article and approved the submitted version.
Funding
This research has been partially supported by the National Natural Science Foundation of China (Grant Nos. 62062051 and 61866006).
Acknowledgments
We thank Jing Qin and Ke Huang for their help in obtaining the image dataset of autistic children.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Akter, T., Ali, M. H., Khan, M. I., Satu, M. S., Uddin, M. J., Alyami, S. A., et al. (2021). Improved transfer-learning-based facial recognition framework to detect autistic children at an early stage. Brain Sci. 11:734. doi: 10.3390/brainsci11060734
Alcañiz Raya, M., Marín-Morales, J., Minissi, M. E., Teruel Garcia, G., Abad, L., and Chicchi Giglioli, I. A. (2020). Machine learning and virtual reality on body movements' behaviors to classify children with autism spectrum disorder. J. Clin. Med. 9:1260. doi: 10.3390/jcm9051260
Aldridge, K., George, I. D., Cole, K. K., Austin, J. R., Takahashi, T. N., Duan, Y., et al. (2011). Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Mol. Autism. 2:15. doi: 10.1186/2040-2392-2-15
Ali, M. T., ElNakieb, Y., Elnakib, A., Shalaby, A., Mahmoud, A., Ghazal, M., et al. (2022). The role of structure MRI in diagnosing autism. Diagnostics 12:165. doi: 10.3390/diagnostics12010165
Alsaade, F. W., and Alzahrani, M. S. (2022). Classification and detection of autism spectrum disorder based on deep learning algorithms. Comput. Intell. Neurosci. 2022:8709145. doi: 10.1155/2022/8709145
Alves, C. L., Toutain, T. G. L. O., de Carvalho Aguiar, P., Pineda, A. M., Roster, K., Thielemann, C., et al. (2023). Diagnosis of autism spectrum disorder based on functional brain networks and machine learning. Sci. Rep. 13:8072. doi: 10.1038/s41598-023-34650-6
Bölte, S., Bartl-Pokorny, K. D., Jonsson, U., Berggren, S., Zhang, D., Kostrzewa, E., et al. (2016). How can clinicians detect and treat autism early? Methodological trends of technology use in research. Acta Paediatr. 105, 137–144. doi: 10.1111/apa.13243
Carpenter, K. L. H., Hahemi, J., Campbell, K., Lippmann, S. J., Baker, J. P., Egger, H. L., et al. (2021). Digital behavioral phenotyping detects atypical pattern of facial expression in toddlers with autism. Autism Res. 14, 488–499. doi: 10.1002/aur.2391
Chu, J. H., Wu, Z. R., Lv, W., and Li, Z. (2018). Breast cancer diagnosis system based on transfer learning and deep convolutional neural networks. Laser Optoelectron. Progr. 55, 201–207. doi: 10.3788/LOP55.081001
Cilia, F., Carette, R., Elbattah, M., Dequen, G., Guérin, J. L., Bosche, J., et al. (2021). Computer-aided screening of autism spectrum disorder: eye-tracking study using data visualization and deep learning. JMIR Hum. Factors 8:e27706. doi: 10.2196/27706
Conti, E., Retico, A., Palumbo, L., Spera, G., Bosco, P., Biagi, L., et al. (2020). Autism spectrum disorder and childhood apraxia of speech: early language-related hallmarks across structural MRI study. J. Pers. Med. 10:275. doi: 10.3390/jpm10040275
Doi, H., Tsumura, N., Kanai, C., Masui, K., Mitsuhashi, R., and Nagasawa, T. (2021). Automatic classification of adult males with and without autism spectrum disorder by non-contact measurement of autonomic nervous system activation. Front. Psych. 12:625978. doi: 10.3389/fpsyt.2021.625978
Drimalla, H., Scheffer, T., Landwehr, N., Baskow, I., Roepke, S., Behnia, B., et al. (2020). Towards the automatic detection of social biomarkers in autism spectrum disorder: introducing the simulated interaction task (SIT). NPJ Digit. Med. 3:25. doi: 10.1038/s41746-020-0227-5
ElNakieb, Y., Ali, M. T., Elnakib, A., Shalaby, A., Mahmoud, A., Soliman, A., et al. (2023). Understanding the role of connectivity dynamics of resting-state functional MRI in the diagnosis of autism spectrum disorder: a comprehensive study. Bioengineering 10:56. doi: 10.3390/bioengineering10010056
ElNakieb, Y., Ali, M. T., Elnakib, A., Shalaby, A., Soliman, A., Mahmoud, A., et al. (2021). The role of diffusion tensor MR imaging (DTI) of the brain in diagnosing autism spectrum disorder: promising results. Sensors 21:8171. doi: 10.3390/s21248171
Eslami, T., Mirjalili, V., Fong, A., Laird, A. R., and Saeed, F. (2019). ASD-DiagNet: a hybrid learning approach for detection of autism spectrum disorder using fMRI data. Front. Neuroinform. 13:70. doi: 10.3389/fninf.2019.00070
Garbulowski, M., Smolinska, K., Diamanti, K., Pan, G., Maqbool, K., Feuk, L., et al. (2021). Interpretable machine learning reveals dissimilarities between subtypes of autism spectrum disorder. Front. Genet. 2021:618277. doi: 10.3389/fgene.2021.618277
Guha, T., Yang, Z., Grossman, R. B., and Narayanan, S. S. (2018). A computational study of expressive facial dynamics in children with autism. IEEE Trans. Affect. Comput. 9, 14–20. doi: 10.1109/TAFFC.2016.2578316
Gui, A., Bussu, G., Tye, C., Elsabbagh, M., Pasco, G., Charman, T., et al. (2021). Attentive brain states in infants with and without later autism. Transl. Psychiatry 11:196. doi: 10.1038/s41398-021-01315-9
Gunning, M., and Pavlidis, P. (2021). "Guilt by association" is not competitive with genetic association for identifying autism risk genes. Sci. Rep. 11:15950. doi: 10.1038/s41598-021-95321-y
Hammond, P., Forster-Gibson, C., Chudley, A. E., Allanson, J. E., Hutton, T. J., Farrell, S. A., et al. (2008). Face–brain asymmetry in autism spectrum disorders. Mol. Psychiatry 13, 614–623. doi: 10.1038/mp.2008.18
Hao, R., Namdar, K., Liu, L., and Khalvati, F. (2021). A transfer learning-based active learning framework for brain tumor classification. Front. Artif. Intell. 4:635766. doi: 10.3389/frai.2021.635766
Hosseini, M. P., Beary, M., Hadsell, A., Messersmith, R., and Soltanian-Zadeh, H. (2022). Deep learning for autism diagnosis and facial analysis in children. Front. Comput. Neurosci. 15:789998. doi: 10.3389/fncom.2021.789998
Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M. X., et al. (2019). "Searching for MobileNetV3," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (Seoul, South Korea), 1314–1324.
Jacob, S., Wolff, J. J., Steinbach, M. S., Doyle, C. B., Kumar, V., and Elison, J. T. (2019). Neurodevelopmental heterogeneity and computational approaches for understanding autism. Transl. Psychiatry 9:63. doi: 10.1038/s41398-019-0390-0
Jiao, Z., Li, H., and Fan, Y. (2020). Improving diagnosis of autism spectrum disorder and disentangling its heterogeneous functional connectivity patterns using capsule networks. Proc. IEEE Int. Symp. Biomed. Imaging 2020, 1331–1334. doi: 10.1109/isbi45749.2020.9098524
Kanhirakadavath, M. R., and Chandran, M. (2022). Investigation of eye-tracking scan path as a biomarker for autism screening using machine learning algorithms. Diagnostics 12:518. doi: 10.3390/diagnostics12020518
Lai, M., Lee, J., Chiu, S., Charm, J., So, W. Y., Yuen, F. P., et al. (2020). A machine learning approach for retinal images analysis as an objective screening method for children with autism spectrum disorder. EClinicalMedicine 28:100588. doi: 10.1016/j.eclinm.2020.100588
Leming, M. J., Baron-Cohen, S., and Suckling, J. (2021). Single-participant structural similarity matrices lead to greater accuracy in classification of participants than function in autism in MRI. Mol. Autism 12:34. doi: 10.1186/s13229-021-00439-5
Li, Y., and Song, P. H. (2022). Review of transfer learning in medical image classification. J. Image Graphics 27, 0672–0686. doi: 10.11834/jig.210814
Li, Y. Y., Wang, Y. M., Zhou, Q., Li, Y. X., Wang, Z., Wang, J., et al. (2021). Deep learning model of colposcopy image based on cervical epithelial and vascular features. Fudan Univ. J. Med. Sci. 48, 435–442. doi: 10.3969/j.issn.1672-8467.2021.04.001
Lin, P. I., Moni, M. A., Gau, S. S., and Eapen, V. (2021). Identifying subgroups of patients with autism by gene expression profiles using machine learning algorithms. Front. Psychiatry 12:637022. doi: 10.3389/fpsyt.2021.637022
Liu, W., Li, M., Zou, X., and Raj, B. (2021). Discriminative dictionary learning for autism spectrum disorder identification. Front. Comput. Neurosci. 15:662401. doi: 10.3389/fncom.2021.662401
Lord, C., Brugha, T. S., Charman, T., Cusack, J., Dumas, G., Frazier, T., et al. (2020). Autism spectrum disorder. Nat. Rev. Dis. Primers 6:5. doi: 10.1038/s41572-019-0138-4
Lu, A., and Perkowski, M. (2021). Deep learning approach for screening autism spectrum disorder in children with facial images and analysis of ethnoracial factors in model development and application. Brain Sci. 11:1446. doi: 10.3390/brainsci11111446
Mahbod, A., Schaefer, G., Wang, C., Dorffner, G., Ecker, R., and Ellinger, I. (2020). Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput. Methods Prog. Biomed. 193:105475. doi: 10.1016/j.cmpb.2020.105475
Martinez, C., and Chen, Z. S. (2023). Identification of atypical sleep microarchitecture biomarkers in children with autism spectrum disorder. Front. Psychiatry 14:1115374. doi: 10.3389/fpsyt.2023.1115374
Moon, S. J., Hwang, J., Kana, R., Torous, J., and Kim, J. W. (2019). Accuracy of machine learning algorithms for the diagnosis of autism spectrum disorder: systematic review and meta-analysis of brain magnetic resonance imaging studies. JMIR Ment. Health 6:e14108. doi: 10.2196/14108
Mujeeb Rahman, K. K., and Subashini, M. M. (2022). Identification of autism in children using static facial features and deep neural networks. Brain Sci. 12:94. doi: 10.3390/brainsci12010094
Nunes, S., Mamashli, F., Kozhemiako, N., Khan, S., McGuiggan, N. M., Losh, A., et al. (2021). Classification of evoked responses to inverted faces reveals both spatial and temporal cortical response abnormalities in autism spectrum disorder. Neuroimage Clin. 29:102501. doi: 10.1016/j.nicl.2020.102501
Øien, R. A., Schjølberg, S., Volkmar, F. R., Shic, F., Cicchetti, D. V., Nordahl-Hansen, A., et al. (2018). Clinical features of children with autism who passed 18-month screening. Pediatrics 141:e20173596. doi: 10.1542/peds.2017-3596
Payabvash, S., Palacios, E. M., Owen, J. P., Wang, M. B., Tavassoli, T., Gerdes, M., et al. (2019). White matter connectome edge density in children with autism spectrum disorders: potential imaging biomarkers using machine-learning models. Brain Connect. 9, 209–220. doi: 10.1089/brain.2018.0658
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. C. (2018). "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Salt Lake City, UT), 4510–4520.
Shi, C., Xin, X., and Zhang, J. (2021). Domain adaptation using a three-way decision improves the identification of autism patients from multisite fMRI data. Brain Sci. 11:603. doi: 10.3390/brainsci11050603
Siddiqui, S., Gunaseelan, L., Shaikh, R., Khan, A., Mankad, D., and Hamid, M. A. (2021). Food for thought: machine learning in autism spectrum disorder screening of infants. Cureus 13:e18721. doi: 10.7759/cureus.18721
Takahashi, Y., Murata, S., Idei, H., Tomita, H., and Yamashita, Y. (2021). Neural network modeling of altered facial expression recognition in autism spectrum disorders based on predictive processing framework. Sci. Rep. 11:14684. doi: 10.1038/s41598-021-94067-x
Tariq, Q., Daniels, J., Schwartz, J. N., Washington, P., Kalantarian, H., and Wall, D. P. (2018). Mobile detection of autism through machine learning on home video: a development and prospective validation study. PLoS Med. 15:e1002705. doi: 10.1371/journal.pmed.1002705
Vabalas, A., Gowen, E., Poliakoff, E., and Casson, A. J. (2020). Applying machine learning to kinematic and eye movement features of a movement imitation task to predict autism diagnosis. Sci. Rep. 10:8346. doi: 10.1038/s41598-020-65384-4
Yahata, N., Morimoto, J., Hashimoto, R., Lisi, G., Shibata, K., Kawakubo, Y., et al. (2016). A small number of abnormal brain connections predicts adult autism spectrum disorder. Nat. Commun. 7:11254. doi: 10.1038/ncomms11254
Zhang, R., Guo, Z., Sun, Y., Lu, Q., Xu, Z., Yao, Z., et al. (2020). COVID19XrayNet: a two-step transfer learning model for the COVID-19 detecting problem based on a limited number of chest X-ray images. Interdiscip. Sci. 12, 555–565. doi: 10.1007/s12539-020-00393-5