This proposal will use deep neural networks to extract the essential spatial features from calligraphy written by writers of different skill levels and from paintings by four world-renowned artists, in order to differentiate the painters' styles and the writers' skill levels. The four artists are selected because their styles include the key characteristics we wish to pinpoint: the brush stroke (a small-scale spatial feature) and the spotlight effect (a large-scale spatial feature). These two levels of spatial features may be represented by the learned kernels at different hidden layers of a convolutional neural network (CNN), which correspond to different spatial scales by virtue of how the CNN operates. The spatial features in the trained CNN model will then be compared with the eye-tracker results from simultaneous EEG/eye-tracker and fMRI/eye-tracker experiments, to test whether the learned spatial features are co-located with the gaze centers that human participants produce while differentiating the writers' skill levels and the artists' styles. The gaze centers in the eye-tracker data will then guide the analysis of both the EEG and fMRI data to find gaze-related brain activations. We hypothesize that visual-processing brain areas and circuits will be most strongly highlighted at this stage of the analysis. We may therefore be able to find a correspondence between the artificial neural network and the human neural network in processing visual features from the artworks, which facilitates the final differentiation task. Finally, the brain networks initiated from the gaze-related brain activations will be estimated using the phase-locking value and partial directed coherence for the EEG data, and Granger causality for the fMRI data.
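The claim that different hidden layers of a CNN capture features at different spatial scales can be illustrated with a receptive-field calculation. The sketch below uses a hypothetical VGG-like layer stack (the kernel sizes and strides are illustrative assumptions, not the proposal's actual architecture): early layers see only a few pixels, matching small-scale features such as brush strokes, while deeper layers cover a much larger region, matching large-scale features such as the spotlight effect.

```python
# Sketch: effective receptive-field growth across CNN layers.
# The layer specs (kernel size, stride) are hypothetical, not from the proposal.

def receptive_fields(layers):
    """Return the effective receptive field (in input pixels) after each layer.

    Standard recurrence: rf grows by (kernel - 1) * cumulative stride,
    and the cumulative stride (jump) multiplies by each layer's stride.
    """
    rf, jump = 1, 1
    fields = []
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
        fields.append(rf)
    return fields

# A VGG-like stack: conv3x3/1, conv3x3/1, pool2/2, conv3x3/1, conv3x3/1, pool2/2
layers = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
print(receptive_fields(layers))  # → [3, 5, 6, 10, 14, 16]
```

The receptive field grows monotonically with depth, so kernels in deeper layers are the natural candidates for representing large-scale features.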
As a result, we should be able to delineate the higher-level brain activity associated with the differentiation task. These higher-level brain activations may be the key components that set the human neural network apart from the artificial neural network in appreciating visual artworks.
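The phase-locking value (PLV) proposed for the EEG connectivity analysis can be sketched in a few lines. This is a minimal illustration with synthetic signals, assuming NumPy and SciPy are available; the sampling rate and frequencies are arbitrary choices for the demonstration, not experimental parameters.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length signals.

    Instantaneous phase is taken from the analytic signal (Hilbert transform);
    PLV is the magnitude of the mean phase-difference vector:
    0 = no phase locking, 1 = perfect phase locking.
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Two 10 Hz sinusoids with a constant phase lag are strongly phase-locked;
# a sinusoid and white noise are not.
t = np.arange(0, 2, 1 / 250.0)            # 2 s at a 250 Hz sampling rate
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.8)      # fixed lag -> PLV near 1
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)       # random phases -> low PLV
print(plv(a, b), plv(a, noise))
```

In practice the PLV would be computed per frequency band and across trials between pairs of EEG channels seeded at the gaze-related activations, but the core quantity is the one shown here.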
Effective start/end date: 1/08/20 → 31/07/21
Keywords
- Neuroimaging technology
- Functional MRI
- Eye tracker
- Deep learning