Volume - 13 | Issue-1
Multimodal sentiment analysis has attracted significant attention in recent years owing to the growing use of multimedia data across applications. In this paper, we propose an efficient transfer learning model with autoencoders for multimodal sentiment analysis via deep sentiment networks. Our approach processes the text, audio, image, and video modalities separately and fuses their outputs through boosting to improve the overall performance of the model. We first pre-train an autoencoder on each modality to extract the relevant features, and then transfer the learned representations to the sentiment network.
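The per-modality pre-training and transfer step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a single-hidden-layer linear autoencoder is pre-trained on one modality's features with plain gradient descent, and its frozen encoder then produces the representation that would be passed on to the sentiment network. All dimensions, the toy data, and the function name `train_autoencoder` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden_dim, lr=0.1, epochs=200):
    """Pre-train a single-hidden-layer linear autoencoder on one modality.

    Returns the encoder weights and the per-epoch reconstruction losses.
    (Illustrative sketch; the paper's autoencoders may be deeper/nonlinear.)
    """
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, hidden_dim))
    W_dec = rng.normal(scale=0.1, size=(hidden_dim, d))
    losses = []
    for _ in range(epochs):
        H = X @ W_enc          # encode: project features to the latent space
        X_hat = H @ W_dec      # decode: reconstruct the input
        err = X_hat - X
        losses.append(float(np.mean(err ** 2)))
        # gradients of the mean squared reconstruction error
        g_dec = H.T @ err * (2.0 / (n * d))
        g_enc = X.T @ (err @ W_dec.T) * (2.0 / (n * d))
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    return W_enc, losses

# Toy stand-in for one modality (e.g. audio): 64 samples, 16-dim features.
X = rng.normal(size=(64, 16))
W_enc, losses = train_autoencoder(X, hidden_dim=4)

# Transferred representation: the frozen encoder's output, which would be
# fed to the downstream sentiment network.
Z = X @ W_enc
```

In the full approach, one such encoder would be pre-trained per modality (text, audio, image, video), and the resulting per-modality sentiment predictions would then be combined via boosting; the fusion step is not shown here.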