TY - JOUR
AU - Zuo, Xin
AB - Recent decades have witnessed considerable progress in both multi-task learning and multi-view learning, but the setting that considers both learning scenarios simultaneously has received relatively little attention. How to exploit the latent representations of the multiple views of each task to improve that task's performance is a challenging problem. Motivated by this, we propose a novel semi-supervised algorithm, termed multi-task multi-view learning based on common and special features (MTMVCSF). In general, multiple views are different aspects of the same object, and every view carries information about this object that is either common to all views or special to that view. Accordingly, we mine a joint latent factor of the multiple views of each learning task; this joint latent factor consists of each view's special features together with the features common to all views. In this way, the original multi-task multi-view data are reduced to multi-task data, and exploiting the correlations among the tasks improves the performance of the learning algorithm. Another advantage of this approach is that the latent representations of unlabeled instances are obtained under the constraint of a regression task on the labeled instances. In classification and semi-supervised clustering tasks, using the latent representation as input performs much better than …
TI - Multi-view representation learning in multi-task scene
JF - Neural Computing and Applications
DO - 10.1007/s00521-019-04577-z
DA - 2019-10-29
UR - https://www.deepdyve.com/lp/springer-journals/multi-view-representation-learning-in-multi-task-scene-MS5yNyrt09
SP - 1
EP - 20
VL - OnlineFirst
IS -
DP - DeepDyve
ER -
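
Note: the abstract above describes decomposing the multiple views of each task into a common latent factor shared by all views plus a view-specific ("special") factor. Below is a minimal illustrative sketch of that general idea, not the authors' MTMVCSF implementation: it factorizes each view by gradient descent on a squared reconstruction error, and the function name fit_common_special, the latent dimensions, the regularization weight, and the learning rate are all assumptions made for this sketch.

# Illustrative sketch only (assumed names and hyperparameters, not the paper's code):
# each view X_v of one task is approximated as C @ U_v + S_v @ V_v, where C is the
# common latent factor shared across views and S_v is the view-specific factor.
import numpy as np


def fit_common_special(views, k_common=5, k_special=3, lam=1e-2,
                       lr=1e-3, n_iter=2000, seed=0):
    """views: list of (n, d_v) arrays for one task; returns C, [S_v], (U, V)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    C = rng.standard_normal((n, k_common)) * 0.1                      # common factor
    S = [rng.standard_normal((n, k_special)) * 0.1 for _ in views]    # special factors
    U = [rng.standard_normal((k_common, X.shape[1])) * 0.1 for X in views]
    V = [rng.standard_normal((k_special, X.shape[1])) * 0.1 for X in views]

    for _ in range(n_iter):
        grad_C = 2 * lam * C
        for v, X in enumerate(views):
            R = C @ U[v] + S[v] @ V[v] - X          # reconstruction residual of view v
            grad_C += 2 * R @ U[v].T                # common factor sees every view
            grad_S = 2 * R @ V[v].T + 2 * lam * S[v]
            grad_U = 2 * C.T @ R + 2 * lam * U[v]
            grad_V = 2 * S[v].T @ R + 2 * lam * V[v]
            S[v] -= lr * grad_S
            U[v] -= lr * grad_U
            V[v] -= lr * grad_V
        C -= lr * grad_C
    return C, S, (U, V)


if __name__ == "__main__":
    # two synthetic views of the same 100 instances, generated from a shared signal
    rng = np.random.default_rng(1)
    base = rng.standard_normal((100, 5))
    views = [base @ rng.standard_normal((5, d)) + 0.1 * rng.standard_normal((100, d))
             for d in (20, 30)]
    C, S, _ = fit_common_special(views)
    # concatenating [C, S_1, ..., S_V] gives a joint latent representation that can
    # replace the raw multi-view features as input to a downstream learner
    joint = np.hstack([C] + S)
    print(joint.shape)   # (100, 11)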