Structure-Preserved Multi-Source Domain Adaptation (notebook)
Abstract
We aim to preserve the whole structure of the source domains and transfer it to serve the task on the target domain. The source and target data are put together for clustering, which simultaneously explores the structures of the source and target domains. The structure-preserved information from the source domains further guides the clustering process on the target domain.
Introduction
There are two strategies for adaptation:
- feature space adaptation
- classifier adaptation
Focus on: multi-source unsupervised domain adaptation
We are the first to formulate multi-source unsupervised domain adaptation as a semi-supervised clustering framework [15].
Experiments show large improvements over several state-of-the-art methods on two widely used databases (see Experimental Results).
Related work
Here we give a brief introduction to unsupervised domain adaptation and multi-source domain adaptation, respectively, and highlight the differences between these works and ours.
- discovering latent domains
The proposed method
- Problem setup and notation:
- alignment projections P1 and P2;
- Here we suppose the alignment projections P1 and P2 are given, and we start from Zs1, Zs2, Zt1 and Zt2. (Open question: why, and how are P1 and P2 obtained? A sketch of this step follows below.)
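A minimal sketch of what "starting from Zs1, Zs2, Zt1 and Zt2" might look like in code, assuming P1 and P2 are already given and that each Z is simply the corresponding data mapped through its projection; the function and variable names below are hypothetical, not the paper's:

```python
import numpy as np

def align_features(Xs1, Xs2, Xt, P1, P2):
    """Project source and target data into the two aligned spaces.

    Xs1: (n_s1, d) samples of source domain 1
    Xs2: (n_s2, d) samples of source domain 2
    Xt:  (n_t,  d) samples of the target domain
    P1, P2: (d, k) alignment projections, assumed to be given
    """
    Zs1 = Xs1 @ P1   # source 1 in the space shared with the target
    Zs2 = Xs2 @ P2   # source 2 in the space shared with the target
    Zt1 = Xt @ P1    # target viewed through projection P1
    Zt2 = Xt @ P2    # target viewed through projection P2
    return Zs1, Zs2, Zt1, Zt2
```

Usage would simply be `Zs1, Zs2, Zt1, Zt2 = align_features(Xs1, Xs2, Xt, P1, P2)`, after which the clustering objective operates on these aligned features only.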
A. Problem Definition
- How to incorporate the structure of different domains to predict the labels of the target data?
B. Objective Function
They formulate the problem as a clustering problem. Inspired by ensemble clustering [16], [17], the source and target data are put together for clustering, which explores the structure of the target domain while keeping the structures of the source domains consistent with their label information as much as possible.
- The objective function of the model is given in Eq. (1).
The objective function consists of two parts (a hedged sketch of one plausible form is given below):
- one is standard K-means with squared Euclidean distance on the combined source and target data;
- the other is a term measuring the disagreement between the indicator matrices Hs1, Hs2 and the label information of the source domains.
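These notes do not reproduce Eq. (1) itself. As an assumption-laden sketch consistent with the two parts described above, the objective could take a form like the following, where Z vertically stacks the aligned source and target data, H stacks the indicator matrices Hs1, Hs2, Ht, M holds the cluster centers, Y_{s1}, Y_{s2} encode the source labels, and λ is a trade-off weight (this notation is mine, not necessarily the paper's):

```latex
% K-means on the combined data + disagreement with the source labels
\min_{M,\,H_{s1},\,H_{s2},\,H_t}
  \; \bigl\lVert Z - H M \bigr\rVert_F^2
  \;+\; \lambda \Bigl( \lVert H_{s1} - Y_{s1} \rVert_F^2
                     + \lVert H_{s2} - Y_{s2} \rVert_F^2 \Bigr)
```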
Based on previous work [32], they obtain a new insight into the objective function in Eq. (1), which can be rewritten in the following formulation.
Solutions
Since the problem in Eq. (4) is not jointly convex in all the variables, each unknown variable is updated iteratively while fixing the others, by taking derivatives (a sketch of the alternating update loop follows the list below).
- A. Fixing others, update G1, G2.
  i. omitted
- B. Fixing others, update M1, M2.
  i. omitted
- C. Fixing others, update Hs1, Hs2.
  i. omitted
  Difference: we use an exhaustive search for the optimal assignment to find the solutions.
- D. Fixing others, update Ht.
  i. omitted
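A minimal sketch of such an alternating optimization, assuming a K-means-style objective in which the source indicator rows are softly tied to their known labels and the target indicator rows are assigned by searching over the possible clusters; all function and variable names are hypothetical, not the paper's code:

```python
import numpy as np

def alternating_updates(Z_src, y_src, Z_tgt, n_clusters, lam=1.0, n_iters=20, seed=0):
    """Hedged sketch of block-coordinate updates for a joint-clustering objective.

    Z_src: (n_s, d) aligned source features; y_src: (n_s,) labels in [0, n_clusters)
    Z_tgt: (n_t, d) aligned target features
    Alternates between (1) updating cluster centers with the indicators fixed and
    (2) updating the indicator matrices with the centers fixed.
    """
    rng = np.random.default_rng(seed)
    Z = np.vstack([Z_src, Z_tgt])
    n_s = Z_src.shape[0]

    Y_s = np.eye(n_clusters)[y_src]                 # one-hot source labels

    # initialize indicators: source rows from labels, target rows at random
    H = np.zeros((Z.shape[0], n_clusters))
    H[:n_s] = Y_s
    H[n_s:] = np.eye(n_clusters)[rng.integers(0, n_clusters, Z_tgt.shape[0])]

    for _ in range(n_iters):
        # --- update centers, fixing the indicators (mean of assigned points) ---
        counts = H.sum(axis=0, keepdims=True).T + 1e-12
        M = (H.T @ Z) / counts

        # --- update indicators, fixing the centers ---
        d2 = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)  # squared distances
        # target rows: search over all clusters (plain argmin here)
        H[n_s:] = np.eye(n_clusters)[np.argmin(d2[n_s:], axis=1)]
        # source rows: trade off distance against disagreement with the labels
        H[:n_s] = np.eye(n_clusters)[np.argmin(d2[:n_s] + lam * (1.0 - Y_s), axis=1)]

    return np.argmax(H[n_s:], axis=1)               # predicted target labels
```

The `lam * (1.0 - Y_s)` penalty is one simple way to reflect the disagreement term: it makes a source point reluctant, but not forbidden, to leave the cluster of its known label.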
Experimental Results
The performance of the proposed method is evaluated on object recognition and face identification, compared with several state-of-the-art methods.
- A. Experiment setting
  i. Databases
  ii. Competitive methods and implementation details
- B. Object recognition
  i. Results of single source
  ii. Results of multiple sources
  iii. Parameter analysis
- C. Face identification
  i. Domain adaptation results
Conclusions
In this paper, we proposed a novel algorithm for multi-source unsupervised domain adaptation. Different from existing studies, which learn a classifier in the common space with the source data and predict the labels of the target data, we preserved the whole structure of the source domains for the task on the target domain. To the best of our knowledge, we were the first to formulate the problem as a semi-supervised clustering problem with missing values. Extensive experiments on two widely used databases demonstrated the large improvements of our proposed method over several state-of-the-art methods.