Rank correlation in Python

Question:

My data is a set of n observed pairs along with their frequencies. That is, each pair (x_i, y_i) comes with a number k_i, the number of times (x_i, y_i) was observed. Ideally, I would like to calculate both Kendall's tau and Spearman's rho for the set of all copies of these pairs, which consists of k_1 + k_2 + ... + k_n pairs. The problem is that k_1 + k_2 + ... + k_n, the total number of observations, is huge, so such a data structure will not fit in memory.

Naturally, I thought of assigning the frequency of the i-th pair, k_i/(k_1 + k_2 + ... + k_n), as its weight and calculating the rank correlation of the weighted set, but I could not find any tools for that. In the weighted variants of rank correlation I have encountered (e.g. scipy.stats.weightedtau), the weights represent the importance of ranks rather than of pairs, which is irrelevant to my cause. Pearson's r seems to have the weighting option I need, but it does not serve my purpose, since x and y are nowhere near linearly related. I wonder if I am missing some concept of a generalized correlation for weighted data points.

The only idea I have come up with so far is to scale k_1, k_2, ..., k_n down by some common factor c, so that the scaled number of copies of the i-th pair is [k_i/c] (here [] is the rounding operator, since the number of copies of each pair must be an integer). By choosing c such that [k_1/c] + [k_2/c] + ... + [k_n/c] pairs fit into memory, we could then calculate tau and rho for the resulting set. However, k_i and k_j may differ by many orders of magnitude, so c can be significantly larger than some of the k_i, and rounding k_i/c may therefore cause a loss of information.
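For reference, a minimal sketch of this scaling idea; the frequencies and the memory budget max_pairs below are made up for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: n = 5 distinct pairs with large frequencies.
x = np.array([1, 0, 1, 0, 1])
y = np.array([.67, .25, .75, .2, .6])
k = np.array([2000, 4000, 1000, 3000, 2000])

max_pairs = 100  # hypothetical memory budget (number of pairs that fit)
c = max(1, int(np.ceil(k.sum() / max_pairs)))
# [k_i / c]: rounded scaled frequencies; small k_i may round down to 0,
# which is exactly the information loss described above.
scaled = np.rint(k / c).astype(int)

# Expand the scaled dataset and apply the stock scipy routines.
x_expanded = np.repeat(x, scaled)
y_expanded = np.repeat(y, scaled)
rho, rho_p = stats.spearmanr(x_expanded, y_expanded)
tau, tau_p = stats.kendalltau(x_expanded, y_expanded)
```

Note that the p-values returned here reflect the scaled-down sample size, not the true number of observations, so they are biased along with the correlations.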

UPD: One can calculate Spearman's rho, along with the p-values, on a dataset with specified frequencies (weights) as follows:

import numpy as np
from scipy import stats


def frequency_pearsonr(data, frequencies):
    """
    Calculates Pearson's r between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    df = frequencies.sum() - 2
    Sigma = np.cov(data.T, fweights=frequencies)
    sigma_diag = Sigma.diagonal()
    Sigma_diag_pairwise_products = np.multiply.outer(sigma_diag, sigma_diag)
    # Calculate matrix with pairwise correlations.
    R = Sigma / np.sqrt(Sigma_diag_pairwise_products)
    # Calculate matrix with pairwise t-statistics. The main diagonal
    # gets 1/0 = inf.
    with np.errstate(divide='ignore'):
        T = R / np.sqrt((1 - R * R) / df)
    # Calculate matrix with pairwise p-values.
    P = 2 * stats.t.sf(np.abs(T), df)

    return R, P


def frequency_rank(data, frequencies):
    """
    Ranks a 1-D data array, given the frequency of each value. Equal
    values get the same "averaged" ranks. The array with ranks is
    shaped to match the input data array.

    :param data: 1-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 1-D array with ranks
    """
    s = 0
    # Averaged ranks are fractional, so force a float dtype.
    ranks = np.empty_like(data, dtype=float)
    # Compute the rank for each unique value.
    for value in sorted(set(data)):
        index_grid = np.ix_(data == value)
        # Find the total frequency of the value.
        frequency = frequencies[index_grid].sum()
        ranks[index_grid] = s + 0.5 * (frequency + 1)
        s += frequency

    return ranks
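As a sanity check of the averaged-rank formula above (rank = s + (f + 1)/2, where s is the total frequency of all smaller values and f is the value's own frequency), a vectorized equivalent can be compared against scipy.stats.rankdata on the explicitly expanded array; this sketch assumes the values in data are distinct, as they are in the example below:

```python
import numpy as np
from scipy import stats

data = np.array([.67, .25, .75, .2, .6])
frequencies = np.array([2, 4, 1, 3, 2])

# Averaged ranks computed on the explicitly expanded array.
expanded = np.repeat(data, frequencies)
expanded_ranks = stats.rankdata(expanded, method='average')

# Vectorized version of the same formula: sort the values, take the
# cumulative frequency of all smaller values as s, and add (f + 1)/2.
order = np.argsort(data)
s = np.concatenate(([0], np.cumsum(frequencies[order])[:-1]))
ranks = np.empty(len(data))
ranks[order] = s + 0.5 * (frequencies[order] + 1)
```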


def frequency_spearmanrho(data, frequencies):
    """
    Calculates Spearman's rho between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    # Rank the columns.
    ranks = np.empty_like(data, dtype=float)
    for i, data_column in enumerate(data.T):
        ranks[:, i] = frequency_rank(data_column, frequencies)
    # Compute Pearson's r correlation and p-values on the ranks.
    return frequency_pearsonr(ranks, frequencies)


# Columns are variables and rows are observations, whose frequencies 
# are specified. 
data_col1 = np.array([1, 0, 1, 0, 1]) 
data_col2 = np.array([.67, .25, .75, .2, .6]) 
data_col3 = np.array([.1, .3, .8, .3, .2]) 
data = np.array([data_col1, data_col2, data_col3]).T 
frequencies = np.array([2, 4, 1, 3, 2]) 

# Same data, but with observations (rows) actually repeated instead of 
# their frequencies being specified. 
expanded_data_col1 = np.array([1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1]) 
expanded_data_col2 = np.array([.67, .67, .25, .25, .25, .25, .75, .2, .2, .2, .6, .6]) 
expanded_data_col3 = np.array([.1, .1, .3, .3, .3, .3, .8, .3, .3, .3, .2, .2]) 
expanded_data = np.array([expanded_data_col1, expanded_data_col2, expanded_data_col3]).T 

# Compute Spearman's rho for data in both formats, and compare. 
frequency_Rho, frequency_P = frequency_spearmanrho(data, frequencies) 
Rho, P = stats.spearmanr(expanded_data) 
print(frequency_Rho - Rho) 
print(frequency_P - P) 

The concrete example above shows that both approaches produce identical correlations and identical p-values:

[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00] 
[ 1.11022302e-16 0.00000000e+00 -5.55111512e-17] 
[ 0.00000000e+00 -5.55111512e-17 0.00000000e+00]] 
[[ 0.00000000e+00 -1.35525272e-19 4.16333634e-17] 
[ -9.21571847e-19 0.00000000e+00 -5.55111512e-17] 
[ 4.16333634e-17 -5.55111512e-17 0.00000000e+00]] 

To calculate a weighted Spearman rank correlation coefficient, you can simply pre-rank the x and y values, then push those into 'pearsonr' (along with your weights) to get the weighted Spearman's rho out. – Paul


Not sure about the statistical validity of the following approach, but purely technically you could simply wrap a (precomputed) dictionary mapping ranks to normalized frequencies into a function and pass it to 'weightedtau' as the 'weigher'. – Paul


Let me get your question straight: k_1 + k_2 + ... + k_n pairs of observations is too much to fit into RAM. Could you compute the rank correlation on a random sample, increase the sample size, and repeat the process until the change in the estimated rank correlation drops below some threshold level? –
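This sampling suggestion can be sketched without ever materializing the expanded array, by drawing observations with probabilities proportional to the frequencies (the sample sizes and the convergence threshold below are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

x = np.array([1, 0, 1, 0, 1])
y = np.array([.67, .25, .75, .2, .6])
k = np.array([2, 4, 1, 3, 2])
p = k / k.sum()  # sampling probabilities proportional to the frequencies

# Grow the sample until the rho estimate stabilizes.
prev_rho = None
for size in (1_000, 10_000, 100_000):
    idx = rng.choice(len(x), size=size, p=p)
    rho, _ = stats.spearmanr(x[idx], y[idx])
    if prev_rho is not None and abs(rho - prev_rho) < 1e-3:
        break
    prev_rho = rho
```

The p-value reported by spearmanr here reflects the sample size rather than the true number of observations, so it should not be taken at face value.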

Computing Kendall's tau with the approach suggested by Paul works. You do not have to pass the indices of the sorted array as ranks; the indices of the unsorted array work just as well (as demonstrated in the weighted tau example). The weights do not have to be normalized, either.

Regular (unweighted) Kendall's tau, on the "expanded" dataset:

stats.kendalltau([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1], 
       [.25, .25, .25, .25, .2, .2, .2, .667, .667, .75, .6, .6]) 
KendalltauResult(correlation=0.7977240352174656, pvalue=0.0034446936330652677) 

Weighted Kendall's tau, with the occurrence counts in the dataset as the weights:

stats.weightedtau([1, 0, 1, 0, 1], 
        [.667, .25, .75, .2, .6], 
        rank=False, 
        weigher=lambda r: [2, 4, 1, 3, 2][r], 
        additive=False) 
WeightedTauResult(correlation=0.7977240352174656, pvalue=nan) 

Now, due to a peculiarity of the weightedtau implementation, the p-value is never computed. We could approximate the p-value with the scale-down-the-occurrences trick offered initially, but I would highly appreciate other approaches. Making the algorithm's behavior depend on the amount of available memory looks painful to me.
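One memory-free alternative, sketched under the usual large-n, no-ties assumptions (which are violated here, since repeated observations are ties, so this is only a rough approximation): take tau from weightedtau as above, and plug the total number of observations n = k_1 + ... + k_n into the asymptotic null distribution of Kendall's tau, z = 3*tau*sqrt(n(n-1)) / sqrt(2(2n+5)):

```python
import math
import numpy as np
from scipy import stats

x = np.array([1, 0, 1, 0, 1])
y = np.array([.67, .25, .75, .2, .6])
k = np.array([2, 4, 1, 3, 2])

# Tau over the full (virtual) dataset, computed without expansion,
# using the occurrence counts as weights as in the example above.
tau = stats.weightedtau(x, y, rank=False,
                        weigher=lambda r: k[r], additive=False).correlation

# Asymptotic null distribution of Kendall's tau for n observations.
# Assumes no ties, so with heavy ties this p-value is only approximate.
n = k.sum()
z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
p = 2 * stats.norm.sf(abs(z))
```

Since n enters the formula only through two square roots, this scales to huge observation counts without any memory pressure; the quality of the approximation with many ties is the open question.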