KNN Visualization, Dense SIFT Principles, and Gesture Recognition
KNN Visualization
I. Introduction to the KNN Algorithm
KNN, short for k-nearest neighbors, classifies by looking at the k closest neighbors (data points): every sample can be represented by its k nearest neighbors. The core idea of kNN is that, in a space containing an unknown sample, the sample's class can be determined from the classes of the k samples nearest to it. In scikit-learn, the classes related to nearest-neighbor methods live in the sklearn.neighbors package. The classifiers there include KNeighborsClassifier (k-nearest-neighbor classification), RadiusNeighborsClassifier (fixed-radius nearest-neighbor classification), and NearestCentroid (nearest-centroid classification), among others. For the first two, scikit-learn implements two different nearest-neighbor classifiers: KNeighborsClassifier learns based on the k nearest neighbors of each query point, where k is a user-specified integer; RadiusNeighborsClassifier learns based on a nearest-neighbor search within a fixed radius r of each training point, where r is a user-specified floating-point value. To understand the difference between the two classifiers, see the KD-tree and ball-tree structures used by KNN.
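As a quick illustration, here is a minimal sketch of both classifiers in scikit-learn (the toy data and the values of n_neighbors and radius are assumptions for demonstration, not from this post):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier

# Toy training data: two 2-D Gaussian clusters (all parameters are assumed).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 3])
y = np.hstack([np.zeros(50), np.ones(50)])

# KNeighborsClassifier: each query point votes among its k nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# RadiusNeighborsClassifier: each query point votes among all neighbors
# within a fixed radius r.
rnn = RadiusNeighborsClassifier(radius=1.5).fit(X, y)

query = np.array([[1.5, 1.5]])
print(knn.predict(query), rnn.predict(query))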
II. Visualizing KNN with Python
Here Python's plotting package Matplotlib is used to visualize the KNN algorithm; the NumPy and Matplotlib packages must be installed first. KNN, the nearest-neighbor classification algorithm, has fairly simple logic:
1. For a sample iData to be classified, first compute its distance to every sample in the labeled dataset;
2. Then, among the k labeled samples nearest to iData, take the class that appears most often as the class of iData (see the sketch after this list).
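A minimal NumPy sketch of these two steps (the function and variable names are mine, chosen to match the description above):

import numpy as np

def knn_classify(iData, dataSet, labels, k):
    # Step 1: Euclidean distance from iData to every labeled sample.
    dists = np.sqrt(((dataSet - iData) ** 2).sum(axis=1))
    # Step 2: majority vote among the k nearest samples.
    nearest = labels[np.argsort(dists)[:k]]
    classes, counts = np.unique(nearest, return_counts=True)
    return classes[np.argmax(counts)]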
III. Implementation Code and Results
1. Implementation code:
The code randomly generates two different datasets: normal consists of two normally distributed clusters, and ring consists of a normally distributed cluster surrounded by a ring-shaped distribution.
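The original listing is not reproduced here; below is a minimal sketch of the data generation just described (the sizes, means, and radii are assumptions):

import numpy as np

def make_normal(n=200):
    # normal: two Gaussian clusters with different means.
    a = np.random.randn(n // 2, 2)
    b = np.random.randn(n // 2, 2) + [4, 4]
    return np.vstack([a, b]), np.hstack([np.zeros(n // 2), np.ones(n // 2)])

def make_ring(n=200):
    # ring: a Gaussian cluster surrounded by a ring-shaped class.
    inner = 0.5 * np.random.randn(n // 2, 2)
    theta = 2 * np.pi * np.random.rand(n // 2)
    radius = 3 + 0.3 * np.random.randn(n // 2)
    outer = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    return np.vstack([inner, outer]), np.hstack([np.zeros(n // 2), np.ones(n // 2)])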
2. Experimental results:
With n = 200 and k = 3:
With n = 200 and k = 9:
In this experiment, the smaller k is, the more accurately the data points are classified.
Dense SIFT Principles
I. Introduction to Dense SIFT
The traditional SIFT algorithm, i.e. sparse SIFT, processes the whole image and produces a set of keypoints; it does not characterize the feature differences between classes well and falls short of what classification requires. Dense SIFT instead divides the input image into blocks and runs the SIFT computation on each block, and through its adjustable parameters it can be matched to the feature-representation needs of different classification tasks.
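In practice, the difference is where descriptors are placed: dense SIFT samples them on a regular grid instead of at detected keypoints. A minimal sketch of that grid sampling (gridSpacing and patchSize mirror the extractor parameters used in the code below; the values are assumptions):

import numpy as np

def dense_grid(height, width, gridSpacing=8, patchSize=16):
    # Top-left corner of every patch, one sample every gridSpacing pixels,
    # keeping each patchSize x patchSize patch inside the image.
    ys = np.arange(0, height - patchSize + 1, gridSpacing)
    xs = np.arange(0, width - patchSize + 1, gridSpacing)
    return [(y, x) for y in ys for x in xs]

print(len(dense_grid(256, 256)))  # descriptor count for a 256 x 256 image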
II. Experimental Code and Results
import numpy as np
from scipy import signal
from matplotlib import pyplot
import matplotlib
Nangles = 8
Nbins = 4
Nsamples = Nbins**2
alpha = 9.0
angles = np.array(range(Nangles)) * 2.0 * np.pi / Nangles
def gen_dgauss(sigma):
    '''
    Generates the derivative of a Gaussian filter in both the X and Y
    directions.
    '''
    fwid = int(2 * np.ceil(sigma))  # was np.int, which is removed in modern NumPy
    G = np.array(range(-fwid, fwid + 1))**2
    G = G.reshape((G.size, 1)) + G
    G = np.exp(-G / 2.0 / sigma / sigma)
    G /= np.sum(G)
    GH, GW = np.gradient(G)
    GH *= 2.0 / np.sum(np.abs(GH))
    GW *= 2.0 / np.sum(np.abs(GW))
    return GH, GW
class DsiftExtractor:
    '''
    The class that does dense SIFT feature extraction.
    Sample usage:
        extractor = DsiftExtractor(gridSpacing, patchSize, [optional params])
        feaArr, positions = extractor.process_image(image)
    '''
    def __init__(self, gridSpacing, patchSize,
                 nrml_thres=1.0,
                 sigma_edge=0.8,
                 sift_thres=0.2):
        '''
        gridSpacing: the spacing for sampling dense descriptors
        patchSize: the size of each SIFT patch
        nrml_thres: low-contrast normalization threshold
        sigma_edge: the standard deviation of the Gaussian smoothing
            applied before computing the gradient
        sift_thres: SIFT thresholding (0.2 works well based on
            Lowe's SIFT paper)
        '''
        self.gS = gridSpacing
        self.pS = patchSize
        self.nrml_thres = nrml_thres
        self.sigma = sigma_edge
        self.sift_thres = sift_thres
        # Compute the weight contribution map.
        sample_res = self.pS / np.double(Nbins)
        sample_p = np.array(range(self.pS))
        sample_ph, sample_pw = np.meshgrid(sample_p, sample_p)
        sample_ph.resize(sample_ph.size)
        sample_pw.resize(sample_pw.size)
        bincenter = np.array(range(1, Nbins * 2, 2)) / 2.0 / Nbins * self.pS - 0.5
        bincenter_h, bincenter_w = np.meshgrid(bincenter, bincenter)
        bincenter_h.resize((bincenter_h.size, 1))
        bincenter_w.resize((bincenter_w.size, 1))
        dist_ph = abs(sample_ph - bincenter_h)
        dist_pw = abs(sample_pw - bincenter_w)
        weights_h = dist_ph / sample_res
        weights_w = dist_pw / sample_res
        # Bilinear falloff: a pixel contributes to a bin only if it lies
        # within one bin width of that bin's center.
        weights_h = (1 - weights_h) * (weights_h <= 1)
        weights_w = (1 - weights_w) * (weights_w <= 1)
        # weights is the contribution of each pixel to the corresponding bin center.
        self.weights = weights_h * weights_w
        #pyplot.imshow(self.weights)
        #pyplot.show()
    # (The process_image and related methods of the original dsift.py are
    # omitted from this excerpt.)
class SingleSiftExtractor(DsiftExtractor):
    '''
    A simple wrapper class that does feature extraction, treating
    the whole image as a single local patch.
    '''
    def __init__(self, patchSize,
                 nrml_thres=1.0,
                 sigma_edge=0.8,
                 sift_thres=0.2):
        # Simply call the superclass __init__ with a large gridSpacing.
        DsiftExtractor.__init__(self, patchSize, patchSize, nrml_thres, sigma_edge, sift_thres)
if __name__ == '__main__':
    # Ignore this. I only use this for testing purposes...
    extractor = DsiftExtractor(8, 16, 1)
    # scipy.misc.imread has been removed from SciPy; pyplot.imread is used instead.
    image = pyplot.imread('C:/Users/qgl/Desktop/articles/test1.png')
    image = np.mean(np.double(image), axis=2)
    feaArr, positions = extractor.process_image(image)
    #pyplot.hist(feaArr.flatten(), bins=100)
    #pyplot.imshow(feaArr[:256])
    #pyplot.plot(np.sum(feaArr, axis=0))
    pyplot.imshow(feaArr[np.random.permutation(feaArr.shape[0])[:256]])
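A note on the test above: with gridSpacing 8 and patchSize 16, and with the module constants Nangles = 8 and Nbins = 4, each sampled patch should yield an 8 × 4² = 128-dimensional descriptor, so feaArr holds one 128-dimensional row per grid position (this assumes the omitted process_image method follows the original dsift.py).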
Gesture Recognition
I. An Overview of Gesture Recognition
Gesture recognition technology, ordered from simple and coarse to complex and fine-grained, can be divided roughly into three levels: 2D hand-shape recognition, 2D gesture recognition, and 3D gesture recognition.
Before discussing gesture recognition in detail, it helps to understand the difference between 2D and 3D. 2D is a flat space: an object's position in it can be given by an (x, y) coordinate pair, like the position of a painting on a wall. 3D adds "depth" (a z coordinate), which 2D lacks. This "depth" is not depth in the everyday sense; it means distance along the line of sight, perhaps best understood as how far something is from the eye. Think of a goldfish in a tank: it can swim up, down, left, and right in front of you, but it can also move closer to you or farther away.
The first two techniques are entirely 2D: they need only 2D input with no depth information. Just as an ordinary photograph carries 2D information, a single camera's 2D image serves as input; computer vision techniques then analyze the image, extract information, and recognize the gesture.
The third technique is 3D. The fundamental difference between 3D and 2D gesture recognition is that 3D recognition requires input containing depth information, which makes it considerably more complex than 2D recognition in both hardware and software. For simple operations, such as pausing or resuming video playback, 2D gestures suffice; but for complex human-computer interaction, such as gaming or VR (virtual reality) applications, 3D gestures are by far the better choice.
II. Gesture Recognition Code and Results
The MATLAB function below, Image2HierarchicalGrouping from Jasper Uijlings' Selective Search code (2013), builds hierarchical groupings of image regions and returns their bounding boxes, which are used as candidate regions.
function [boxes blobIndIm blobBoxes hierarchy priority] = Image2HierarchicalGrouping(im, sigma, k, minSize, colourType, functionHandles)
% function [boxes blobIndIm blobBoxes hierarchy] = Image2HierarchicalGrouping
% (im, sigma, k, minSize, colourType, functionHandles)
%
% Creates hierarchical grouping from an image
%
% im: Image
% sigma (= 0.8): Smoothing for initial segmentation (Felzenszwalb 2004)
% k (= 100): Threshold for initial segmentation
% minSize (= 100): Minimum size of segments for initial segmentation
% colourType: ColourType in which to do grouping (see Image2ColourSpace)
% functionHandles: Similarity functions which are called. Function
% creates as many hierarchies as there are functionHandles
%
% boxes: N x 4 array with boxes of all hierarchical groupings
% blobIndIm: Index image with the initial segmentation
% blobBoxes: Boxes belonging to the indices in blobIndIm
% hierarchy: M x 1 cell array with hierarchies. M =
% length(functionHandles)
%
% Jasper Uijlings - 2013
% Change colour space
[colourIm imageToSegment] = Image2ColourSpace(im, colourType);
% Get initial segmentation, boxes, and neighbouring blobs
[blobIndIm blobBoxes neighbours] = mexFelzenSegmentIndex(imageToSegment, sigma, k, minSize);
numBlobs = size(blobBoxes,1);
% Skip hierarchical grouping if segmentation results in single region only
if numBlobs == 1
warning('Oversegmentation results in a single region only');
boxes = blobBoxes;
hierarchy = [];
priority = 1; % priority is legacy
return;
end
%%% Calculate histograms and sizes as prerequisite for grouping procedure
% Get colour histogram
[colourHist blobSizes] = BlobStructColourHist(blobIndIm, colourIm);
% Get texture histogram
textureHist = BlobStructTextureHist(blobIndIm, colourIm);
% textureHist = BlobStructTextureHistLBP(blobIndIm, colourIm);
% Allocate memory for complete hierarchy.
blobStruct.colourHist = zeros(size(colourHist,2), numBlobs * 2 - 1);
blobStruct.textureHist = zeros(size(textureHist,2), numBlobs * 2 - 1);
blobStruct.size = zeros(numBlobs * 2 -1, 1);
blobStruct.boxes = zeros(numBlobs * 2 - 1, 4);
% Insert calculated histograms, sizes, and boxes
blobStruct.colourHist(:,1:numBlobs) = colourHist';
blobStruct.textureHist(:,1:numBlobs) = textureHist';
blobStruct.size(1:numBlobs) = blobSizes ./ 3;
blobStruct.boxes(1:numBlobs,:) = blobBoxes;
blobStruct.imSize = size(im,1) * size(im,2);
%%% If you want to use original blobs in similarity functions, uncomment
%%% these lines.
% blobStruct.blobs = cell(numBlobs * 2 - 1, 1);
% initialBlobs = SegmentIndices2Blobs(blobIndIm, blobBoxes);
% blobStruct.blobs(1:numBlobs) = initialBlobs;
% Loop over all merging strategies. Perform them one by one.
boxes = cell(1, length(functionHandles)+1);
priority = cell(1, length(functionHandles) + 1);
hierarchy = cell(1, length(functionHandles));
for i=1:length(functionHandles)
[boxes{i} hierarchy{i} blobStructT mergeThreshold] = BlobStruct2HierarchicalGrouping(blobStruct, neighbours, numBlobs, functionHandles{i});
boxes{i} = boxes{i}(numBlobs+1:end,:);
priority{i} = (size(boxes{i}, 1):-1:1)';
end
% Also save the initial boxes
i = i+1;
boxes{i} = blobBoxes;
priority{i} = ones(size(boxes{i}, 1), 1) * (size(boxes{1}, 1)+1);
% Concatenate boxes and priorities resulting from the different merging
% strategies
boxes = cat(1, boxes{:});
priority = cat(1, priority{:});
[priority ids] = sort(priority, 'ascend');
boxes = boxes(ids,:); % reorder boxes to match sorted priorities (restored from the original Selective Search code)