Implementing the k-means clustering algorithm: 3D pixel-level segmentation
I recently looked into the k-means algorithm. Most examples online use it to segment 2D images, so I wanted to try applying it to the segmentation of 3D images.
Let's first look at 2D image segmentation.
The idea is to treat the value of each pixel in the image as a feature. For a colour image, each pixel contributes three feature values, one per colour channel.
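To make that idea concrete, here is a minimal sketch of clustering every pixel of a colour image directly (the file name photo.jpg is only a placeholder, not from this post). The code I actually used, shown below, averages over small regions instead, which keeps the feature matrix small.
import numpy as np
from PIL import Image
from scipy.cluster.vq import kmeans, vq

im = np.array(Image.open('photo.jpg'))           # (H, W, 3) for a colour image
features = im.reshape(-1, 3).astype('float32')   # one 3-d feature vector per pixel
centroids, _ = kmeans(features, 2)               # two cluster centres
labels, _ = vq(features, centroids)              # assign every pixel to a centre
label_img = labels.reshape(im.shape[:2])         # label map with the image's shape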
Code:
from scipy.cluster.vq import *
from scipy.misc import imresize
from pylab import *
from PIL import Image
import pdb

steps = 40  # the image is divided into a steps*steps grid of regions
infile = 'E:\dataset\ORI_dataset\ADNI-slice3\cMCI\\278_brain\\46_278_brain.jpg'
im = array(Image.open(infile))
dx = int(im.shape[0] / steps)
dy = int(im.shape[1] / steps)

# compute a colour/intensity feature for each region
features = []
#pdb.set_trace()
for x in range(steps):
    for y in range(steps):
        R = mean(im[x * dx:(x + 1) * dx, y * dy:(y + 1) * dy])  # grayscale image: one mean per region
        # for a colour image, take the mean of each of the three channels instead:
        #R = mean(im[x * dx:(x + 1) * dx, y * dy:(y + 1) * dy, 0])
        #G = mean(im[x * dx:(x + 1) * dx, y * dy:(y + 1) * dy, 1])
        #B = mean(im[x * dx:(x + 1) * dx, y * dy:(y + 1) * dy, 2])
        #features.append([R, G, B])
        features.append(R)
features = array(features, 'f')  # make into an array

# cluster
centroids, variance = kmeans(features, 2)
code, distance = vq(features, centroids)

# create an image with the cluster labels
codeim = code.reshape(steps, steps)
codeim = imresize(codeim, im.shape[:2], 'nearest')

figure()
ax1 = subplot(121)
ax1.set_title('Image')
axis('off')
imshow(im)
ax2 = subplot(122)
ax2.set_title('Image after clustering')
axis('off')
imshow(codeim)
show()
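One caveat about the code above: scipy.misc.imresize was removed in SciPy 1.3, so on a recent SciPy the nearest-neighbour upscaling of the label map has to be done another way. A sketch of one possible replacement, using PIL, which is already imported:
import numpy as np
from PIL import Image

# nearest-neighbour resize of the label map back to the original image size
codeim = np.array(Image.fromarray(codeim.astype('uint8'))
                  .resize((im.shape[1], im.shape[0]), Image.NEAREST))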
Result:
Our real goal is to segment 3D MRI volumes. The principle is much the same as in 2D: the voxel intensities themselves are used as the features for clustering. A minimal whole-volume sketch of this idea is given right below; the slice-by-slice version I actually used follows it.
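In this sketch (the file names volume.nii and labels.nii are placeholders), every voxel intensity becomes a one-dimensional feature and the label volume keeps the input's shape and axis order:
import numpy as np
import nibabel as nib
from scipy.cluster.vq import kmeans, vq

img = nib.load('volume.nii')                        # hypothetical input file
vol = np.asarray(img.get_fdata(), dtype='float32')  # e.g. shape (182, 218, 182)
features = vol.reshape(-1, 1)                       # one intensity value per voxel
centroids, _ = kmeans(features, 2)                  # two clusters
labels, _ = vq(features, centroids)                 # label every voxel
labelvol = labels.reshape(vol.shape)                # same axis order as the input
nib.save(nib.Nifti1Image(labelvol.astype('int16'), img.affine), 'labels.nii')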
from scipy.cluster.vq import *
from scipy.misc import imresize
from pylab import *
from PIL import Image
import pdb
import nibabel as nib
from skimage import transform

steps = 182  # each slice is treated as a steps*steps grid of pixels
infile = 'E:\dataset\ORI_dataset\ADNI-teacher\AD\AD_001.nii\\AD_001.nii'
img = nib.load(infile)
ref_affine = img.affine
img = img.get_data()
img = np.array(img) / 1
#img = transform.resize(img, (32, 40, 32), mode='constant')
dx = int(img.shape[0] / steps)
dy = int(img.shape[2] / steps)

# compute an intensity feature for every voxel, slice by slice
all = []
#pdb.set_trace()
features = []
for i in range(218):           # 218 slices along the second axis
    im = img[:, i, :]          # take one 182x182 slice
    dic2 = []
    for a in im:               # cast the slice values to int, row by row
        dic1 = []
        for b in a:
            aa = int(b)
            dic1.append(aa)
        dic2.append(dic1)
    im = np.array(dic2)
    for x in range(steps):
        for y in range(steps):
            R = im[x, y]       # one intensity value per voxel
            features.append([R])
features = array(features, 'f')  # make into an array

# cluster
centroids, variance = kmeans(features, 2)
code, distance = vq(features, centroids)

# create a volume with the cluster labels
codeim = code.reshape(218, steps, steps)
#codeim = codeim.reshape(steps, 40, steps)
heng5 = nib.Nifti1Image(codeim, ref_affine)
nib.save(heng5, 'E:\dataset\ORI_dataset\ADNI-teacher\AD\AD_001.nii\\112.nii')
Be very careful about where the voxels end up when the labels are reassembled. My volume has shape (182, 218, 182), and the way I read the voxels amounts to stacking 218 images of size (182, 182): each pass traverses one full slice, 218 times in all, so the final reshape has to be (218, 182, 182). If you reshape to (182, 218, 182) instead, the 3D view of the result looks like this:
Correct result: the original volume is on the right and the segmented volume on the left. Only the voxel positions have changed, because the axis order of the output differs from the input; it can be restored with the small sketch below.
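If you want the segmentation to line up with the source volume, the axes can be put back in the original (182, 218, 182) order before saving. A small sketch, continuing from the variables in the script above (the output file name is just a placeholder):
codeim = code.reshape(218, steps, steps)   # stacked as (slice index, x, z)
codeim = np.transpose(codeim, (1, 0, 2))   # restore (x, slice index, z) = (182, 218, 182)
nib.save(nib.Nifti1Image(codeim.astype('int16'), ref_affine), '112_reordered.nii')  # placeholder name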