Multi-Target Recognition and Locking Based on the SURF Algorithm and Integral Images


1. Integral images

2. Approximate computation of the Hessian matrix determinant

3. Scale-space representation

4. Feature point description and matching

 

SURF algorithm article (Chinese): https://pan.baidu.com/s/1VJtG_tM0qsDzvv9V5Qneeg  access code: tne3

Original SURF paper (English): https://pan.baidu.com/s/1bTyCk8HkSQtn6IShxngwGA  access code: 6kxw

The source code is posted after the article and will be updated continuously....

Everything is written in a C-like style: no C pointers, no classes, no DLLs, which makes it easy to port to any microcontroller (Arduino series, STM series, etc.).


September 7

1. Recognition time is finally below 0.5 s, so the method is now practical. The algorithm performs remarkably well: the Hessian responses are organized into a layered triangular-volume structure, and with prediction of the direction of pixel change the computation time is extremely short.

2. Supports recognition of high-resolution images of 2048×1080 and above at very high speed, so it can also serve as a recognition engine for image streams.

September 10: the algorithm's running time is now below 0.3 s.

I. Integral Images

    The integral image was introduced by Viola and Jones. The value at any point (i, j) of the integral image is the sum of the gray values over the rectangle spanning from the top-left corner of the original image to (i, j), expressed by the formula below:

    ii(i, j) = Σ_{i′ ≤ i, j′ ≤ j} p(i′, j′)

With the integral image, the sum over any rectangular region of the original image only requires the integral-image values at the region's four corners: two subtractions and one addition, independent of the region's size:

    Σ = ii(A) − ii(B) − ii(C) + ii(D)   (A, B, C, D are the top-left, top-right, bottom-left, and bottom-right corners)

Code example: call IntegralImage.FromImage() to generate the integral image.

Parameter 1: a Bitmap variable

bitmap1 = Bitmap.FromFile(fName) as Bitmap;

iimg1 = IntegralImage.FromImage(bitmap1); // build the integral image of the live frame


 

        // Requires: using System; using System.Drawing; using System.Drawing.Imaging;
        // Grayscale conversion weights (ITU-R BT.601)
        const float cR = .2989f;
        const float cG = .5870f;
        const float cB = .1140f;

        float[,] Matrix1;
        public int Width, Height;

        public float this[int y, int x]
        {
            get { return Matrix1[y, x]; }
            set { Matrix1[y, x] = value; }
        }

        private IntegralImage(int width, int height)
        {
            this.Width = width;
            this.Height = height;
            this.Matrix1 = new float[height, width];
        }

        public static IntegralImage FromImage(Bitmap image)   // convert a bitmap to an integral image
        {
            IntegralImage pic = new IntegralImage(image.Width, image.Height);
            float rowsum = 0;
            BitmapData dataIn = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);

            unsafe    // a pointer is used here for speed during testing; you can avoid it by using Bitmap.GetPixel(x, y) instead
            {
                byte* pIn = (byte*)(dataIn.Scan0.ToPointer());
                for (int y = 0; y < dataIn.Height; y++)
                {
                    rowsum = 0;
                    for (int x = 0; x < dataIn.Width; x++)
                    {
                        // byte order within a 24bpp pixel is B, G, R
                        int cb = (byte)(pIn[0]);
                        int cg = (byte)(pIn[1]);
                        int cr = (byte)(pIn[2]);

                        rowsum += (cR * cr + cG * cg + cB * cb) / 255f;
                        // integral value = running row sum + value in the row above
                        if (y == 0)
                            pic[0, x] = rowsum;
                        else
                            pic[y, x] = rowsum + pic[y - 1, x];

                        pIn += 3;
                    }
                    pIn += dataIn.Stride - dataIn.Width * 3;   // skip row padding
                }
            }
            image.UnlockBits(dataIn);
            return pic;
        }

        public float BoxIntegral(int row, int col, int rows, int cols) // sum of gray values over the given box
        {
            // The subtraction by one for row/col is because row/col is inclusive.
            int r1 = Math.Min(row, Height) - 1;
            int c1 = Math.Min(col, Width) - 1;
            int r2 = Math.Min(row + rows, Height) - 1;
            int c2 = Math.Min(col + cols, Width) - 1;

            float A = 0, B = 0, C = 0, D = 0;
            if (r1 >= 0 && c1 >= 0) A = Matrix1[r1, c1];
            if (r1 >= 0 && c2 >= 0) B = Matrix1[r1, c2];
            if (r2 >= 0 && c1 >= 0) C = Matrix1[r2, c1];
            if (r2 >= 0 && c2 >= 0) D = Matrix1[r2, c2];

            return Math.Max(0, A - B - C + D);
        }

        /// <summary>
        /// Get Haar wavelet X response
        /// </summary>
        public float HaarX(int row, int column, int size)
        {
            return BoxIntegral(row - size / 2, column, size, size / 2)
              - 1 * BoxIntegral(row - size / 2, column - size / 2, size, size / 2);
        }

        /// <summary>
        /// Get Haar wavelet Y response
        /// </summary>
        public float HaarY(int row, int column, int size)
        {
            return BoxIntegral(row, column - size / 2, size / 2, size)
              - 1 * BoxIntegral(row - size / 2, column - size / 2, size / 2, size);
        }
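
As a quick sanity check of the class above (a minimal sketch I added, not part of the released code; the bitmap is created in memory rather than loaded from disk): for a pure-white 24-bpp image every pixel converts to a gray value of about 1.0, so BoxIntegral over a 20×20 box should return roughly 400.

        // Minimal sanity check for IntegralImage (illustration only)
        Bitmap bmp = new Bitmap(100, 100);
        using (Graphics g = Graphics.FromImage(bmp))
            g.Clear(Color.White);                      // every gray value is ~1.0

        IntegralImage ii = IntegralImage.FromImage(bmp);
        float sum = ii.BoxIntegral(10, 10, 20, 20);    // 20x20 box starting at (row 10, col 10)
        Console.WriteLine(sum);                        // expected: about 400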

II. Hessian Matrix Detector

1. Blob detection

    A blob is a region whose color or gray level differs from its surroundings.

    Convolve a one-dimensional signal with the second derivative of a Gaussian (a Laplacian-of-Gaussian filter) and zero-crossings appear at the signal's edges, as shown below:

[Figure: zero-crossings of a 1-D signal convolved with the second derivative of a Gaussian]

The response of the Laplacian-of-Gaussian (LoG) detector measures how similar the image is to the filter. The figure below shows the 3-D surface and gray-level rendering of an image's LoG response; the response reaches its maximum when the size of a blob in the image matches the shape of the LoG function.

[Figure: 3-D surface and gray-level rendering of a Laplacian-of-Gaussian response]

The Hessian matrix uses these second-order derivatives for blob detection:

    H(x, σ) = [ Lxx(x, σ)   Lxy(x, σ) ]
              [ Lxy(x, σ)   Lyy(x, σ) ]

2. The Hessian matrix and box filters

    For an image, the entries of the Hessian matrix are the second-order partial derivatives Lxx, Lyy, Lxy of the Gaussian-smoothed image:

[Figure: the Hessian matrix written in terms of the second-order partial derivatives of the smoothed image]

(The second derivative is discretized as d²L(x)/dx² = (L(x+1) − L(x)) − (L(x) − L(x−1)) = L(x+1) + L(x−1) − 2·L(x), where L(x) = g(h(x)): h(x) is the gray value of the original image and L(x) is h(x) after Gaussian filtering.)
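
As a small illustration of that discretization (my sketch, not part of the original code): after smoothing, the second derivative of a 1-D signal is simply a convolution with the kernel [1, −2, 1].

        // Discrete second derivative of a 1-D signal: L(x+1) + L(x-1) - 2*L(x),
        // i.e. convolution with the kernel [1, -2, 1]. Illustration only.
        static float[] SecondDerivative(float[] L)
        {
            float[] d2 = new float[L.Length];
            for (int x = 1; x < L.Length - 1; x++)
                d2[x] = L[x + 1] + L[x - 1] - 2f * L[x];   // borders left at 0
            return d2;
        }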

 

Their 3-D surfaces and gray-level renderings are shown below:

[Figure: 3-D and gray-level views of the Gaussian second-derivative kernels and their box-filter approximations]

This lets us replace the convolution of the template with the image by box-filter operations over the integral image, as shown below:

[Figure: box filters approximating the Gaussian second-order derivative templates]

3. Simplifying the Hessian determinant

When the image is filtered with second-order Gaussian derivatives at σ = 1.2, using a 9×9 template as the smallest scale, the Hessian determinant can be simplified as follows:

    det(H_approx) ≈ C · (Dxx·Dyy − (w·Dxy)²)

The constant C does not affect the comparison of extrema, so the final simplified form is the one below; this is also where the Hessian response formula in the SURF paper comes from:

    det(H_approx) = Dxx·Dyy − (0.9·Dxy)²

In addition, the responses must be normalized by the filter size so that the Frobenius norm is the same for filters of any size. The 0.9² factor is the relative weight w of the filter responses; it balances the terms of the Hessian determinant and keeps the box-filter approximation consistent with the true Gaussian kernels. Strictly speaking, w differs for each σ and the corresponding template size, but for simplicity it can be treated as a single constant.

Using this approximate Hessian determinant as the blob response at a point x and sweeping over all pixels yields the blob-response image at one scale. Repeating this with templates of different sizes produces a pyramid of multi-scale blob responses, over which the extrema of the blob response can be searched.

Code example: function buildResponseLayer(ResponseLayer rl, IntegralImage img)

Parameter 1: the response layer; parameter 2: the integral-image variable

public void buildResponseLayer(ResponseLayer rl, IntegralImage img)  // Hessian response layer
        {
            int step = rl.step;                      // step size for this filter
            int b = (rl.filter - 1) / 2 + 1;         // border for this filter
            int l = rl.filter / 3;                   // lobe for this filter (filter size / 3)
            int w = rl.filter;                       // filter size
            float inverse_area = 1f / (w * w);       // normalisation factor
            float Dxx, Dyy, Dxy;

            for (int r, c, ar = 0, index = 0; ar < rl.height; ++ar)
            {
                for (int ac = 0; ac < rl.width; ++ac, index++)
                {
                    // get the image coordinates
                    r = ar * step;
                    c = ac * step;

                    // Compute response components
                    Dxx = img.BoxIntegral(r - l + 1, c - b, 2 * l - 1, w)
                        - img.BoxIntegral(r - l + 1, c - l / 2, 2 * l - 1, l) * 3;
                    Dyy = img.BoxIntegral(r - b, c - l + 1, w, 2 * l - 1)
                        - img.BoxIntegral(r - l / 2, c - l + 1, l, 2 * l - 1) * 3;
                    Dxy = +img.BoxIntegral(r - l, c + 1, l, l)
                          + img.BoxIntegral(r + 1, c - l, l, l)
                          - img.BoxIntegral(r - l, c - l, l, l)
                          - img.BoxIntegral(r + 1, c + 1, l, l);

                    // Normalise the filter responses with respect to their size
                    Dxx *= inverse_area;
                    Dyy *= inverse_area;
                    Dxy *= inverse_area;

                    // Get the determinant of hessian response & laplacian sign
                    rl.responses[index] = (Dxx * Dyy - 0.81f * Dxy * Dxy);
                    rl.laplacian[index] = (byte)(Dxx + Dyy >= 0 ? 1 : 0);
                }
            }
        }

III. 3D Non-Maximum Suppression

1. Building the scale pyramid

    In SURF, box-filter templates of steadily increasing size are applied to the integral image to obtain Hessian responses, and 3D non-maximum suppression is then applied to the response images to extract blobs at different scales. There are two kinds of pyramid; SURF's pyramid is the second kind:

[Figure: classic pyramid that downsamples the image vs. SURF's pyramid that grows the filter over a fixed image]

SURF uses a 9×9 filter as the starting filter; subsequent filter sizes can be computed from the formula below:

 

    FilterSize = 3 × (2^octave × interval + 1)

Both octave and interval start at 1 in this formula; that is, for the first octave's first layer, octave = 1 and interval = 1.
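
A quick sketch to check this formula against the filter sizes used later in buildResponseMap (illustration only):

        // FilterSize = 3 * (2^octave * interval + 1), octave and interval starting at 1.
        // Octave 1: 9 15 21 27   Octave 2: 15 27 39 51   Octave 3: 27 51 75 99
        for (int octave = 1; octave <= 3; octave++)
        {
            for (int interval = 1; interval <= 4; interval++)
                Console.Write(3 * ((1 << octave) * interval + 1) + " ");
            Console.WriteLine();
        }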

    The reasoning behind defining the filter sizes this way is as follows:

[Figure: layout of the box-filter lobes, which forces the filter size to grow in steps of 6 so that a central pixel is preserved]

The relation between the filter response length, the filter size, the octave index o, the layer index s, and the scale σ is:

    σ_approx = 1.2 × FilterSize / 9

The template sequences of the other octaves are produced the same way, doubling the filter-size increment each time (6, 12, 24, 48). This gives the second octave's filter sizes 15, 27, 39, 51, and the third octave's 27, 51, 75, 99. If the original image is still larger than the corresponding filter, the scale-space analysis can continue with a fourth octave, whose template sizes are 51, 99, 147, 195. The figure below shows the filter sizes from the first octave to the third:

[Figure: filter size progression for octaves 1-3]

In typical scale analysis, the number of detected blobs decays rapidly as the scale grows, so 3-4 octaves are generally enough. To reduce the computation and speed things up, the sampling interval during filtering can also be set to 2.

    2. To localize interest points in the image and across scales, a 3×3×3-neighborhood non-maximum suppression is used: all values below a preset threshold are discarded, and raising the threshold reduces the number of detected feature points until only the strongest remain. Each candidate point in the middle scale layer is compared with its 8 neighbors in its own layer plus the 9 points in the layer above and the 9 in the layer below, 26 points in total; the pixel marked 'x' in the figure is declared a feature point of the region if its response exceeds all of them.

[Figure: 3×3×3 neighborhood used for non-maximum suppression]

3. Accurate localization of local maxima

    Feature points are refined to sub-pixel accuracy by 3-D interpolation, and points whose values fall below a threshold are discarded.
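
Concretely (the standard interpolation step, stated here for reference; it is exactly what interpolateExtremum below computes): with D the response and H its 3×3 Hessian over (x, y, σ), the sub-pixel offset is

    x̂ = −H⁻¹ · ∇D,   where ∇D = ( ∂D/∂x, ∂D/∂y, ∂D/∂σ )ᵀ

and a candidate is kept only when every component of x̂ is smaller than 0.5, i.e. the true extremum lies within half a sampling step of the candidate.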

Code example: first define the two key structures, then build each layer of the pyramid

public struct IPoint   // interest point
        {
            public float x, y;
            public float scale;            // scale
            public float response;
            public float orientation;      // dominant orientation
            public int laplacian;          // sign of the Laplacian
            public int descriptorLength;   // descriptor length
            public float[] descriptor;
        }

public struct ResponseLayer  // response layer
        {
            public int width, height, step, filter;    // layer width/height, step size, filter size
            public float[] responses;
            public byte[] laplacian;                   // Laplacian signs
        }

private ResponseLayer[] responseMap;  // array of response layers for the whole pyramid

        public IPoint[] ipts;  // list of interest points

public IPoint[] getIpoints(float thresh, int octaves, int init_sample, IntegralImage img) // return the feature points
        {
            // Parameter 1: response threshold; parameter 2: number of octaves;
            // parameter 3: initial sampling step; parameter 4: the integral image

            // filter index map
            int[,] filter_map = { { 0, 1, 2, 3 }, { 1, 3, 4, 5 }, { 3, 5, 6, 7 }, { 5, 7, 8, 9 }, { 7, 9, 10, 11 } };

            // Reset the list of existing ipts (a fixed-size array is used instead
            // of a List, to keep the code easy to port to microcontrollers)
            ipts = new IPoint[200];

            // Build the response map
            buildResponseMap(img, octaves, init_sample);  // generate the pyramid layers

            // Get the response layers
            ResponseLayer b, m, t;
            int szjs1 = 0; // write index into ipts

            for (int o = 0; o < octaves; ++o)
                for (int i = 0; i <= 1; ++i)
                {
                    b = responseMap[filter_map[o, i]];
                    m = responseMap[filter_map[o, i + 1]];
                    t = responseMap[filter_map[o, i + 2]];

                    // loop over middle response layer at density of the most
                    // sparse layer (always top), to find maxima across scale and space
                    for (int r = 0; r < t.height; ++r)
                    {
                        for (int c = 0; c < t.width; ++c)
                        {
                            if (isExtremum(r, c, t, m, b, thresh))
                            {
                                // interest points are localized across image and scale by
                                // 3x3x3 non-maximum suppression over the three layers
                                interpolateExtremum(r, c, t, m, b, szjs1); // refine and store the extremum
                                szjs1 = szjs1 + 1;
                            }
                        }
                    }
                }
            return ipts;
        }

void buildResponseMap(IntegralImage img, int octaves, int init_sample)  // build the pyramid layers
        {
            // Calculate responses for the first 5 octaves:
            // Oct1: 9,  15, 21, 27
            // Oct2: 15, 27, 39, 51
            // Oct3: 27, 51, 75, 99
            // Oct4: 51, 99, 147,195
            // Oct5: 99, 195,291,387

            // Discard any existing response layers: reallocating the array
            // answers the original "should the array be cleared?" question
            responseMap = new ResponseLayer[12];

            // Get image attributes
            int w = (img.Width / init_sample);
            int h = (img.Height / init_sample);
            int s = (init_sample);

            // Calculate approximated determinant of hessian values
            if (octaves >= 1)
            {
                responseMap[0] = (ResponseLayer1(w, h, s, 9));
                responseMap[1] = (ResponseLayer1(w, h, s, 15));
                responseMap[2] = (ResponseLayer1(w, h, s, 21));
                responseMap[3] = (ResponseLayer1(w, h, s, 27));
            }

            if (octaves >= 2)
            {
                responseMap[4] = (ResponseLayer1(w / 2, h / 2, s * 2, 39));
                responseMap[5] = (ResponseLayer1(w / 2, h / 2, s * 2, 51));
            }

            if (octaves >= 3)
            {
                responseMap[6] = (ResponseLayer1(w / 4, h / 4, s * 4, 75));
                responseMap[7] = (ResponseLayer1(w / 4, h / 4, s * 4, 99));
            }

            if (octaves >= 4)
            {
                responseMap[8] = (ResponseLayer1(w / 8, h / 8, s * 8, 147));
                responseMap[9] = (ResponseLayer1(w / 8, h / 8, s * 8, 195));
            }

            if (octaves >= 5)
            {
                responseMap[10] = (ResponseLayer1(w / 16, h / 16, s * 16, 291));
                responseMap[11] = (ResponseLayer1(w / 16, h / 16, s * 16, 387));
            }

            // Extract responses from the image (skip layers not allocated above)
            for (int i = 0; i < responseMap.Length; ++i)
            {
                if (responseMap[i].responses != null)
                    buildResponseLayer(responseMap[i], img);
            }
        }

public ResponseLayer ResponseLayer1(int width, int height, int step, int filter)
        {
            ResponseLayer a;
            a.width = width;
            a.height = height;
            a.step = step;
            a.filter = filter;
            a.responses = new float[width * height];
            a.laplacian = new byte[width * height];
            return a;
        }

public void interpolateExtremum(int r, int c, ResponseLayer t, ResponseLayer m, ResponseLayer b, int szjs)  // refine an extremum to sub-pixel accuracy
        {
            // Sub-pixel offset of the true extremum: Of = -H^(-1) * D.
            // The commented-out version used a matrix library; below, the 3x3
            // inverse is computed by hand via the adjugate so that no external
            // classes are needed.
            /*  Matrix D = Matrix.Create(BuildDerivative(r, c, t, m, b));
                Matrix H = Matrix.Create(BuildHessian(r, c, t, m, b));
                Matrix Hi = H.Inverse();
                Matrix Of = -1 * Hi * D;  */

            double[,] D1 = BuildDerivative(r, c, t, m, b);
            double[,] H1 = BuildHessian(r, c, t, m, b);

            // determinant of H1, by cofactor expansion along the first row
            double det = H1[0, 0] * (H1[1, 1] * H1[2, 2] - H1[1, 2] * H1[2, 1])
                       - H1[0, 1] * (H1[1, 0] * H1[2, 2] - H1[1, 2] * H1[2, 0])
                       + H1[0, 2] * (H1[1, 0] * H1[2, 1] - H1[1, 1] * H1[2, 0]);
            if (det == 0) return;   // singular Hessian: cannot interpolate

            // inverse of H1 via the adjugate: Hi = adj(H1) / det
            double[,] Hi = new double[3, 3];
            Hi[0, 0] = (H1[1, 1] * H1[2, 2] - H1[1, 2] * H1[2, 1]) / det;
            Hi[0, 1] = (H1[0, 2] * H1[2, 1] - H1[0, 1] * H1[2, 2]) / det;
            Hi[0, 2] = (H1[0, 1] * H1[1, 2] - H1[0, 2] * H1[1, 1]) / det;
            Hi[1, 0] = (H1[1, 2] * H1[2, 0] - H1[1, 0] * H1[2, 2]) / det;
            Hi[1, 1] = (H1[0, 0] * H1[2, 2] - H1[0, 2] * H1[2, 0]) / det;
            Hi[1, 2] = (H1[0, 2] * H1[1, 0] - H1[0, 0] * H1[1, 2]) / det;
            Hi[2, 0] = (H1[1, 0] * H1[2, 1] - H1[1, 1] * H1[2, 0]) / det;
            Hi[2, 1] = (H1[0, 1] * H1[2, 0] - H1[0, 0] * H1[2, 1]) / det;
            Hi[2, 2] = (H1[0, 0] * H1[1, 1] - H1[0, 1] * H1[1, 0]) / det;

            // Of = -Hi * D1 (matrix-vector product)
            double[] O = new double[3];
            for (int m1 = 0; m1 < 3; m1++)
                O[m1] = -(Hi[m1, 0] * D1[0, 0] + Hi[m1, 1] * D1[1, 0] + Hi[m1, 2] * D1[2, 0]);

            // get the step distance between filters
            int filterStep = (m.filter - b.filter);

            // If point is sufficiently close to the actual extremum
            if (Math.Abs(O[0]) < 0.5f && Math.Abs(O[1]) < 0.5f && Math.Abs(O[2]) < 0.5f)
            {
                IPoint ipt = new IPoint();
                ipt.x = (float)((c + O[0]) * t.step);
                ipt.y = (float)((r + O[1]) * t.step);
                ipt.scale = (float)((0.1333f) * (m.filter + O[2] * filterStep));   // 0.1333 = 1.2 / 9
                ipt.laplacian = (int)(getLaplacian(m, r, c, t));
                ipt.orientation = 0;                // filled in later by GetOrientation
                ipt.descriptor = new float[64];     // filled in later by GetDescriptor
                ipt.descriptorLength = 64;

                if (szjs < ipts.Length)             // guard against overflowing the fixed-size array
                    ipts[szjs] = ipt;
            }
        }


       
        public double[,] BuildDerivative(int r, int c, ResponseLayer t, ResponseLayer m, ResponseLayer b)  // 3x1 gradient (dx, dy, ds) by central differences
        {
            double dx, dy, ds;

            dx = (getResponse(m, r, c + 1, t) - getResponse(m, r, c - 1, t)) / 2f;
            dy = (getResponse(m, r + 1, c, t) - getResponse(m, r - 1, c, t)) / 2f;
            ds = (getResponse(t, r, c) - getResponse(b, r, c, t)) / 2f;

            double[,] D = { { dx }, { dy }, { ds } };
            return D;
        }


       
        public double[,] BuildHessian(int r, int c, ResponseLayer t, ResponseLayer m, ResponseLayer b)  // 3x3 Hessian over (x, y, scale) by finite differences
        {
            double v, dxx, dyy, dss, dxy, dxs, dys;

            v = getResponse(m, r, c, t);
            dxx = getResponse(m, r, c + 1, t) + getResponse(m, r, c - 1, t) - 2 * v;
            dyy = getResponse(m, r + 1, c, t) + getResponse(m, r - 1, c, t) - 2 * v;
            dss = getResponse(t, r, c) + getResponse(b, r, c, t) - 2 * v;
            dxy = (getResponse(m, r + 1, c + 1, t) - getResponse(m, r + 1, c - 1, t) -
                    getResponse(m, r - 1, c + 1, t) + getResponse(m, r - 1, c - 1, t)) / 4f;
            dxs = (getResponse(t, r, c + 1) - getResponse(t, r, c - 1) -
                    getResponse(b, r, c + 1, t) + getResponse(b, r, c - 1, t)) / 4f;
            dys = (getResponse(t, r + 1, c) - getResponse(t, r - 1, c) -
                    getResponse(b, r + 1, c, t) + getResponse(b, r - 1, c, t)) / 4f;

            double[,] H = new double[3, 3];
            H[0, 0] = dxx;
            H[0, 1] = dxy;
            H[0, 2] = dxs;
            H[1, 0] = dxy;
            H[1, 1] = dyy;
            H[1, 2] = dys;
            H[2, 0] = dxs;
            H[2, 1] = dys;
            H[2, 2] = dss;
            return H;
        }


public bool isExtremum(int r, int c, ResponseLayer t, ResponseLayer m, ResponseLayer b, float thresh)
        {
            // bounds check
            int layerBorder = (t.filter + 1) / (2 * t.step);
            if (r <= layerBorder || r >= t.height - layerBorder || c <= layerBorder || c >= t.width - layerBorder)
                return false;

            // check the candidate point in the middle layer is above thresh 
            float candidate = getResponse(m,r, c, t);
            if (candidate < thresh)
                return false;

            for (int rr = -1; rr <= 1; ++rr)
            {
                for (int cc = -1; cc <= 1; ++cc)
                {
                    // if any response in 3x3x3 is greater candidate not maximum
                    if (getResponse(t, r + rr, c + cc) >= candidate ||
                      ((rr != 0 || cc != 0) && getResponse(m,r + rr, c + cc, t) >= candidate) ||
                      getResponse(b, r + rr, c + cc, t) >= candidate)
                    {
                        return false;
                    }
                }
            }

            return true;
        }
 

 public float getResponse(ResponseLayer b, int row, int column)
        {
            return b.responses[row * b.width + column];
        }
        public float getResponse(ResponseLayer c,int row, int column, ResponseLayer src)
        {
            int scale = c.width / src.width;
            return c.responses[(scale * row) * c.width + (scale * column)];
        }

        public byte getLaplacian(ResponseLayer d ,int row, int column)
        {
            return d.laplacian[row * d.width + column];
        }

        public byte getLaplacian(ResponseLayer e,int row, int column, ResponseLayer src)
        {
            int scale = e.width / src.width;
            return e.laplacian[(scale * row) * e.width + (scale * column)];
        }

IV. Feature Point Descriptors

1. Orientation assignment

    Centered on the feature point, compute the x- and y-direction Haar wavelet responses (Haar wavelet side length 4s) of all points within a radius of 6s, where s is the scale at which the feature point was detected. The Haar templates are shown below:

[Figure: Haar wavelet templates for the x and y responses]

 

Once the Haar responses in x and y have been computed, they are weighted with a Gaussian of σ = 2s centered on the feature point; the weighted values represent the directional components in the horizontal and vertical directions.

Haar responses reflect changes of gray level, so the dominant orientation describes the direction in which the gray level changes most sharply.

    Next, a sector window with a central angle of π/3 slides around the feature point, accumulating the Haar responses dx and dy inside the window:

    (Σ_w dx, Σ_w dy): the window whose accumulated vector is longest defines the dominant orientation

The sliding sector window is illustrated below:

[Figure: the π/3 sector window sliding around the feature point]

Code example:

 public void DescribeInterestPoints(IPoint[] ipts, bool upright, bool extended, IntegralImage img) // describe the interest points
        {
            if (ipts.Length == 0) return;

            for (int i = 0; i < ipts.Length; i++)
            {
                if (extended)
                    ipts[i].descriptorLength = 128;
                else
                    ipts[i].descriptorLength = 64;

                // Assign an orientation unless operating in upright mode
                if (!upright) ipts[i].orientation = GetOrientation(ipts[i], img);

                // Extract SURF descriptor
                ipts[i] = GetDescriptor(ipts[i], upright, extended, img);
            }
        }


        // 7x7 Gaussian weight lookup table used for the Haar responses
        public float[,] gauss25 = new float[7, 7] {
      {0.02350693969273f,0.01849121369071f,0.01239503121241f,0.00708015417522f,0.00344628101733f,0.00142945847484f,0.00050524879060f},
      {0.02169964028389f,0.01706954162243f,0.01144205592615f,0.00653580605408f,0.00318131834134f,0.00131955648461f,0.00046640341759f},
      {0.01706954162243f,0.01342737701584f,0.00900063997939f,0.00514124713667f,0.00250251364222f,0.00103799989504f,0.00036688592278f},
      {0.01144205592615f,0.00900063997939f,0.00603330940534f,0.00344628101733f,0.00167748505986f,0.00069579213743f,0.00024593098864f},
      {0.00653580605408f,0.00514124713667f,0.00344628101733f,0.00196854695367f,0.00095819467066f,0.00039744277546f,0.00014047800980f},
      {0.00318131834134f,0.00250251364222f,0.00167748505986f,0.00095819467066f,0.00046640341759f,0.00019345616757f,0.00006837798818f},
      {0.00131955648461f,0.00103799989504f,0.00069579213743f,0.00039744277546f,0.00019345616757f,0.00008024231247f,0.00002836202103f}
    };

        /// <summary>
        /// Determine the dominant orientation for an interest point
        /// </summary>
public float GetOrientation(IPoint ip, IntegralImage img)  // orientation assignment
        {
            const byte Responses = 109;
            float[] resX = new float[Responses];
            float[] resY = new float[Responses];
            float[] Ang = new float[Responses];
            int idx = 0;
            int[] id = { 6, 5, 4, 3, 2, 1, 0, 1, 2, 3, 4, 5, 6 };

            // Get rounded InterestPoint data
            int X = (int)Math.Round(ip.x, 0);
            int Y = (int)Math.Round(ip.y, 0);
            int S = (int)Math.Round(ip.scale, 0);

            // calculate haar responses for points within radius of 6*scale
            for (int i = -6; i <= 6; ++i)
            {
                for (int j = -6; j <= 6; ++j)
                {
                    if (i * i + j * j < 36)
                    {
                        float gauss = gauss25[id[i + 6], id[j + 6]];
                        resX[idx] = gauss * img.HaarX(Y + j * S, X + i * S, 4 * S);
                        resY[idx] = gauss * img.HaarY(Y + j * S, X + i * S, 4 * S);
                        Ang[idx] = (float)GetAngle(resX[idx], resY[idx]);
                        ++idx;
                    }
                }
            }

            // calculate the dominant direction 
            float sumX, sumY, max = 0, orientation = 0;
            float ang1, ang2;
            float pi = (float)Math.PI;

            // loop slides pi/3 window around feature point
            for (ang1 = 0; ang1 < 2 * pi; ang1 += 0.15f)
            {
                ang2 = (ang1 + pi / 3f > 2 * pi ? ang1 - 5 * pi / 3f : ang1 + pi / 3f);
                sumX = sumY = 0;

                for (int k = 0; k < Responses; ++k)
                {
                    // determine whether the point is within the window
                    if (ang1 < ang2 && ang1 < Ang[k] && Ang[k] < ang2)
                    {
                        sumX += resX[k];
                        sumY += resY[k];
                    }
                    else if (ang2 < ang1 &&
                      ((Ang[k] > 0 && Ang[k] < ang2) || (Ang[k] > ang1 && Ang[k] < pi)))
                    {
                        sumX += resX[k];
                        sumY += resY[k];
                    }
                }

                // if the vector produced from this window is longer than all 
                // previous vectors then this forms the new dominant direction
                if (sumX * sumX + sumY * sumY > max)
                {
                    // store largest orientation
                    max = sumX * sumX + sumY * sumY;
                    orientation = (float)GetAngle(sumX, sumY);
                }
            }

            // assign orientation of the dominant response vector
            //ip.orientation  =  orientation;
            return orientation; 
        }


2. Building the feature vector

    Centered on the feature point and aligned with its dominant orientation, a 20s×20s region is divided into 4×4 sub-blocks. In each sub-block the responses are computed with Haar templates of size 2s, and four sums are accumulated per sub-block:

    ( Σdx, Σ|dx|, Σdy, Σ|dy| )

This gives 4×4×4 = 64 dimensions of feature data, as shown below:

[Figure: the 4×4 sub-block layout with the four accumulated sums per block]

When processing this rectangular region it is not first rotated to the dominant orientation; instead, each sample point's Haar responses dx and dy are computed and Gaussian-weighted, and then dx and dy themselves are rotated. The formulas (matching the code below, with θ the dominant orientation and g the Gaussian weight) are:

    sample_x = X + round(−l·s·sin θ + k·s·cos θ)
    sample_y = Y + round( l·s·cos θ + k·s·sin θ)

    rrx = g · (−rx·sin θ + ry·cos θ)
    rry = g · ( rx·cos θ + ry·sin θ)

        /// <summary>
        /// Construct descriptor vector for this interest point
        /// </summary>
        /// <param name="bUpright"></param>
        public IPoint  GetDescriptor(IPoint ip, bool bUpright, bool bExtended, IntegralImage img)
        {
            int sample_x, sample_y, count = 0;
            int i = 0, ix = 0, j = 0, jx = 0, xs = 0, ys = 0;
            float dx, dy, mdx, mdy, co, si;
            float dx_yn, mdx_yn, dy_xn, mdy_xn;
            float gauss_s1 = 0f, gauss_s2 = 0f;
            float rx = 0f, ry = 0f, rrx = 0f, rry = 0f, len = 0f;
            float cx = -0.5f, cy = 0f; //Subregion centers for the 4x4 gaussian weighting

            // Get rounded InterestPoint data
            int X = (int)Math.Round(ip.x, 0);
            int Y = (int)Math.Round(ip.y, 0);
            int S = (int)Math.Round(ip.scale, 0);

            // Allocate descriptor memory: 128 floats when the extended descriptor
            // is requested, 64 otherwise (a 64-float array would overflow when
            // bExtended is true); new float[] is already zero-initialized
            ip.descriptorLength = bExtended ? 128 : 64;
            ip.descriptor = new float[ip.descriptorLength];

            if (bUpright)
            {
                co = 1;
                si = 0;
            }
            else
            {
                co = (float)Math.Cos(ip.orientation);
                si = (float)Math.Sin(ip.orientation);
            }

            //Calculate descriptor for this interest point
            i = -8;
            while (i < 12)
            {
                j = -8;
                i = i - 4;

                cx += 1f;
                cy = -0.5f;

                while (j < 12)
                {
                    cy += 1f;

                    j = j - 4;

                    ix = i + 5;
                    jx = j + 5;

                    // zero the responses
                    dx = dy = mdx = mdy = 0f;
                    dx_yn = mdx_yn = dy_xn = mdy_xn = 0f;

                    xs = (int)Math.Round(X + (-jx * S * si + ix * S * co), 0);
                    ys = (int)Math.Round(Y + (jx * S * co + ix * S * si), 0);

                    for (int k = i; k < i + 9; ++k)
                    {
                        for (int l = j; l < j + 9; ++l)
                        {
                            //Get coords of sample point on the rotated axis
                            sample_x = (int)Math.Round(X + (-l * S * si + k * S * co), 0);
                            sample_y = (int)Math.Round(Y + (l * S * co + k * S * si), 0);

                            //Get the gaussian weighted x and y responses
                            gauss_s1 = Gaussian(xs - sample_x, ys - sample_y, 2.5f * S);
                            rx = (float)img.HaarX(sample_y, sample_x, 2 * S);
                            ry = (float)img.HaarY(sample_y, sample_x, 2 * S);

                            //Get the gaussian weighted x and y responses on rotated axis
                            rrx = gauss_s1 * (-rx * si + ry * co);
                            rry = gauss_s1 * (rx * co + ry * si);


                            if (bExtended)
                            {
                                // split x responses for different signs of y
                                if (rry >= 0)
                                {
                                    dx += rrx;
                                    mdx += Math.Abs(rrx);
                                }
                                else
                                {
                                    dx_yn += rrx;
                                    mdx_yn += Math.Abs(rrx);
                                }

                                // split y responses for different signs of x
                                if (rrx >= 0)
                                {
                                    dy += rry;
                                    mdy += Math.Abs(rry);
                                }
                                else
                                {
                                    dy_xn += rry;
                                    mdy_xn += Math.Abs(rry);
                                }
                            }
                            else
                            {
                                dx += rrx;
                                dy += rry;
                                mdx += Math.Abs(rrx);
                                mdy += Math.Abs(rry);
                            }
                        }
                    }

                    //Add the values to the descriptor vector
                    gauss_s2 = Gaussian(cx - 2f, cy - 2f, 1.5f);
                   
                        ip.descriptor[count++] = dx * gauss_s2;
                        ip.descriptor[count++] = dy * gauss_s2;
                        ip.descriptor[count++] = mdx * gauss_s2;
                        ip.descriptor[count++] = mdy * gauss_s2;
                 
                    // add the extended components
                    if (bExtended)
                    {
                        ip.descriptor[count++] = dx_yn * gauss_s2;
                        ip.descriptor[count++] = dy_xn * gauss_s2;
                        ip.descriptor[count++] = mdx_yn * gauss_s2;
                        ip.descriptor[count++] = mdy_xn * gauss_s2;
                    }

                    len += (dx * dx + dy * dy + mdx * mdx + mdy * mdy
                            + dx_yn + dy_xn + mdx_yn + mdy_xn) * gauss_s2 * gauss_s2;

                    j += 9;
                }
                i += 9;
            }

            //Convert to Unit Vector
            len = (float)Math.Sqrt((double)len);
            if (len > 0)
            {
                for (int d = 0; d < ip.descriptorLength; ++d)
                {
                    ip.descriptor[d] /= len;
                }
            }
            return ip;
        }


        /// <summary>
        /// Get the angle formed by the vector [x,y]
        /// </summary>
        /// <param name="X"></param>
        /// <param name="Y"></param>
        /// <returns></returns>
        double GetAngle(float X, float Y)
        {
            if (X >= 0 && Y >= 0)
                return Math.Atan(Y / X);
            else if (X < 0 && Y >= 0)
                return Math.PI - Math.Atan(-Y / X);
            else if (X < 0 && Y < 0)
                return Math.PI + Math.Atan(Y / X);
            else if (X >= 0 && Y < 0)
                return 2 * Math.PI - Math.Atan(-Y / X);

            return 0;
        }


        /// <summary>
        /// Get the value of the gaussian with std dev sigma
        /// at the point (x,y)
        /// </summary>
        /// <param name="x"></param>
        /// <param name="y"></param>
        /// <param name="sig"></param>
        /// <returns></returns>
        float Gaussian(int x, int y, float sig)
        {
            float pi = (float)Math.PI;
            return (1f / (2f * pi * sig * sig)) * (float)Math.Exp(-(x * x + y * y) / (2.0f * sig * sig));
        }


        /// <summary>
        /// Get the value of the gaussian with std dev sigma
        /// at the point (x,y)
        /// </summary>
        /// <param name="x"></param>
        /// <param name="y"></param>
        /// <param name="sig"></param>
        /// <returns></returns>
        float Gaussian(float x, float y, float sig)
        {
            float pi = (float)Math.PI;
            return 1f / (2f * pi * sig * sig) * (float)Math.Exp(-(x * x + y * y) / (2.0f * sig * sig));
        }

V. Image Matching

The matching code will continue to be updated; to be continued....................
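
Until that section is filled in, here is a minimal sketch of how two descriptor sets could be matched (my illustration, not the released matcher): brute-force nearest neighbor on squared Euclidean distance with a ratio test, using the Laplacian sign as a cheap reject as suggested in the SURF paper. The 0.65 ratio threshold on squared distances is an assumption, not a tuned value.

        // Minimal descriptor matching sketch (illustration only; assumes the IPoint struct above).
        // match[i] is the index of a[i]'s partner in b, or -1 if no reliable match.
        public static int[] MatchIpoints(IPoint[] a, IPoint[] b, float ratio)  // e.g. ratio = 0.65f
        {
            int[] match = new int[a.Length];
            for (int i = 0; i < a.Length; i++)
            {
                match[i] = -1;
                if (a[i].descriptor == null) continue;     // unused slot in the fixed-size array

                float best = float.MaxValue, second = float.MaxValue;
                int bestJ = -1;
                for (int j = 0; j < b.Length; j++)
                {
                    if (b[j].descriptor == null) continue;
                    // blobs with opposite Laplacian signs can never match: cheap reject
                    if (a[i].laplacian != b[j].laplacian) continue;

                    float d = 0;
                    for (int k = 0; k < a[i].descriptorLength; k++)
                    {
                        float diff = a[i].descriptor[k] - b[j].descriptor[k];
                        d += diff * diff;
                    }
                    if (d < best) { second = best; best = d; bestJ = j; }
                    else if (d < second) second = d;
                }
                // ratio test: accept only a match clearly better than the runner-up
                if (bestJ >= 0 && best < ratio * second)
                    match[i] = bestJ;
            }
            return match;
        }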

 

Source code, continuously updated....

Link: https://pan.baidu.com/s/1LUAzTbcV75f0E3AZrkp7Lw  access code: am8y

Test environment: Visual Studio 2015-2019, C#