Using org.opencv.imgproc.Imgproc.matchTemplate(): code examples

x33g5p2x · reposted 2022-01-21

This article collects a number of Java code examples of the org.opencv.imgproc.Imgproc.matchTemplate() method, showing how it is used in practice. The examples come from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Imgproc.matchTemplate() follow:
Package path: org.opencv.imgproc.Imgproc
Class name: Imgproc
Method name: matchTemplate

About Imgproc.matchTemplate

Compares a template against overlapped image regions.

The function slides through image, compares the overlapped patches of size w x h against templ using the specified method, and stores the comparison results in result. Here are the formulae for the available comparison methods (I denotes image, T template, R result). The summation is done over the template and/or the image patch: x' = 0...w-1, y' = 0...h-1

  • method=CV_TM_SQDIFF

R(x,y) = sum_{x',y'} (T(x',y') - I(x+x',y+y'))^2

  • method=CV_TM_SQDIFF_NORMED

R(x,y) = sum_{x',y'} (T(x',y') - I(x+x',y+y'))^2 / sqrt(sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2)

  • method=CV_TM_CCORR

R(x,y) = sum_{x',y'} T(x',y') * I(x+x',y+y')

  • method=CV_TM_CCORR_NORMED

R(x,y) = sum_{x',y'} T(x',y') * I(x+x',y+y') / sqrt(sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2)

  • method=CV_TM_CCOEFF

R(x,y) = sum_{x',y'} T'(x',y') * I'(x+x',y+y')

where

T'(x',y') = T(x',y') - 1/(w*h) * sum_{x'',y''} T(x'',y'')
I'(x+x',y+y') = I(x+x',y+y') - 1/(w*h) * sum_{x'',y''} I(x+x'',y+y'')

  • method=CV_TM_CCOEFF_NORMED

R(x,y) = sum_{x',y'} T'(x',y') * I'(x+x',y+y') / sqrt(sum_{x',y'} T'(x',y')^2 * sum_{x',y'} I'(x+x',y+y')^2)

After the function finishes the comparison, the best matches can be found as global minima (when CV_TM_SQDIFF was used) or maxima (when CV_TM_CCORR or CV_TM_CCOEFF was used) using the "minMaxLoc" function. In the case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels, and separate mean values are used for each channel. That is, the function can take a color template and a color image; the result will still be a single-channel image, which is easier to analyze.

Note:

  • (Python) An example on how to match mouse selected regions in an image can be found at opencv_source_code/samples/python2/mouse_and_match.py
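The workflow described above — run matchTemplate, then locate the extremum with minMaxLoc — can be sketched as a minimal self-contained Java example. The synthetic image and template below are illustrative assumptions, not taken from any of the quoted projects; TM_SQDIFF is used, so the best match is the global minimum:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TemplateMatchDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // 100x100 black image with a filled 20x20 white square at (30, 40)
        Mat image = Mat.zeros(100, 100, CvType.CV_8UC1);
        Imgproc.rectangle(image, new Point(30, 40), new Point(49, 59), new Scalar(255), -1);
        // 20x20 all-white template
        Mat templ = new Mat(20, 20, CvType.CV_8UC1, new Scalar(255));
        Mat result = new Mat();
        Imgproc.matchTemplate(image, templ, result, Imgproc.TM_SQDIFF);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        // result is (W-w+1) x (H-h+1) = 81x81; for SQDIFF the best match is the minimum
        System.out.println("result size: " + result.cols() + "x" + result.rows());
        System.out.println("best match at: " + (int) mmr.minLoc.x + "," + (int) mmr.minLoc.y);
    }
}
```

For the other methods (TM_CCORR*, TM_CCOEFF*) the best match would be read from mmr.maxLoc/mmr.maxVal instead.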

Code examples

Code example source: RaiMan/SikuliX2

private Mat doFindMatch(Element target, Mat mBase, Element probe) {
 if (SX.isNull(probe)) {
  probe = target;
 }
 Mat mResult = Element.getNewMat();
 Mat mProbe = probe.getContentBGR();
 if (!target.isPlainColor()) {
  if (probe.hasMask()) {
   Mat mMask = matMulti(probe.getMask(), mProbe.channels());
   Imgproc.matchTemplate(mBase, mProbe, mResult, Imgproc.TM_CCORR_NORMED, mMask);
  } else {
   Imgproc.matchTemplate(mBase, mProbe, mResult, Imgproc.TM_CCOEFF_NORMED);
  }
 } else {
  Mat mBasePlain = mBase;
  Mat mProbePlain = mProbe;
  if (target.isBlack()) {
   Core.bitwise_not(mBase, mBasePlain);
   Core.bitwise_not(mProbe, mProbePlain);
  }
  if (probe.hasMask()) {
   Mat mMask = matMulti(probe.getMask(), mProbe.channels());
   Imgproc.matchTemplate(mBasePlain, mProbePlain, mResult, Imgproc.TM_SQDIFF_NORMED, mMask);
  } else {
   Imgproc.matchTemplate(mBasePlain, mProbePlain, mResult, Imgproc.TM_SQDIFF_NORMED);
  }
  Core.subtract(Mat.ones(mResult.size(), CvType.CV_32F), mResult, mResult);
 }
 return mResult;
}
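Note the final Core.subtract in the SikuliX snippet above: TM_SQDIFF_NORMED returns 0 for a perfect match and larger values for worse matches, so subtracting the result from a matrix of ones flips it into a similarity score where, as in the TM_CCOEFF_NORMED branch, higher is better. A stripped-down sketch of just that trick, using synthetic data rather than SikuliX code:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class SqdiffInvertDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // base image containing the probe exactly at (5, 5)
        Mat base = Mat.zeros(50, 50, CvType.CV_8UC1);
        Mat probe = new Mat(10, 10, CvType.CV_8UC1, new Scalar(200));
        probe.copyTo(base.submat(5, 15, 5, 15));
        Mat res = new Mat();
        Imgproc.matchTemplate(base, probe, res, Imgproc.TM_SQDIFF_NORMED);
        // flip "0 = best" into "1 = best" so maxVal/maxLoc can be used uniformly
        Core.subtract(Mat.ones(res.size(), CvType.CV_32F), res, res);
        Core.MinMaxLocResult best = Core.minMaxLoc(res);
        System.out.println("score: " + best.maxVal + " at " + (int) best.maxLoc.x + "," + (int) best.maxLoc.y);
    }
}
```

After the inversion, callers can treat all methods the same way and read best.maxLoc regardless of which match method produced the result.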

Code example source: openpnp/openpnp

Result matchTemplate(Mat mat, Mat template) {
  Mat result = new Mat();
  Imgproc.matchTemplate(mat, template, result, Imgproc.TM_CCOEFF_NORMED);
  MinMaxLocResult mmr = Core.minMaxLoc(result);
  double maxVal = mmr.maxVal;
  double rangeMax = maxVal;
  // Since the matchTemplate method is fixed to TM_CCOEFF_NORMED, corr is not actually needed;
  // using just the threshold is enough.
  List<TemplateMatch> matches = new ArrayList<>();
  for (Point point : OpenCvUtils.matMaxima(result, threshold, rangeMax)) {
    int x = point.x;
    int y = point.y;
    TemplateMatch match =
        new TemplateMatch(x, y, template.cols(), template.rows(), result.get(y, x)[0]);
    matches.add(match);
  }
  Collections.sort(matches, new Comparator<TemplateMatch>() {
    @Override
    public int compare(TemplateMatch o1, TemplateMatch o2) {
      return ((Double) o2.score).compareTo(o1.score);
    }
  });
  return new Result(result, matches);
}

Code example source: com.sikulix/sikulixapi

private Core.MinMaxLocResult doFindMatch(Mat base, Mat probe) {
 Mat res = new Mat();
 Mat bi = new Mat();
 Mat pi = new Mat();
 if (!isPlainColor) {
  Imgproc.matchTemplate(base, probe, res, Imgproc.TM_CCOEFF_NORMED);
 } else {
  if (isBlack) {
   Core.bitwise_not(base, bi);
   Core.bitwise_not(probe, pi);
  } else {
   bi = base;
   pi = probe;
  }
  Imgproc.matchTemplate(bi, pi, res, Imgproc.TM_SQDIFF_NORMED);
  Core.subtract(Mat.ones(res.size(), CvType.CV_32F), res, res);
 }
 return Core.minMaxLoc(res);
}

Code example source: com.infotel.seleniumRobot/core

private MinMaxLocResult getBestTemplateMatching(int matchMethod, Mat sceneImageMat, Mat objectImageMat) {
  
  // Create the result matrix
  int resultCols = sceneImageMat.cols() - objectImageMat.cols() + 1;
  int resultRows = sceneImageMat.rows() - objectImageMat.rows() + 1;
  Mat result = new Mat(resultRows, resultCols, CvType.CV_32FC1);
  // Do the matching
  Imgproc.matchTemplate(sceneImageMat, objectImageMat, result, matchMethod);
  // Localize the best match with minMaxLoc
  return Core.minMaxLoc(result);
}
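Because the seleniumRobot helper takes matchMethod as a parameter, callers must remember that the SQDIFF family treats lower values as better while the other methods treat higher values as better. A sketch of a wrapper that hides that asymmetry (the bestScore name and the synthetic data are illustrative, not from the quoted project):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class BestScoreDemo {
    // Returns the score of the best match, whichever extremum that is for the method.
    static double bestScore(Mat scene, Mat templ, int method) {
        Mat result = new Mat();
        Imgproc.matchTemplate(scene, templ, result, method);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        boolean lowerIsBetter = method == Imgproc.TM_SQDIFF || method == Imgproc.TM_SQDIFF_NORMED;
        return lowerIsBetter ? mmr.minVal : mmr.maxVal;
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // gray background with the template embedded at (10, 10)
        Mat scene = new Mat(40, 40, CvType.CV_8UC1, new Scalar(10));
        Mat templ = new Mat(8, 8, CvType.CV_8UC1, new Scalar(200));
        templ.copyTo(scene.submat(10, 18, 10, 18));
        // perfect match: SQDIFF_NORMED reads the minimum (0), CCORR_NORMED the maximum (1)
        System.out.printf("sqdiff: %.2f%n", bestScore(scene, templ, Imgproc.TM_SQDIFF_NORMED));
        System.out.printf("ccorr: %.2f%n", bestScore(scene, templ, Imgproc.TM_CCORR_NORMED));
    }
}
```

A wrapper like this keeps call sites method-agnostic, which matters when the match method is user-configurable as it is here.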

Code example source: openpnp/openpnp

Mat result = new Mat();
Imgproc.matchTemplate(mat, template, result, Imgproc.TM_CCOEFF_NORMED);

Code example source: io.github.martinschneider/justtestlah-core

int resultCols = image.cols() - templ.cols() + 1;
int resultRows = image.rows() - templ.rows() + 1;
Mat result = new Mat(resultRows, resultCols, CvType.CV_32FC1);
Imgproc.matchTemplate(image, templ, result, Imgproc.TM_CCOEFF_NORMED);
MinMaxLocResult match = Core.minMaxLoc(result);
if (match.maxVal > bestMatch.maxVal) {

Code example source: openpnp/openpnp

@Override
public Point[] locateTemplateMatches(int roiX, int roiY, int roiWidth, int roiHeight, int coiX,
    int coiY, BufferedImage templateImage_) throws Exception {
  BufferedImage cameraImage_ = camera.capture();
  // Convert the camera image and template image to the same type. This
  // is required by the cvMatchTemplate call.
  templateImage_ =
      ImageUtils.convertBufferedImage(templateImage_, BufferedImage.TYPE_INT_ARGB);
  cameraImage_ = ImageUtils.convertBufferedImage(cameraImage_, BufferedImage.TYPE_INT_ARGB);
  Mat templateImage = OpenCvUtils.toMat(templateImage_);
  Mat cameraImage = OpenCvUtils.toMat(cameraImage_);
  Mat roiImage = new Mat(cameraImage, new Rect(roiX, roiY, roiWidth, roiHeight));
  // http://stackoverflow.com/questions/17001083/opencv-template-matching-example-in-android
    // Mat takes (rows, cols); matchTemplate reallocates the result to the right size anyway
    Mat resultImage = new Mat(roiImage.rows() - templateImage.rows() + 1,
        roiImage.cols() - templateImage.cols() + 1, CvType.CV_32FC1);
  Imgproc.matchTemplate(roiImage, templateImage, resultImage, Imgproc.TM_CCOEFF);
  MinMaxLocResult mmr = Core.minMaxLoc(resultImage);
  org.opencv.core.Point matchLoc = mmr.maxLoc;
  double matchValue = mmr.maxVal;
  // TODO: Figure out certainty and how to filter on it.
  Logger.debug(String.format("locateTemplateMatches certainty %f at %f, %f", matchValue,
      matchLoc.x, matchLoc.y));
  locateTemplateMatchesDebug(roiImage, templateImage, matchLoc);
  return new Point[] {new Point(((int) matchLoc.x) + roiX, ((int) matchLoc.y) + roiY)};
}

Code example source: raulh82vlc/Image-Detection-Samples

/**
 * Matches a concrete point of the eye by using a template with TM_SQDIFF_NORMED
 */
private static void matchEye(Rect area, Mat builtTemplate, Mat matrixGray, Mat matrixRGBA) {
  Point matchLoc;
  try {
    // skip when there is no builtTemplate yet
    if (builtTemplate.cols() == 0 || builtTemplate.rows() == 0) {
      return;
    }
    Mat submatGray = matrixGray.submat(area);
    int cols = submatGray.cols() - builtTemplate.cols() + 1;
    int rows = submatGray.rows() - builtTemplate.rows() + 1;
    // Mat takes (rows, cols); matchTemplate writes a CV_32F result
    Mat outputTemplateMat = new Mat(rows, cols, CvType.CV_32F);
    Imgproc.matchTemplate(submatGray, builtTemplate, outputTemplateMat,
        Imgproc.TM_SQDIFF_NORMED);
    Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(outputTemplateMat);
    // TM_SQDIFF_NORMED is a difference measure, so the best match is the minimum value
    matchLoc = minMaxLocResult.minLoc;
    Point matchLocTx = new Point(matchLoc.x + area.x, matchLoc.y + area.y);
    Point matchLocTy = new Point(matchLoc.x + builtTemplate.cols() + area.x,
        matchLoc.y + builtTemplate.rows() + area.y);
    FaceDrawerOpenCV.drawMatchedEye(matchLocTx, matchLocTy, matrixRGBA);
  } catch (Exception e) {
    e.printStackTrace();
  }
}

Code example source: openpnp/openpnp

Mat resultMat = new Mat();
Imgproc.matchTemplate(imageMat, templateMat, resultMat, Imgproc.TM_CCOEFF_NORMED);
