This article collects a number of code examples for the Java method org.opencv.core.Core.absdiff() and shows how Core.absdiff() is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow and Maven, so they are reasonably representative references. Details of the Core.absdiff() method:
Package path: org.opencv.core.Core
Class: Core
Method: absdiff
Calculates the per-element absolute difference between two arrays or between an array and a scalar.
The function absdiff calculates:
* Absolute difference between two arrays when they have the same size and type:
dst(I) = saturate(|src1(I) - src2(I)|)
* Absolute difference between an array and a scalar when the second array is constructed from Scalar or has as many elements as the number of channels in src1:
dst(I) = saturate(|src1(I) - src2|)
* Absolute difference between a scalar and an array when the first array is constructed from Scalar or has as many elements as the number of channels in src2:
dst(I) = saturate(|src1 - src2(I)|)
where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently.
Note: Saturation is not applied when the arrays have the depth CV_32S. You may even get a negative value in the case of overflow.
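A minimal usage sketch of Core.absdiff() between two arrays and between an array and a scalar (the sizes, types and variable names below are illustrative, not taken from any of the projects quoted later):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class AbsdiffDemo {
  public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    // Two single-channel 8-bit arrays of the same size and type.
    Mat a = new Mat(2, 2, CvType.CV_8UC1, new Scalar(10));
    Mat b = new Mat(2, 2, CvType.CV_8UC1, new Scalar(250));
    // dst(I) = saturate(|a(I) - b(I)|): every element becomes 240.
    Mat dst = new Mat();
    Core.absdiff(a, b, dst);
    // Array vs. scalar: dst(I) = saturate(|a(I) - 7|): every element becomes 3.
    Mat dstScalar = new Mat();
    Core.absdiff(a, new Scalar(7), dstScalar);
    System.out.println(dst.dump());
    System.out.println(dstScalar.dump());
  }
}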
Code example source: origin: RaiMan/SikuliX2

public static List<Element> detectChanges(Mat base, Mat mChanged) {
  int PIXEL_DIFF_THRESHOLD = 3;
  int IMAGE_DIFF_THRESHOLD = 5;
  Mat mBaseGray = Element.getNewMat();
  Mat mChangedGray = Element.getNewMat();
  Mat mDiffAbs = Element.getNewMat();
  Mat mDiffTresh = Element.getNewMat();
  Mat mChanges = Element.getNewMat();
  List<Element> rectangles = new ArrayList<>();
  // convert both images to gray level before differencing
  Imgproc.cvtColor(base, mBaseGray, toGray);
  Imgproc.cvtColor(mChanged, mChangedGray, toGray);
  // per-pixel absolute difference between the two gray images
  Core.absdiff(mBaseGray, mChangedGray, mDiffAbs);
  // suppress small per-pixel differences (noise)
  Imgproc.threshold(mDiffAbs, mDiffTresh, PIXEL_DIFF_THRESHOLD, 0.0, Imgproc.THRESH_TOZERO);
  if (Core.countNonZero(mDiffTresh) > IMAGE_DIFF_THRESHOLD) {
    // binarize the difference image and close gaps between changed pixels
    Imgproc.threshold(mDiffAbs, mDiffAbs, PIXEL_DIFF_THRESHOLD, 255, Imgproc.THRESH_BINARY);
    Imgproc.dilate(mDiffAbs, mDiffAbs, Element.getNewMat());
    Mat se = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
    Imgproc.morphologyEx(mDiffAbs, mDiffAbs, Imgproc.MORPH_CLOSE, se);
    // extract the changed regions as contours and convert them to rectangles
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat mHierarchy = Element.getNewMat();
    Imgproc.findContours(mDiffAbs, contours, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    rectangles = contoursToRectangle(contours);
    Core.subtract(mDiffAbs, mDiffAbs, mChanges);
    Imgproc.drawContours(mChanges, contours, -1, new Scalar(255));
    //logShow(mDiffAbs);
  }
  return rectangles;
}
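contoursToRectangle() is SikuliX-specific and not shown here; a minimal sketch of the usual plain-OpenCV way to turn the found contours into bounding rectangles (the helper class and method names are illustrative):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

class ContourRects {
  // One axis-aligned bounding box per contour returned by Imgproc.findContours.
  static List<Rect> boundingRects(List<MatOfPoint> contours) {
    List<Rect> rects = new ArrayList<>();
    for (MatOfPoint contour : contours) {
      rects.add(Imgproc.boundingRect(contour));
    }
    return rects;
  }
}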
Code example source: origin: JavaOpenCVBook/code

public Mat process(Mat inputImage) {
  Mat foregroundImage = new Mat();
  // foreground = |background - current frame|
  Core.absdiff(backgroundImage, inputImage, foregroundImage);
  return foregroundImage;
}
Code example source: origin: openpnp/openpnp

/**
 * Calculate the absolute difference between the previously stored Mat called source1 and the
 * current Mat.
 *
 * @param source1 name under which the reference Mat was previously stored
 * @param tag optional tag(s) under which to store the resulting difference Mat
 */
public FluentCv absDiff(String source1, String... tag) {
  Core.absdiff(get(source1), mat, mat);
  return store(mat, tag);
}
Code example source: origin: openpnp/openpnp

public static double calculatePsnr(Mat I1, Mat I2) {
  Mat s1 = new Mat();
  Core.absdiff(I1, I2, s1);        // |I1 - I2|
  s1.convertTo(s1, CvType.CV_32F); // cannot make a square on 8 bits
  s1 = s1.mul(s1);                 // |I1 - I2|^2
  Scalar s = Core.sumElems(s1);    // sum elements per channel
  double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels
  if (sse <= 1e-10) {
    // for small values return zero
    return 0;
  }
  else {
    double mse = sse / (double) (I1.channels() * I1.total());
    double psnr = 10.0 * Math.log10((255 * 255) / mse);
    return psnr;
  }
}
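A minimal, hypothetical way to drive the helper above (file names are illustrative; calculatePsnr is assumed to be the static method from the openpnp snippet, copied into the same class; requires OpenCV 3.x+ for org.opencv.imgcodecs):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class PsnrDemo {
  public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    // Hypothetical input files; both must have the same size and channel count.
    Mat reference = Imgcodecs.imread("reference.png");
    Mat distorted = Imgcodecs.imread("distorted.png");
    // Higher PSNR (in dB) means the two images are more similar; the helper
    // returns 0 when the summed squared error is (almost) zero.
    double psnr = calculatePsnr(reference, distorted);
    System.out.println("PSNR = " + psnr + " dB");
  }
  // calculatePsnr(Mat, Mat) as shown in the openpnp example above
}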
Code example source: origin: openpnp/openpnp

Core.absdiff(lastSettleMat, mat, diff);
MinMaxLocResult result = Core.minMaxLoc(diff);
Logger.debug("autoSettleAndCapture auto settle score: " + result.maxVal);
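The fragment above scores how much a camera image is still changing by taking the largest per-pixel absolute difference between the last frame and the current one. A minimal sketch of the same idea outside OpenPnP (method and variable names are illustrative; Core.minMaxLoc expects a single-channel Mat, so the frames are assumed to be gray):

import org.opencv.core.Core;
import org.opencv.core.Mat;

class SettleScore {
  // Largest absolute per-pixel difference between two gray frames of equal size.
  static double settleScore(Mat previous, Mat current) {
    Mat diff = new Mat();
    Core.absdiff(previous, current, diff);               // |previous - current|
    Core.MinMaxLocResult result = Core.minMaxLoc(diff);  // largest change anywhere in the frame
    return result.maxVal;
  }
}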
Code example source: origin: JavaOpenCVBook/code

public Mat process(Mat inputImage) {
  Mat foregroundThresh = new Mat();
  // Firstly, convert to gray-level image, yields good results with performance
  Imgproc.cvtColor(inputImage, inputGray, Imgproc.COLOR_BGR2GRAY);
  // initialize background to 1st frame, convert to floating type
  if (accumulatedBackground.empty())
    inputGray.convertTo(accumulatedBackground, CvType.CV_32F);
  // convert background to 8U, for differencing with input image
  accumulatedBackground.convertTo(backImage, CvType.CV_8U);
  // compute difference between image and background
  Core.absdiff(backImage, inputGray, foreground);
  // apply threshold to foreground image
  Imgproc.threshold(foreground, foregroundThresh, threshold, 255, Imgproc.THRESH_BINARY_INV);
  // accumulate background
  Mat inputFloating = new Mat();
  inputGray.convertTo(inputFloating, CvType.CV_32F);
  Imgproc.accumulateWeighted(inputFloating, accumulatedBackground, learningRate, foregroundThresh);
  return negative(foregroundThresh);
}
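A minimal, hypothetical driver loop for a running-average background subtractor like the one above (assumes OpenCV 3.x+ with the videoio module; RunningAverageBackground is an illustrative wrapper class that holds the fields used by process(), i.e. accumulatedBackground, backImage, foreground, inputGray, threshold and learningRate, and exposes the process() method shown above):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class BackgroundLoop {
  public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    VideoCapture camera = new VideoCapture(0); // default camera
    RunningAverageBackground subtractor = new RunningAverageBackground(); // hypothetical wrapper around process()
    Mat frame = new Mat();
    while (camera.read(frame)) {
      // Each call diffs the frame against the accumulated background and then
      // updates the background (accumulateWeighted) only where no motion was found.
      Mat foregroundMask = subtractor.process(frame);
      // ... display or further analyse foregroundMask here ...
    }
    camera.release();
  }
}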
Code example source: origin: com.sikulix/sikulixapi

public boolean hasChanges(Mat current) {
  int PIXEL_DIFF_THRESHOLD = 5;
  int IMAGE_DIFF_THRESHOLD = 5;
  Mat bg = new Mat();
  Mat cg = new Mat();
  Mat diff = new Mat();
  Mat tdiff = new Mat();
  Imgproc.cvtColor(base, bg, Imgproc.COLOR_BGR2GRAY);
  Imgproc.cvtColor(current, cg, Imgproc.COLOR_BGR2GRAY);
  Core.absdiff(bg, cg, diff);
  Imgproc.threshold(diff, tdiff, PIXEL_DIFF_THRESHOLD, 0.0, Imgproc.THRESH_TOZERO);
  if (Core.countNonZero(tdiff) <= IMAGE_DIFF_THRESHOLD) {
    return false;
  }
  Imgproc.threshold(diff, diff, PIXEL_DIFF_THRESHOLD, 255, Imgproc.THRESH_BINARY);
  Imgproc.dilate(diff, diff, new Mat());
  Mat se = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
  Imgproc.morphologyEx(diff, diff, Imgproc.MORPH_CLOSE, se);
  List<MatOfPoint> points = new ArrayList<MatOfPoint>();
  Mat contours = new Mat();
  Imgproc.findContours(diff, points, contours, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
  int n = 0;
  for (Mat pm : points) {
    log(lvl, "(%d) %s", n++, pm);
    printMatI(pm);
  }
  log(lvl, "contours: %s", contours);
  printMatI(contours);
  return true;
}