This article collects some code examples of the org.opencv.core.Core.flip() method in Java and shows how Core.flip() is used in practice. The examples were extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as a useful reference. Details of the Core.flip() method:
Package: org.opencv.core
Class: Core
Method: flip
Flips a 2D array around vertical, horizontal, or both axes.
The function flip flips the array in one of three different ways (row and column indices are 0-based):

    dst(i, j) = src(src.rows - i - 1, j)                  if flipCode == 0
    dst(i, j) = src(i, src.cols - j - 1)                  if flipCode > 0
    dst(i, j) = src(src.rows - i - 1, src.cols - j - 1)   if flipCode < 0

The example scenarios of using the function are the following:
- Vertical flipping of the image (flipCode == 0) to switch between top-left and bottom-left image origin. This is a typical operation in video processing on Microsoft Windows* OS.
- Horizontal flipping of the image with the subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry (flipCode > 0).
- Simultaneous horizontal and vertical flipping of the image with the subsequent shift and absolute difference calculation to check for a central symmetry (flipCode < 0).
- Reversing the order of point arrays (flipCode > 0 or flipCode == 0).
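Before the project excerpts, here is a minimal sketch (not from any of the projects below; the matrix contents and class name are illustrative) showing what the three flipCode values do:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FlipCodeDemo {
    public static void main(String[] args) {
        // Assumes the OpenCV native library is available on java.library.path.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // 2x3 source matrix:
        // [1 2 3]
        // [4 5 6]
        Mat src = new Mat(2, 3, CvType.CV_8UC1);
        src.put(0, 0, 1, 2, 3, 4, 5, 6);
        Mat dst = new Mat();

        Core.flip(src, dst, 0);   // flip around the x-axis (vertical):   [4 5 6] / [1 2 3]
        Core.flip(src, dst, 1);   // flip around the y-axis (horizontal): [3 2 1] / [6 5 4]
        Core.flip(src, dst, -1);  // flip around both axes (180°):        [6 5 4] / [3 2 1]
    }
}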
Code example source: origin: ytai/IOIOPlotter
private static void rotateCCW(Mat mat) {
    Core.transpose(mat, mat);
    Core.flip(mat, mat, 0);
}
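The transpose-then-flip pattern above is a common way to rotate a Mat by 90°; the flipCode chooses the rotation direction. A minimal sketch (the method and class names here are my own; only Core.transpose and Core.flip are OpenCV calls):

import org.opencv.core.Core;
import org.opencv.core.Mat;

final class RotateSketch {
    // 90° counter-clockwise: transpose, then flip around the x-axis (flipCode 0),
    // which is exactly what rotateCCW above does.
    static void rotate90CCW(Mat mat) {
        Core.transpose(mat, mat);
        Core.flip(mat, mat, 0);
    }

    // 90° clockwise: transpose, then flip around the y-axis (flipCode 1).
    static void rotate90CW(Mat mat) {
        Core.transpose(mat, mat);
        Core.flip(mat, mat, 1);
    }
}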
Code example source: origin: nroduit/Weasis
public static ImageCV flip(Mat source, int flipCvType) {
    if (flipCvType < 0) {
        return ImageCV.toImageCV(source);
    }
    Objects.requireNonNull(source);
    ImageCV dstImg = new ImageCV();
    Core.flip(source, dstImg, flipCvType);
    return dstImg;
}
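Note that this wrapper treats a negative flipCvType as "no flip" and returns the source image unchanged, whereas Core.flip itself interprets a negative code as flipping around both axes. A minimal usage sketch, assuming the helper above is in scope and with an illustrative input file name:

Mat source = Imgcodecs.imread("input.png");   // org.opencv.imgcodecs.Imgcodecs
ImageCV mirrored = flip(source, 1);           // flipCvType 1: horizontal mirror
ImageCV untouched = flip(source, -1);         // negative value: returned without flipping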
Code example source: origin: ytai/IOIOPlotter
Core.flip(srcImage_, srcImage_, 1);
Code example source: origin: ytai/IOIOPlotter
    Core.flip(srcImage_, edgesImage_, 1);
} else {
    srcImage_.copyTo(edgesImage_);
Code example source: origin: Qualeams/Android-Face-Recognition-with-Deep-Learning-Test-Framework
Core.flip(imgRgba,imgRgba,1);
Code example source: origin: Qualeams/Android-Face-Recognition-with-Deep-Learning-Test-Framework
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat imgRgba = inputFrame.rgba();
    Mat img = new Mat();
    imgRgba.copyTo(img);
    List<Mat> images = ppF.getProcessedImage(img, PreProcessorFactory.PreprocessingMode.RECOGNITION);
    Rect[] faces = ppF.getFacesForRecognition();

    // Selfie / Mirror mode
    if (front_camera) {
        Core.flip(imgRgba, imgRgba, 1);
    }

    if (images == null || images.size() == 0 || faces == null || faces.length == 0 || !(images.size() == faces.length)) {
        // skip
        return imgRgba;
    } else {
        faces = MatOperation.rotateFaces(imgRgba, faces, ppF.getAngleForRecognition());
        for (int i = 0; i < faces.length; i++) {
            MatOperation.drawRectangleAndLabelOnPreview(imgRgba, faces[i], rec.recognize(images.get(i), ""), front_camera);
        }
        return imgRgba;
    }
}
Code example source: origin: Qualeams/Android-Face-Recognition-with-Deep-Learning-Test-Framework
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat imgRgba = inputFrame.rgba();
    Mat img = new Mat();
    imgRgba.copyTo(img);
    List<Mat> images = ppF.getCroppedImage(img);
    Rect[] faces = ppF.getFacesForRecognition();

    // Selfie / Mirror mode
    if (front_camera) {
        Core.flip(imgRgba, imgRgba, 1);
    }

    if (images == null || images.size() == 0 || faces == null || faces.length == 0 || !(images.size() == faces.length)) {
        // skip
        return imgRgba;
    } else {
        faces = MatOperation.rotateFaces(imgRgba, faces, ppF.getAngleForRecognition());
        for (int i = 0; i < faces.length; i++) {
            MatOperation.drawRectangleAndLabelOnPreview(imgRgba, faces[i], "", front_camera);
        }
        return imgRgba;
    }
}
Code example source: origin: openpnp/openpnp
Core.flip(timage.t(), timage, 1);
Code example source: origin: openpnp/openpnp
protected BufferedImage transformImage(BufferedImage image) {
    Mat mat = OpenCvUtils.toMat(image);

    mat = crop(mat);
    mat = calibrate(mat);
    mat = undistort(mat);

    // apply affine transformations
    mat = scale(mat, scaleWidth, scaleHeight);
    mat = rotate(mat, rotation);
    mat = offset(mat, offsetX, offsetY);

    mat = deinterlace(mat);

    if (flipX || flipY) {
        int flipCode;
        if (flipX && flipY) {
            flipCode = -1; // flip around both axes
        }
        else {
            flipCode = flipX ? 0 : 1; // 0 = flip around the x-axis, 1 = flip around the y-axis
        }
        Core.flip(mat, mat, flipCode);
    }

    image = OpenCvUtils.toBufferedImage(mat);
    mat.release();
    return image;
}