Usage of org.openimaj.video.Video.getNextFrame() with code examples

x33g5p2x · reposted 2022-02-01 · category: Other

This article collects code examples of the Java method org.openimaj.video.Video.getNextFrame() and shows how it is used in practice. The examples are taken from selected projects hosted on platforms such as GitHub, Stack Overflow and Maven, and should serve as useful references. Details of Video.getNextFrame() follow:
Package path: org.openimaj.video.Video
Class: Video
Method: getNextFrame

About Video.getNextFrame

Get the next frame. Increments the frame counter by 1.
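The contract can be sketched with a tiny self-contained stand-in (ArrayVideo and its String "frames" are hypothetical illustrations, not OpenIMAJ types): each call to getNextFrame() returns the current frame and advances the counter by one, and null marks the end of the stream.

```java
import java.util.List;

// Minimal sketch of the Video.getNextFrame() contract: each call returns
// the next frame and advances the frame counter by one; null signals the
// end of the stream. ArrayVideo and its String frames are illustrative
// stand-ins, not OpenIMAJ classes.
class ArrayVideo {
    private final List<String> frames;
    private int currentFrame = 0;

    ArrayVideo(List<String> frames) {
        this.frames = frames;
    }

    String getNextFrame() {
        if (currentFrame >= frames.size())
            return null;                   // no more frames
        return frames.get(currentFrame++); // return frame, then increment counter
    }

    long getCurrentFrameIndex() {
        return currentFrame;
    }

    public static void main(String[] args) {
        ArrayVideo video = new ArrayVideo(List.of("f0", "f1", "f2"));
        int count = 0;
        while (video.getNextFrame() != null)   // idiomatic drain loop
            count++;
        System.out.println(count);                        // 3
        System.out.println(video.getCurrentFrameIndex()); // 3
    }
}
```

The null-at-end convention is what makes the `while ((frame = video.getNextFrame()) != null)` loops in the examples below terminate.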

Code Examples

Example source: org.openimaj/core-video, openimaj/openimaj

@Override
public T next() {
  return video.getNextFrame();
}

Example source: openimaj/openimaj, org.openimaj/core-video

@Override
public OUTPUT getNextFrame()
{
  return currentFrame = translateFrame(video.getNextFrame());
}

Example source: openimaj/openimaj, org.openimaj/core-video

/**
 * {@inheritDoc}
 *
 * @see org.openimaj.video.Video#getNextFrame()
 */
@Override
public T getNextFrame()
{
  if (this.video == null)
    throw new UnsupportedOperationException(
        "Chain method called on non-chainable processor");
  currentFrame = this.video.getNextFrame();
  if (currentFrame == null)
    return null;
  return processFrame(currentFrame);
}

Example source: openimaj/openimaj, org.openimaj/core-video

/**
 * Set the current frame index (i.e. skips to a certain frame). If your
 * video subclass can implement this in a cleverer way, then override this
 * method, otherwise this method will simply grab frames until it gets to
 * the given frame index. This method is naive and may take some time as
 * each frame will be decoded by the video decoder.
 * 
 * @param newFrame
 *            the new index
 */
public synchronized void setCurrentFrameIndex(long newFrame)
{
  // We're already at the frame?
  if (this.currentFrame == newFrame)
    return;
  // If we're ahead of where we want to be
  if (this.currentFrame > newFrame)
  {
    this.reset();
  }
  // Grab frames until we reach the new frame counter
  // (or until the getNextFrame() method returns null)
  while (this.currentFrame < newFrame && getNextFrame() != null)
    ;
}
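The naive seek strategy can be demonstrated without OpenIMAJ. CountingVideo below is a hypothetical frame source (not a library class) used only to show the two cases: a forward seek decodes and discards frames, while a backward seek must reset to the start first.

```java
// Sketch of the naive setCurrentFrameIndex() strategy shown above, using a
// hypothetical frame source (not an OpenIMAJ class): seek forward by
// repeatedly decoding frames; seek backward by resetting to frame 0 first.
class CountingVideo {
    private final int totalFrames;
    int currentFrame = 0;

    CountingVideo(int totalFrames) {
        this.totalFrames = totalFrames;
    }

    Integer getNextFrame() {
        if (currentFrame >= totalFrames)
            return null;           // end of stream
        return currentFrame++;     // "decode" one frame, advance the counter
    }

    void reset() {
        currentFrame = 0;
    }

    synchronized void setCurrentFrameIndex(long newFrame) {
        if (currentFrame == newFrame)
            return;
        if (currentFrame > newFrame)
            reset();               // can't step backwards: rewind to the start
        while (currentFrame < newFrame && getNextFrame() != null)
            ;                      // grab-and-discard frames until we arrive
    }

    public static void main(String[] args) {
        CountingVideo v = new CountingVideo(100);
        v.setCurrentFrameIndex(10);
        System.out.println(v.currentFrame);  // 10
        v.setCurrentFrameIndex(5);           // backward seek: reset, then 5 reads
        System.out.println(v.currentFrame);  // 5
        v.setCurrentFrameIndex(200);         // stops at the end of the stream
        System.out.println(v.currentFrame);  // 100
    }
}
```

The null check in the loop is what keeps a seek past the end of the video from spinning forever.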

Example source: org.openimaj/core-video, openimaj/openimaj

/**
 * Process the given video using this processor.
 *
 * @param video
 *            The video to process.
 */
public void process(Video<T> video)
{
  T frame = null;
  while ((frame = video.getNextFrame()) != null)
    processFrame(frame);
  processingComplete();
}

Example source: openimaj/openimaj

@Override
public ImageCollectionEntry<T> next() {
  final T image = video.getNextFrame();
  final ImageCollectionEntry<T> entry = new ImageCollectionEntry<T>();
  entry.meta = new HashMap<String, String>();
  entry.meta.put("timestamp", "" + this.frameCount / this.video.getFPS());
  entry.accepted = selection.acceptEntry(image);
  entry.image = image;
  this.frameCount++;
  // hack to stop the iterator at the end until hasNext works properly
  if (image == null)
    frameCount = -1;
  return entry;
}

Example source: org.openimaj/core-video (fragment)

nextFrame = this.video.getNextFrame();
nextFrameTimestamp = this.video.getTimeStamp();
if (this.currentFrame == null && (this.timeKeeper instanceof VideoDisplay.BasicVideoTimeKeeper))

Example source: org.openimaj/core-video, openimaj/openimaj

/**
 * Cache the whole of the given video.
 *
 * @param <I> Type of {@link Image}
 * @param video The video to cache
 * @return A {@link VideoCache}
 */
public static <I extends Image<?,I>> VideoCache<I> cacheVideo( Video<I> video )
{
  VideoCache<I> vc = new VideoCache<I>( video.getWidth(), 
      video.getHeight(), video.getFPS() );
  video.reset();
  while( video.hasNextFrame() )
    vc.addFrame( video.getNextFrame().clone() );
  return vc;
}
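The caching pattern above (reset the source, decode every frame once, store a copy of each) can be sketched with plain Java collections. FrameCache and its String frames are illustrative stand-ins for VideoCache and OpenIMAJ images, not the library's API.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the cacheVideo() pattern: drain the frame source once and keep
// a copy of every frame so it can be replayed with random access afterwards.
// FrameCache and the String "frames" are hypothetical stand-ins for
// VideoCache and OpenIMAJ Image types.
class FrameCache {
    private final List<String> cached = new ArrayList<>();

    static FrameCache cacheAll(Iterator<String> frameSource) {
        FrameCache cache = new FrameCache();
        while (frameSource.hasNext())
            cache.cached.add(frameSource.next()); // the real code clones each frame
        return cache;
    }

    int getNumberOfFrames() {
        return cached.size();
    }

    String getFrame(int i) {
        return cached.get(i); // random access, unlike the sequential source
    }

    public static void main(String[] args) {
        FrameCache cache = FrameCache.cacheAll(List.of("f0", "f1", "f2").iterator());
        System.out.println(cache.getNumberOfFrames()); // 3
        System.out.println(cache.getFrame(1));         // f1
    }
}
```

Note that the real implementation calls clone() on every frame: many decoders reuse one frame buffer across getNextFrame() calls, so storing the returned reference directly would leave the cache full of identical frames.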

Example source: openimaj/openimaj, org.openimaj/core-video

/**
 * Cache the given time range from the given video.
 *
 * @param <I> The type of the video frames
 * @param video The video to cache
 * @param start The start of the video to cache
 * @param end The end of the video to cache
 * @return A {@link VideoCache}
 */
public static <I extends Image<?,I>> VideoCache<I> cacheVideo( Video<I> video,
    VideoTimecode start, VideoTimecode end )
{
  VideoCache<I> vc = new VideoCache<I>( video.getWidth(),
      video.getHeight(), video.getFPS() );
  video.setCurrentFrameIndex( start.getFrameNumber() );
  while( video.hasNextFrame() &&
      video.getCurrentFrameIndex() < end.getFrameNumber() )
    vc.addFrame( video.getNextFrame().clone() );
  return vc;
}

Example source: org.openimaj/sandbox, openimaj/openimaj

FeatureTable trackFeatures(Video<FImage> video, int nFeatures, boolean replace) {
  final TrackingContext tc = new TrackingContext();
  final FeatureList fl = new FeatureList(nFeatures);
  final FeatureTable ft = new FeatureTable(nFeatures);
  final KLTTracker tracker = new KLTTracker(tc, fl);
  tc.setSequentialMode(true);
  tc.setWriteInternalImages(false);
  tc.setAffineConsistencyCheck(-1);
  FImage prev = video.getCurrentFrame();
  tracker.selectGoodFeatures(prev);
  ft.storeFeatureList(fl, 0);
  while (video.hasNextFrame()) {
    final FImage next = video.getNextFrame();
    tracker.trackFeatures(prev, next);
    if (replace)
      tracker.replaceLostFeatures(next);
    prev = next;
    ft.storeFeatureList(fl, video.getCurrentFrameIndex());
  }
  return ft;
}

Example source: openimaj/openimaj, org.openimaj/video-processing

image = video.getNextFrame();
