Usage of the com.mongodb.gridfs.GridFS class, with code examples

x33g5p2x · reposted 2022-01-20 under Other

This article collects code examples of the Java class com.mongodb.gridfs.GridFS and shows how the class is used in practice. The examples are extracted from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, so they have solid reference value. Details of the GridFS class:

Package: com.mongodb.gridfs
Class name: GridFS

About GridFS

Implementation of GridFS, a specification for storing and retrieving files that exceed the BSON document size limit of 16 MB.

Instead of storing a file in a single document, GridFS divides the file into parts, or chunks, and stores each chunk as a separate document. By default, GridFS limits chunk size to 255 KB. GridFS uses two collections to store a file: one collection stores the file's chunks, and the other stores the file's metadata.

When you query a GridFS store for a file, the driver or client reassembles the chunks as needed. You can perform range queries on files stored through GridFS, and you can access information from arbitrary sections of files, which lets you "skip" into the middle of a video or audio file.

GridFS is useful not only for storing files that exceed 16 MB, but also for storing any file you want to access without loading it entirely into memory. For more on the use cases of GridFS, see the official MongoDB documentation.
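The chunking scheme described above can be sketched in plain Java without a MongoDB connection. The following is an illustrative sketch, not driver code: it splits a byte array into 255 KB slices and builds maps shaped like the documents GridFS writes to its chunks collection (conventionally fs.chunks); the class name and the helper are hypothetical, while the field names (files_id, n, data) and the 255 KB default follow the GridFS specification.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridFsChunkSketch {
    // Default chunk size used by the legacy driver: 255 KB = 261120 bytes.
    static final int DEFAULT_CHUNK_SIZE = 255 * 1024;

    // Split the payload into chunk documents, one per DEFAULT_CHUNK_SIZE slice.
    static List<Map<String, Object>> toChunkDocs(Object fileId, byte[] data) {
        List<Map<String, Object>> chunks = new ArrayList<>();
        for (int offset = 0, n = 0; offset < data.length; offset += DEFAULT_CHUNK_SIZE, n++) {
            int end = Math.min(offset + DEFAULT_CHUNK_SIZE, data.length);
            Map<String, Object> chunk = new HashMap<>();
            chunk.put("files_id", fileId);  // back-reference to the fs.files document
            chunk.put("n", n);              // chunk sequence number
            chunk.put("data", Arrays.copyOfRange(data, offset, end));
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[1_000_000]; // roughly a 1 MB file
        List<Map<String, Object>> chunks = toChunkDocs("demo-id", payload);
        // 1,000,000 bytes at 261,120 bytes per chunk rounds up to 4 chunks.
        System.out.println("chunks: " + chunks.size());
        System.out.println("last chunk bytes: " + ((byte[]) chunks.get(3).get("data")).length);
    }
}
```

Reading a file back is the mirror image: fetch the chunk documents sorted by n and concatenate their data fields, which is why range reads can start at any chunk boundary without touching the rest of the file.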

Code examples

Code example origin: org.mongodb/mongo-java-driver

    @SuppressWarnings("deprecation") // We know GridFS uses the old API. A new API version will be addressed later.
    private static GridFS getGridFS() throws Exception {
        if (gridFS == null) {
            gridFS = new GridFS(getMongo().getDB(db));
        }
        return gridFS;
    }

Code example origin: org.mongodb/mongo-java-driver

    // "list": iterate over every stored file
    GridFS fs = getGridFS();
    DBCursor fileListCursor = fs.getFileList();
    try {
        while (fileListCursor.hasNext()) {
            System.out.println(fileListCursor.next());
        }
    } finally {
        fileListCursor.close();
    }

    // "get": fetch a file by name
    String fn = args[i + 1];
    GridFSDBFile f = fs.findOne(fn);
    if (f == null) {
        System.err.println("can't find file: " + fn);
        return;
    }

    // "put": store a local file
    GridFSInputFile in = fs.createFile(new File(fn));
    in.save();
    in.validate();

Code example origin: org.mongodb/mongo-java-driver

    /**
     * Removes all files matching the given filename.
     *
     * @param filename the name of the file to be removed
     * @throws com.mongodb.MongoException if the operation fails
     */
    public void remove(final String filename) {
        if (filename == null) {
            throw new IllegalArgumentException("filename can not be null");
        }
        remove(new BasicDBObject("filename", filename));
    }

Code example origin: org.apache.camel/camel-mongodb-gridfs

    if (ptsCollection.count() < 1000) {
        ptsCollection.createIndex(new BasicDBObject("id", 1));
    }
    persistentTimestamp = ptsCollection.findOne(new BasicDBObject("id", endpoint.getPersistentTSObject()));
    if (persistentTimestamp == null) {
        persistentTimestamp = new BasicDBObject("id", endpoint.getPersistentTSObject());
        fromDate = new java.util.Date();
        persistentTimestamp.put("timestamp", fromDate);
    }
    file = endpoint.getGridFs().findOne(new BasicDBObject("_id", file.getId()));

Code example origin: org.mongodb/mongo-java-driver

    /**
     * Finds one file matching the given objectId.
     *
     * @param objectId the objectId of the file stored on a server
     * @return a gridfs file
     * @throws com.mongodb.MongoException if the operation fails
     */
    public GridFSDBFile findOne(final ObjectId objectId) {
        return findOne(new BasicDBObject("_id", objectId));
    }

Code example origin: richardwilly98/elasticsearch-river-mongodb

    logger.info("MongoDBRiver is beginning initial import of " + collection.getFullName());
    boolean inProgress = true;
    String lastId = null;
    if (logger.isTraceEnabled()) {
        logger.trace("Collection {} - count: {}", collection.getName(), safeCount(collection, timestamp.getClass()));
    }
    cursor = collection.find(getFilterForInitialImport(definition.getMongoCollectionFilter(), lastId))
            .sort(new BasicDBObject("_id", 1));
    while (cursor.hasNext() && context.getStatus() == Status.RUNNING) {
        DBObject object = cursor.next();
        // ...
    }

    // GridFS import: walk the file list instead of plain documents
    GridFS grid = new GridFS(mongoClient.getDB(definition.getMongoDb()), definition.getMongoCollection());
    cursor = grid.getFileList();
    while (cursor.hasNext()) {
        DBObject object = cursor.next();
        if (object instanceof GridFSDBFile) {
            GridFSDBFile file = grid.findOne(new ObjectId(object.get(MongoDBRiver.MONGODB_ID_FIELD).toString()));
            lastId = addInsertToStream(null, file);
        }
    }

Code example origin: org.mongodb/mongo-java-driver

    /**
     * Finds a list of files matching the given filename.
     *
     * @param filename the filename to look for
     * @return list of gridfs files
     * @throws com.mongodb.MongoException if the operation fails
     */
    public List<GridFSDBFile> find(final String filename) {
        return find(new BasicDBObject("filename", filename));
    }

Code example origin: Findwise/Hydra

    @Override
    public boolean save(Object id, String fileName, InputStream file) {
        pipelinefs.remove(new BasicDBObject(MongoDocument.MONGO_ID_KEY, id));
        GridFSInputFile inputFile = pipelinefs.createFile(file, fileName);
        inputFile.put("_id", id);
        inputFile.save();
        return true;
    }

Code example origin: stackoverflow.com

    String mystring = new String(); // an empty string
    GridFS gridFS = new GridFS(mongoTemplate.getDB(), "noteAndFile");
    GridFSInputFile gfsFile = gridFS.createFile(
            new ByteArrayInputStream(mystring.getBytes()));
    BasicDBObject meta = new BasicDBObject();
    meta.put("comments", "hi");
    gfsFile.put("metadata", meta);
    gfsFile.save();
    System.out.println(gfsFile.getId()); // prints the _id of the saved object

Code example origin: Findwise/Hydra

    @Override
    @Deprecated
    public void removeInactiveFiles() {
        BasicDBObject query = new BasicDBObject();
        query.put(MongoPipelineReader.ACTIVE_KEY, Stage.Mode.INACTIVE.toString());
        List<GridFSDBFile> list = pipelinefs.find(query);
        for (GridFSDBFile file : list) {
            pipelinefs.remove(file);
        }
    }

Code example origin: com.cognifide.aet/datastorage

    @Override
    public Artifact getArtifact(DBKey dbKey, String objectID) {
        Artifact artifact = null;
        GridFS gfs = getGridFS(dbKey);
        BasicDBObject query = new BasicDBObject();
        query.put(ID_FIELD_NAME, new ObjectId(objectID));
        GridFSDBFile file = gfs.findOne(query);
        if (file != null) {
            artifact = new Artifact(file.getInputStream(), file.getContentType());
        }
        return artifact;
    }

Code example origin: richardwilly98/elasticsearch-river-mongodb

    entry.put(MongoDBRiver.OPLOG_OBJECT, object = new BasicDBObject(MongoDBRiver.MONGODB_ID_FIELD, objectId));
    if (objectId == null) {
        throw new NullPointerException(MongoDBRiver.MONGODB_ID_FIELD);
    }
    GridFS grid = new GridFS(mongoShardClient.getDB(definition.getMongoDb()), collection);
    GridFSDBFile file = grid.findOne(new ObjectId(objectId));
    if (file != null) {
        logger.trace("Caught file: {} - {}", file.getId(), file.getFilename());
    }

Code example origin: org.apache.camel/camel-mongodb-gridfs

    GridFSInputFile gfsFile = endpoint.getGridFs().createFile(ins, filename, true);
    if (chunkSize != null && chunkSize > 0) {
        gfsFile.setChunkSize(chunkSize);
    }
    gfsFile.setContentType(ct);
    gfsFile.setMetaData(dbObject);
    // ...
    } else if ("remove".equals(operation)) {
        final String filename = exchange.getIn().getHeader(Exchange.FILE_NAME, String.class);
        endpoint.getGridFs().remove(filename);
    } else if ("findOne".equals(operation)) {
        final String filename = exchange.getIn().getHeader(Exchange.FILE_NAME, String.class);
        GridFSDBFile file = endpoint.getGridFs().findOne(filename);
        if (file != null) {
            exchange.getIn().setHeader(GridFsEndpoint.GRIDFS_METADATA, JSON.serialize(file.getMetaData()));
        }
    } else {
        // listing: with or without a filename filter
        DBCursor cursor;
        if (filename == null) {
            cursor = endpoint.getGridFs().getFileList();
        } else {
            cursor = endpoint.getGridFs().getFileList(new BasicDBObject("filename", filename));
        }
    }

Code example origin: Findwise/Hydra

    @Override
    public void deleteAll() {
        documents.remove(new BasicDBObject());
        documentfs.remove(new BasicDBObject());
    }

Code example origin: org.apache.jackrabbit/oak-mongomk

    private String saveBlob() throws IOException {
        BufferedInputStream bis = new BufferedInputStream(is);
        String md5 = calculateMd5(bis);
        GridFSDBFile gridFile = gridFS.findOne(new BasicDBObject("md5", md5));
        if (gridFile != null) {
            is.close();
            return md5;
        }
        GridFSInputFile gridFSInputFile = gridFS.createFile(bis, true);
        gridFSInputFile.save();
        return gridFSInputFile.getMD5();
    }

Code example origin: Findwise/Hydra

    @Override
    public boolean deleteFile(Object id) {
        DBObject obj = new BasicDBObject(MongoDocument.MONGO_ID_KEY, id);
        if (pipelinefs.find(obj).size() == 0) {
            return false;
        }
        pipelinefs.remove(obj);
        return true;
    }

Code example origin: com.commercehub.jclouds/jclouds-gridfs-blobstore

    private static GridFSDBFile getMostRecentlyUploadedFile(GridFS gridFS, String filename) {
        DBObject queryByFilename = new BasicDBObject("filename", filename);
        DBObject sortByUploadDateDescending = new BasicDBObject("uploadDate", -1);
        DBCursor dbCursor = gridFS.getFileList(queryByFilename, sortByUploadDateDescending);
        return dbCursor.hasNext() ? getGridFSDBFileForDBObject(gridFS, dbCursor.next()) : null;
    }

Code example origin: org.mongodb.mongo-hadoop/mongo-hadoop-core

    MongoClientURI inputURI = MongoConfigUtil.getInputURI(conf);
    GridFS gridFS = new GridFS(
            inputCollection.getDB(),
            inputCollection.getName());
    for (GridFSDBFile file : gridFS.find(query)) {
        // ...
    }

Code example origin: xbwen/bugu-mongo

    public String save(InputStream is, String filename, Map<String, Object> attributes) {
        GridFSInputFile f = fs.createFile(is);
        f.setChunkSize(chunkSize);
        f.setFilename(filename);
        setAttributes(f, attributes);
        f.save();
        return f.getId().toString();
    }

Code example origin: org.mongodb.mongo-hadoop/mongo-hadoop-core

    private GridFS getGridFS() {
        if (null == gridFS) {
            DBCollection rootCollection =
                    MongoConfigUtil.getCollection(inputURI);
            gridFS = new GridFS(
                    rootCollection.getDB(), rootCollection.getName());
        }
        return gridFS;
    }
