This article collects code examples for the Java method org.eclipse.jetty.util.ajax.JSON.toString() and shows how JSON.toString() is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they are representative and should be useful references. Details of JSON.toString() are as follows:

Package path: org.eclipse.jetty.util.ajax.JSON
Class name: JSON
Method name: toString

(No method description available.)
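Before the project examples, here is a minimal standalone sketch of the basic behavior, assuming jetty-util is on the classpath (the class name and sample values are invented for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

import org.eclipse.jetty.util.ajax.JSON;

public class JsonToStringDemo {
  public static void main(String[] args) {
    // A Map serializes to a JSON object; LinkedHashMap keeps key order stable.
    Map<String, Object> m = new LinkedHashMap<>();
    m.put("name", "datanode-1"); // hypothetical values
    m.put("port", 50010);
    m.put("healthy", true);
    // Prints (approximately): {"name":"datanode-1","port":50010,"healthy":true}
    System.out.println(JSON.toString(m));
  }
}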
Code example source: apache/hbase

@GET
@Path("{" + PATH + ":.*}")
@Produces({MediaType.APPLICATION_JSON})
public Response get(
    @PathParam(PATH) @DefaultValue("UNKNOWN_" + PATH) final String path,
    @QueryParam(OP) @DefaultValue("UNKNOWN_" + OP) final String op
    ) throws IOException {
  LOG.info("get: " + PATH + "=" + path + ", " + OP + "=" + op);
  final Map<String, Object> m = new TreeMap<>();
  m.put(PATH, path);
  m.put(OP, op);
  final String js = JSON.toString(m);
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
Code example source: org.apache.hadoop/hadoop-hdfs

public String getNNDirectorySize() {
  return JSON.toString(nameDirSizeMap);
}
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of a map with
 * the volume name as the key and, as the value, a map of volume
 * attribute keys to their values.
 */
@Override // DataNodeMXBean
public String getVolumeInfo() {
  Preconditions.checkNotNull(data, "Storage not yet initialized");
  return JSON.toString(data.getVolumeInfoMap());
}
Code example source: org.apache.hadoop/hadoop-hdfs

@Override // DataNodeMXBean
public String getSlowDisks() {
  if (diskMetrics == null) {
    // Disk stats not enabled
    return null;
  }
  Set<String> slowDisks = diskMetrics.getDiskOutliersStats().keySet();
  return JSON.toString(slowDisks);
}
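Note that this example passes a Set rather than a Map; Jetty's JSON appears to serialize any Collection as a JSON array. A small sketch of that behavior (the disk names are made up):

import java.util.LinkedHashSet;
import java.util.Set;

import org.eclipse.jetty.util.ajax.JSON;

public class SlowDisksJsonSketch {
  public static void main(String[] args) {
    // A Set (or any Collection) serializes to a JSON array of its elements.
    Set<String> slowDisks = new LinkedHashSet<>();
    slowDisks.add("/data/disk1"); // hypothetical disk names
    slowDisks.add("/data/disk2");
    // Prints (approximately): ["/data/disk1","/data/disk2"]
    System.out.println(JSON.toString(slowDisks));
  }
}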
Code example source: org.apache.hadoop/hadoop-hdfs

@Override
public String getSnapshotStats() {
  Map<String, Object> info = new HashMap<String, Object>();
  info.put("SnapshottableDirectories", this.getNumSnapshottableDirs());
  info.put("Snapshots", this.getNumSnapshots());
  return JSON.toString(info);
}
Code example source: org.apache.hadoop/hadoop-hdfs

@Override // NameNodeMXBean
public String getJournalTransactionInfo() {
  Map<String, String> txnIdMap = new HashMap<String, String>();
  txnIdMap.put("LastAppliedOrWrittenTxId",
      Long.toString(this.getFSImage().getLastAppliedOrWrittenTxId()));
  txnIdMap.put("MostRecentCheckpointTxId",
      Long.toString(this.getFSImage().getMostRecentCheckpointTxId()));
  return JSON.toString(txnIdMap);
}
Code example source: org.apache.hadoop/hadoop-hdfs

@Override // NameNodeMXBean
public String getCorruptFiles() {
  List<String> list = new ArrayList<String>();
  Collection<FSNamesystem.CorruptFileBlockInfo> corruptFileBlocks;
  try {
    corruptFileBlocks = listCorruptFileBlocks("/", null);
    int corruptFileCount = corruptFileBlocks.size();
    if (corruptFileCount != 0) {
      for (FSNamesystem.CorruptFileBlockInfo c : corruptFileBlocks) {
        list.add(c.toString());
      }
    }
  } catch (StandbyException e) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Get corrupt file blocks returned error: " + e.getMessage());
    }
  } catch (IOException e) {
    LOG.warn("Get corrupt file blocks returned error", e);
  }
  return JSON.toString(list);
}
Code example source: org.apache.hadoop/hadoop-hdfs

return JSON.toString(status);
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of an array;
 * each element of the array is a map containing the information
 * about a block pool service actor.
 */
@Override // DataNodeMXBean
public String getBPServiceActorInfo() {
  final ArrayList<Map<String, String>> infoArray =
      new ArrayList<Map<String, String>>();
  for (BPOfferService bpos : blockPoolManager.getAllNamenodeThreads()) {
    if (bpos != null) {
      for (BPServiceActor actor : bpos.getBPServiceActors()) {
        infoArray.add(actor.getActorInfoMap());
      }
    }
  }
  return JSON.toString(infoArray);
}
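The shape this returns can be reproduced in isolation; the actor field names and values below are invented for illustration, not the real keys of getActorInfoMap():

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.eclipse.jetty.util.ajax.JSON;

public class ActorInfoJsonSketch {
  public static void main(String[] args) {
    // A List of Maps serializes to a JSON array of objects.
    List<Map<String, String>> infoArray = new ArrayList<>();
    Map<String, String> actor = new LinkedHashMap<>();
    actor.put("NamenodeAddress", "nn1.example.com:8020"); // hypothetical entries
    actor.put("ActorState", "RUNNING");
    infoArray.add(actor);
    // Prints (approximately):
    // [{"NamenodeAddress":"nn1.example.com:8020","ActorState":"RUNNING"}]
    System.out.println(JSON.toString(infoArray));
  }
}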
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of a map with
 * the name node host name as the key and the block pool Id as the value.
 * Note that, if there are multiple NNs in an HA nameservice,
 * a given block pool may be represented twice.
 */
@Override // DataNodeMXBean
public String getNamenodeAddresses() {
  final Map<String, String> info = new HashMap<String, String>();
  for (BPOfferService bpos : blockPoolManager.getAllNamenodeThreads()) {
    if (bpos != null) {
      for (BPServiceActor actor : bpos.getBPServiceActors()) {
        info.put(actor.getNNSocketAddress().getHostName(),
            bpos.getBlockPoolId());
      }
    }
  }
  return JSON.toString(info);
}
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of a map with
 * the host name as the key and, as the value, a map of dead node
 * attribute keys to their values.
 */
@Override // NameNodeMXBean
public String getDeadNodes() {
  final Map<String, Map<String, Object>> info =
      new HashMap<String, Map<String, Object>>();
  final List<DatanodeDescriptor> dead = new ArrayList<DatanodeDescriptor>();
  blockManager.getDatanodeManager().fetchDatanodes(null, dead, false);
  for (DatanodeDescriptor node : dead) {
    Map<String, Object> innerinfo = ImmutableMap.<String, Object>builder()
        .put("lastContact", getLastContact(node))
        .put("decommissioned", node.isDecommissioned())
        .put("adminState", node.getAdminState().toString())
        .put("xferaddr", node.getXferAddr())
        .build();
    info.put(node.getHostName() + ":" + node.getXferPort(), innerinfo);
  }
  return JSON.toString(info);
}
Code example source: org.apache.hadoop/hadoop-hdfs

info.put("nodeUsage", innerInfo);
return JSON.toString(info);
Code example source: org.apache.hadoop/hadoop-hdfs

return JSON.toString(jasList);
Code example source: org.apache.hadoop/hadoop-hdfs

@Override // NameNodeMXBean
public String getNameDirStatuses() {
  Map<String, Map<File, StorageDirType>> statusMap =
      new HashMap<String, Map<File, StorageDirType>>();
  Map<File, StorageDirType> activeDirs = new HashMap<File, StorageDirType>();
  for (Iterator<StorageDirectory> it
      = getFSImage().getStorage().dirIterator(); it.hasNext();) {
    StorageDirectory st = it.next();
    activeDirs.put(st.getRoot(), st.getStorageDirType());
  }
  statusMap.put("active", activeDirs);
  List<Storage.StorageDirectory> removedStorageDirs
      = getFSImage().getStorage().getRemovedStorageDirs();
  Map<File, StorageDirType> failedDirs = new HashMap<File, StorageDirType>();
  for (StorageDirectory st : removedStorageDirs) {
    failedDirs.put(st.getRoot(), st.getStorageDirType());
  }
  statusMap.put("failed", failedDirs);
  return JSON.toString(statusMap);
}
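Note that the inner maps here use File objects as keys even though JSON object keys must be strings; Jetty's JSON appears to fall back to the key's toString() value when writing the key. A hedged sketch of that behavior (the path and label are invented):

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

import org.eclipse.jetty.util.ajax.JSON;

public class NonStringKeySketch {
  public static void main(String[] args) {
    // Non-String map keys are written via toString(); for File that is the path.
    Map<File, String> dirs = new LinkedHashMap<>();
    dirs.put(new File("/hadoop/dfs/name"), "IMAGE_AND_EDITS"); // hypothetical
    // Prints (approximately): {"/hadoop/dfs/name":"IMAGE_AND_EDITS"}
    System.out.println(JSON.toString(dirs));
  }
}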
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of a map with
 * the host name as the key and, as the value, a map of
 * decommissioning node attribute keys to their values.
 */
@Override // NameNodeMXBean
public String getDecomNodes() {
  final Map<String, Map<String, Object>> info =
      new HashMap<String, Map<String, Object>>();
  final List<DatanodeDescriptor> decomNodeList =
      blockManager.getDatanodeManager().getDecommissioningNodes();
  for (DatanodeDescriptor node : decomNodeList) {
    Map<String, Object> innerinfo = ImmutableMap
        .<String, Object>builder()
        .put("xferaddr", node.getXferAddr())
        .put("underReplicatedBlocks",
            node.getLeavingServiceStatus().getUnderReplicatedBlocks())
        .put("decommissionOnlyReplicas",
            node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
        .put("underReplicateInOpenFiles",
            node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
        .build();
    info.put(node.getHostName() + ":" + node.getXferPort(), innerinfo);
  }
  return JSON.toString(info);
}
Code example source: org.apache.hadoop/hadoop-hdfs

/**
 * The returned information is a JSON representation of a map with
 * the host name of each node entering maintenance as the key and,
 * as the value, a map of various node attributes to their values.
 */
@Override // NameNodeMXBean
public String getEnteringMaintenanceNodes() {
  final Map<String, Map<String, Object>> nodesMap =
      new HashMap<String, Map<String, Object>>();
  final List<DatanodeDescriptor> enteringMaintenanceNodeList =
      blockManager.getDatanodeManager().getEnteringMaintenanceNodes();
  for (DatanodeDescriptor node : enteringMaintenanceNodeList) {
    Map<String, Object> attrMap = ImmutableMap
        .<String, Object>builder()
        .put("xferaddr", node.getXferAddr())
        .put("underReplicatedBlocks",
            node.getLeavingServiceStatus().getUnderReplicatedBlocks())
        .put("maintenanceOnlyReplicas",
            node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
        .put("underReplicateInOpenFiles",
            node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
        .build();
    nodesMap.put(node.getHostName() + ":" + node.getXferPort(), attrMap);
  }
  return JSON.toString(nodesMap);
}
Code example source: org.apache.hadoop/hadoop-hdfs

return JSON.toString(info);
Code example source: tadglines/Socket.IO-Java

protected void writeData(ServletResponse response, String data) throws IOException {
  idleCheck.activity();
  response.getOutputStream().print("<script>parent.s._(" + JSON.toString(data) + ", document);</script>");
  response.flushBuffer();
}
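This works because calling JSON.toString() on a String yields a quoted, escaped JSON string literal, which is what lets the transport splice the payload into the generated <script> block as a single JavaScript argument. A minimal sketch of the escaping (the payload is invented):

import org.eclipse.jetty.util.ajax.JSON;

public class StringEscapeSketch {
  public static void main(String[] args) {
    // A String serializes to a double-quoted JSON literal with
    // newlines and quotes escaped.
    String data = "line1\n\"quoted\""; // hypothetical payload
    // Prints (approximately): "line1\n\"quoted\""
    System.out.println(JSON.toString(data));
  }
}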
Code example source: tadglines/Socket.IO-Java

@Override
public void onConnect(SocketIOOutbound outbound) {
  this.outbound = outbound;
  connections.offer(this);
  try {
    outbound.sendMessage(SocketIOFrame.JSON_MESSAGE_TYPE, JSON.toString(
        Collections.singletonMap("welcome", "Welcome to GWT Chat!")));
  } catch (SocketIOException e) {
    outbound.disconnect();
  }
  broadcast(SocketIOFrame.JSON_MESSAGE_TYPE, JSON.toString(
      Collections.singletonMap("announcement", sessionId + " connected")));
}
Code example source: org.scalatra.socketio-java/socketio-core

protected void writeData(ServletResponse response, String data) throws IOException {
  try {
    getIdleCheck().activity();
  } catch (Exception e) {
    Log.warn(e);
  }
  response.getOutputStream().print("<script>parent.s._(" + JSON.toString(data) + ", document);</script>");
  response.flushBuffer();
}