This article collects Java code examples for org.apache.samza.Partition.<init>(), showing how Partition.<init>() is used in practice. The examples were extracted from selected projects hosted on platforms such as Github, Stackoverflow, and Maven, and should serve as useful references. Details of the Partition.<init>() method:
Package path: org.apache.samza.Partition
Class name: Partition
Method name: <init>
Constructs a new Samza stream partition from a specified partition number.
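Before turning to the extracted examples, here is a minimal, self-contained sketch of the constructor on its own. It assumes only the samza-api classes Partition and SystemStreamPartition; the class name PartitionExample and the "kafka" / "PageViewEvent" system and stream names are illustrative placeholders, not part of Samza.

import org.apache.samza.Partition;
import org.apache.samza.system.SystemStreamPartition;

public class PartitionExample {
  public static void main(String[] args) {
    // Construct a partition from its zero-based partition number.
    Partition partition = new Partition(3);
    System.out.println(partition.getPartitionId()); // prints 3

    // A Partition is typically used as one component of a SystemStreamPartition,
    // which identifies the (system, stream, partition) a task consumes from.
    // "kafka" and "PageViewEvent" are hypothetical names for illustration only.
    SystemStreamPartition ssp = new SystemStreamPartition("kafka", "PageViewEvent", partition);
    System.out.println(ssp);
  }
}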
Code example source: apache/samza
@Override
public Object deserializeKey(String sspString, DeserializationContext ctxt) throws IOException {
  int idx = sspString.indexOf('.');
  int lastIdx = sspString.lastIndexOf('.');
  if (idx < 0 || lastIdx < 0) {
    throw new IllegalArgumentException("System stream partition expected in format 'system.stream.partition'");
  }
  return new SystemStreamPartition(
      new SystemStream(sspString.substring(0, idx), sspString.substring(idx + 1, lastIdx)),
      new Partition(Integer.parseInt(sspString.substring(lastIdx + 1))));
}
Code example source: apache/samza
public CoordinatorStreamSystemConsumer(SystemStream coordinatorSystemStream, SystemConsumer systemConsumer, SystemAdmin systemAdmin) {
  this.coordinatorSystemStreamPartition = new SystemStreamPartition(coordinatorSystemStream, new Partition(0));
  this.systemConsumer = systemConsumer;
  this.systemAdmin = systemAdmin;
  this.configMap = new HashMap<>();
  this.isBootstrapped = false;
  this.keySerde = new JsonSerde<>();
  this.messageSerde = new JsonSerde<>();
}
Code example source: apache/samza
@Override
public SystemStreamPartition getPreviousSSP(SystemStreamPartition currentSystemStreamPartition, int previousPartitionCount, int currentPartitionCount) {
  Preconditions.checkNotNull(currentSystemStreamPartition);
  Preconditions.checkArgument(currentPartitionCount % previousPartitionCount == 0,
      String.format("New partition count: %d should be a multiple of previous partition count: %d.", currentPartitionCount, previousPartitionCount));
  Partition partition = currentSystemStreamPartition.getPartition();
  Preconditions.checkNotNull(partition, String.format("SystemStreamPartition: %s cannot have null partition", currentSystemStreamPartition));
  int currentPartitionId = partition.getPartitionId();
  int previousPartitionId = currentPartitionId % previousPartitionCount;
  return new SystemStreamPartition(currentSystemStreamPartition.getSystemStream(), new Partition(previousPartitionId));
}
Code example source: apache/samza
public static TaskModel getTaskModel(int partitionId) {
  return new TaskModel(getTaskName(partitionId),
      new HashSet<>(
          Arrays.asList(new SystemStreamPartition[]{new SystemStreamPartition("System", "Stream", new Partition(partitionId))})),
      new Partition(partitionId));
}
Code example source: apache/samza
public void register() {
  SystemStreamPartition ssp = new SystemStreamPartition(stream, new Partition(0));
  consumer.register(ssp, "");
  isRegistered = true;
}
Code example source: apache/samza
@Override
public SystemStreamPartition deserialize(JsonParser jsonParser, DeserializationContext context) throws IOException, JsonProcessingException {
  ObjectCodec oc = jsonParser.getCodec();
  JsonNode node = oc.readTree(jsonParser);
  String system = node.get("system").getTextValue();
  String stream = node.get("stream").getTextValue();
  Partition partition = new Partition(node.get("partition").getIntValue());
  return new SystemStreamPartition(system, stream, partition);
}
Code example source: apache/samza
@Override
public Map<String, SystemStreamMetadata> getSystemStreamMetadata(Set<String> streamNames) {
  return streamNames.stream()
      .collect(Collectors.toMap(Function.identity(), streamName -> new SystemStreamMetadata(streamName,
          Collections.singletonMap(new Partition(0),
              new SystemStreamMetadata.SystemStreamPartitionMetadata(null, END_OF_STREAM_OFFSET, null)))));
}
Code example source: apache/samza
@Test
public void testKeyOverriding() {
  Map<Partition, List<String>> input = new HashMap<>();
  input.put(new Partition(0), Collections.singletonList("path_0"));
  input.put(new Partition(0), Collections.singletonList("new_path_0"));
  String json = PartitionDescriptorUtil.getJsonFromDescriptorMap(input);
  Map<Partition, List<String>> output = PartitionDescriptorUtil.getDescriptorMapFromJson(json);
  Assert.assertEquals(1, output.entrySet().size());
  Assert.assertTrue(output.containsKey(new Partition(0)));
  Assert.assertEquals("new_path_0", output.get(new Partition(0)).get(0));
}
Code example source: apache/samza
@Override
public Map<String, SystemStreamMetadata> getSystemStreamMetadata(Set<String> streamNames) {
  Map<String, SystemStreamMetadata> map = new HashMap<>();
  for (String stream : streamNames) {
    Map<Partition, SystemStreamMetadata.SystemStreamPartitionMetadata> m = new HashMap<>();
    for (int i = 0; i < streamToPartitions.get(stream); i++) {
      m.put(new Partition(i), new SystemStreamMetadata.SystemStreamPartitionMetadata("", "", ""));
    }
    map.put(stream, new SystemStreamMetadata(stream, m));
  }
  return map;
}
Code example source: apache/samza
/**
 * Create a new {@link SystemAdmin} that returns the provided oldest and newest offsets for its topics.
 */
private SystemAdmin newAdmin(String oldestOffset, String newestOffset) {
  SystemStreamMetadata checkpointTopicMetadata = new SystemStreamMetadata(CHECKPOINT_TOPIC,
      ImmutableMap.of(new Partition(0), new SystemStreamPartitionMetadata(oldestOffset,
          newestOffset, Integer.toString(Integer.parseInt(newestOffset) + 1))));
  SystemAdmin mockAdmin = mock(SystemAdmin.class);
  when(mockAdmin.getSystemStreamMetadata(Collections.singleton(CHECKPOINT_TOPIC))).thenReturn(
      ImmutableMap.of(CHECKPOINT_TOPIC, checkpointTopicMetadata));
  return mockAdmin;
}
Code example source: apache/samza
@Test
public void testGetSSPMetadataZeroUpcomingOffset() {
  SystemStreamPartition ssp = new SystemStreamPartition(TEST_SYSTEM, VALID_TOPIC, new Partition(0));
  TopicPartition topicPartition = new TopicPartition(VALID_TOPIC, 0);
  when(mockKafkaConsumer.beginningOffsets(ImmutableList.of(topicPartition))).thenReturn(
      ImmutableMap.of(topicPartition, -1L));
  when(mockKafkaConsumer.endOffsets(ImmutableList.of(topicPartition))).thenReturn(
      ImmutableMap.of(topicPartition, 0L));
  Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata> expected =
      ImmutableMap.of(ssp, new SystemStreamMetadata.SystemStreamPartitionMetadata("0", null, "0"));
  assertEquals(kafkaSystemAdmin.getSSPMetadata(ImmutableSet.of(ssp)), expected);
}
Code example source: apache/samza
@Test(expected = SamzaException.class)
public void testInvalidPartitionDescriptor() {
  SystemStreamPartition ssp = new SystemStreamPartition("hdfs", "testStream", new Partition(0));
  new MultiFileHdfsReader(HdfsReaderFactory.ReaderType.AVRO, ssp,
      new ArrayList<>(), "0:0");
  Assert.fail();
}
Code example source: apache/samza
@Test
public void testSerializeTaskModel() throws IOException {
  TaskModel taskModel = new TaskModel(new TaskName("Standby Partition 0"), new HashSet<>(), new Partition(0),
      TaskMode.Standby);
  String serializedString = this.samzaObjectMapper.writeValueAsString(taskModel);
  TaskModel deserializedTaskModel = this.samzaObjectMapper.readValue(serializedString, TaskModel.class);
  assertEquals(taskModel, deserializedTaskModel);
  String sampleSerializedString = "{\"task-name\":\"Partition 0\",\"system-stream-partitions\":[],\"changelog-partition\":0}";
  deserializedTaskModel = this.samzaObjectMapper.readValue(sampleSerializedString, TaskModel.class);
  taskModel = new TaskModel(new TaskName("Partition 0"), new HashSet<>(), new Partition(0), TaskMode.Active);
  assertEquals(taskModel, deserializedTaskModel);
}
Code example source: apache/samza
@Test(expected = SamzaException.class)
public void testOutOfRangeFileIndex() {
  SystemStreamPartition ssp = new SystemStreamPartition("hdfs", "testStream", new Partition(0));
  new MultiFileHdfsReader(HdfsReaderFactory.ReaderType.AVRO, ssp,
      Arrays.asList(descriptors), "3:0");
  Assert.fail();
}
Code example source: apache/samza
/**
 * Given an SSP and offset, setStartingOffset should delegate to the offset manager.
 */
@Test
public void testSetStartingOffset() {
  SystemStreamPartition ssp = new SystemStreamPartition("mySystem", "myStream", new Partition(0));
  taskContext.setStartingOffset(ssp, "123");
  verify(offsetManager).setStartingOffset(TASK_NAME, ssp, "123");
}
Code example source: apache/samza
@Test
public void testEndOfStreamMessage() {
  EndOfStreamMessage eos = new EndOfStreamMessage("test-task");
  produceMessages(eos);
  Set<SystemStreamPartition> sspsToPoll = IntStream.range(0, PARTITION_COUNT)
      .mapToObj(partition -> new SystemStreamPartition(SYSTEM_STREAM, new Partition(partition)))
      .collect(Collectors.toSet());
  List<IncomingMessageEnvelope> results = consumeRawMessages(sspsToPoll);
  assertEquals(1, results.size());
  assertTrue(results.get(0).isEndOfStream());
}
Code example source: apache/samza
@Test
public void testStartpointLatest() {
  StartpointUpcoming startpoint = new StartpointUpcoming();
  Assert.assertTrue(startpoint.getCreationTimestamp() <= Instant.now().toEpochMilli());
  MockStartpointVisitor mockStartpointVisitorConsumer = new MockStartpointVisitor();
  startpoint.apply(new SystemStreamPartition("sys", "stream", new Partition(1)), mockStartpointVisitorConsumer);
  Assert.assertEquals(StartpointUpcoming.class, mockStartpointVisitorConsumer.visitedClass);
}
Code example source: apache/samza
@Test
public void testStartpointEarliest() {
  StartpointOldest startpoint = new StartpointOldest();
  Assert.assertTrue(startpoint.getCreationTimestamp() <= Instant.now().toEpochMilli());
  MockStartpointVisitor mockStartpointVisitorConsumer = new MockStartpointVisitor();
  startpoint.apply(new SystemStreamPartition("sys", "stream", new Partition(1)), mockStartpointVisitorConsumer);
  Assert.assertEquals(StartpointOldest.class, mockStartpointVisitorConsumer.visitedClass);
}
Code example source: apache/samza
@Test
public void testStartpointTimestamp() {
  StartpointTimestamp startpoint = new StartpointTimestamp(2222222L);
  Assert.assertEquals(2222222L, startpoint.getTimestampOffset().longValue());
  Assert.assertTrue(startpoint.getCreationTimestamp() <= Instant.now().toEpochMilli());
  MockStartpointVisitor mockStartpointVisitorConsumer = new MockStartpointVisitor();
  startpoint.apply(new SystemStreamPartition("sys", "stream", new Partition(1)), mockStartpointVisitorConsumer);
  Assert.assertEquals(StartpointTimestamp.class, mockStartpointVisitorConsumer.visitedClass);
}
Code example source: apache/samza
@Test
public void testStartpointSpecific() {
  StartpointSpecific startpoint = new StartpointSpecific("123");
  Assert.assertEquals("123", startpoint.getSpecificOffset());
  Assert.assertTrue(startpoint.getCreationTimestamp() <= Instant.now().toEpochMilli());
  MockStartpointVisitor mockStartpointVisitorConsumer = new MockStartpointVisitor();
  startpoint.apply(new SystemStreamPartition("sys", "stream", new Partition(1)), mockStartpointVisitorConsumer);
  Assert.assertEquals(StartpointSpecific.class, mockStartpointVisitorConsumer.visitedClass);
}