Java - unable to persist a "write" to a specific DataNode in HDFS

chy5wohz · posted on 2021-06-03 in Hadoop

I am trying to build an indexing strategy that requires index blocks to live on the same DataNode as the data blocks, in order to reduce latency during data retrieval. I have managed to write code that reads the data blocks belonging to a particular file. For writing, I open a socket connection to a specific DataNode, write my data, and then close the socket. Unfortunately, I am not sure "where" or "how" the data gets written with this approach, because when I check with hadoop fs -ls I cannot see my data anywhere (perhaps in some stray file?!), yet my program runs without any errors.
Here is my code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.channels.FileChannel;
import java.nio.file.OpenOption;
import java.nio.file.StandardOpenOption;
import java.util.Random;
import java.nio.ByteBuffer;

import javax.net.SocketFactory;

public class CopyOfChunkedIndexes {

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Usage: ChunkedIndexes <input path>");
            System.exit(-1);
        }

        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");               //for defaulting to HDFS rather than local filesystem
        conf.set("hadoop.security.authentication", "simple");               //disable authentication
        conf.set("hadoop.security.authorization", "false");                 //disable authorization

        Job job = Job.getInstance(conf, "Chunked Indexes");
        job.setJarByClass(CopyOfChunkedIndexes.class);

        Path inputPath = new Path("/user/hadoop-user/sample.txt");
        FileInputFormat.setInputPaths(job, inputPath);

        try{
            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            DFSClient dfsclient = dfs.getClient();

            System.out.println("Proceeding for file: " + inputPath.toString());

            FileStatus fileStatus = fs.getFileStatus(inputPath);
            BlockLocation[] bLocations = fs.getFileBlockLocations(inputPath, 0, fileStatus.getLen());

            for(int i = 0; i < bLocations.length; i++)
            {
                System.out.println("Block[" + + i + "]::");
                System.out.println("\nHost(s): ");

                String[] temp = bLocations[i].getHosts();
                for(int j = 0; j < temp.length; j++)
                {
                    System.out.println(temp[j] + "\t");
                }

                System.out.println("\nBlock length: " + bLocations[i].getLength() +
                                    "\n\nDataNode(s) hosting this block: ");

                temp = bLocations[i].getNames();
                for(int j = 0; j < temp.length; j++)
                {
                    System.out.println(temp[j] + "\t");
                }

                System.out.println("\nOffset: " + bLocations[i].getOffset());

                //READING A BLOCK
                FSDataInputStream in = fs.open(inputPath);
                in.seek(bLocations[i].getOffset());

                byte[] buf = new byte[(int)bLocations[i].getLength()];
                in.read(buf, 0, (int)bLocations[i].getLength());    // offset into buf is 0; the file position was already set by seek() above
                in.close();

                System.out.println(new String(buf, "UTF-8"));
                System.out.println("--------------------------------------------------------------------------------------------");
            }

            //WRITE A FILE TO A SPECIFIC DATANODE
            for(int i = 0; i < bLocations.length; i++)
            {
                System.out.println("Block[" + + i + "]::");
                String[] temp;

                System.out.println("\n\nDataNode(s) hosting this block: ");                        //Name(s) = datanode addresses

                temp = bLocations[i].getNames();
                for(int j = 0; j < temp.length; j++)
                {
                    System.out.println(temp[j].split(":")[0] + "\t" + temp[j].split(":")[1]);      //host vs. port
                }

                Socket sock = SocketFactory.getDefault().createSocket();
                InetSocketAddress targetAddr = new InetSocketAddress(temp[0].split(":")[0], Integer.parseInt(temp[0].split(":")[1]));
                NetUtils.connect(sock, targetAddr, 10000);
                sock.setSoTimeout(10000);

                OutputStream baseStream = NetUtils.getOutputStream(sock, 10000);
                DataOutputStream oStream = new DataOutputStream(new BufferedOutputStream(baseStream, 10000));
                oStream.writeBytes("-----------------------------------------Sample text-----------------------------------------------");

                sock.close();
                System.out.println("Data written, socket closed!");
            }

        }catch(Exception ex){
            ex.printStackTrace();
        }

    }
}

I would be very grateful for any help in figuring out what is going wrong. Thanks!
[PS: I am using Hadoop 2.2.0 on a Linux VM. I disabled authorization/authentication in the code above because I want to access a DataNode directly (without the "overhead" of authentication), since this is for testing purposes only.]

sq1bmfud 1#

All of your writes are discarded by the cluster because you did not go through the NameNode. Any modification made that way is treated as file corruption.
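For comparison, here is a minimal sketch of a write that does go through the NameNode, so the block locations get recorded and the file shows up in hadoop fs -ls. It reuses the hdfs://localhost:9000 address from the question; the output paths, the DataNode address and the class name are made up. The favored-nodes overload of DistributedFileSystem.create at the end is only a placement hint and may not be present (or honored) in every Hadoop 2.x release, so verify it against your version before relying on it:

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WriteThroughNamenode {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");              // same cluster as in the question

        FileSystem fs = FileSystem.get(conf);

        // Normal write: the NameNode picks the DataNodes and records the block
        // locations, so the file is visible under /user/hadoop-user afterwards.
        Path out = new Path("/user/hadoop-user/sample-index.bin");      // hypothetical output path
        FSDataOutputStream os = fs.create(out, true);
        os.writeBytes("-----Sample text-----");
        os.close();

        // Optional: pass "favored nodes" so the NameNode prefers specific DataNodes
        // for the replicas. This overload may not exist (or be honored) in every
        // Hadoop 2.x release; check DistributedFileSystem in your version first.
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            InetSocketAddress[] favored = {
                new InetSocketAddress("datanode-host", 50010)           // placeholder DataNode address
            };
            FSDataOutputStream os2 = dfs.create(new Path("/user/hadoop-user/sample-index-2.bin"),
                    FsPermission.getFileDefault(), true, 4096, (short) 1,
                    128 * 1024 * 1024L, null, favored);
            os2.writeBytes("-----Sample text, hinted towards a chosen DataNode-----");
            os2.close();
        }
    }
}

Even when such a hint is accepted, placement stays best effort (a full or dead node is skipped), so the read side should keep asking getFileBlockLocations instead of assuming where the blocks ended up.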
Hadoop already does this work for you: when you run a distributed task on a Hadoop cluster, the data closest to each task is loaded for it. For example, if you have an Elasticsearch cluster and a Hadoop cluster sharing the same hardware, you only need to create a MapReduce job that uses the local Elasticsearch node, and that's it: no network dance for your data; each task loads its part of the dataset and pushes it to the local Elasticsearch instance.
Enjoy!

ggazkfy8 2#

There is a class called BlockPlacementPolicy which can (in theory) be extended to customize how HDFS chooses DataNodes. Although this approach is hacky, it may work for anyone who wants to do something similar in the future and stumbles upon this question.
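As a rough sketch of that wiring only: in practice one would extend BlockPlacementPolicyDefault (the stock policy) rather than the abstract class directly, and register the subclass on the NameNode through dfs.block.replicator.classname. The package and class names below are invented, no chooseTarget override is shown because its exact signature differs between Hadoop 2.x releases, and the whole thing is an untested sketch, not a working policy:

package com.example.hdfs;   // hypothetical package

import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;

/**
 * Skeleton of an index-aware placement policy (hypothetical).
 *
 * Override the chooseTarget(...) methods to prefer the DataNodes that already hold
 * the corresponding data blocks, and fall back to super.chooseTarget(...) so the
 * default rack-aware behaviour is kept for everything else. Copy the method
 * signatures from the BlockPlacementPolicyDefault source of the exact Hadoop
 * version you build against, since they changed between 2.x releases.
 *
 * To activate it, put this class on the NameNode classpath, set in hdfs-site.xml:
 *   dfs.block.replicator.classname = com.example.hdfs.IndexAwarePlacementPolicy
 * and restart the NameNode.
 */
public class IndexAwarePlacementPolicy extends BlockPlacementPolicyDefault {
    // chooseTarget(...) overrides go here.
}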
