Problem with pail consolidation on HDFS #49
Comments
I ran into the same error. Did you manage to fix it in the end?

To be honest, I didn't follow up on Pail for this project. I implemented the …
I'm reading "Big Data" with great interest and am trying to implement the batch layer with a graph data model as the master dataset.
I always get a NullPointerException when I call the consolidate method on a pail.
For example, I want the pail containing my master dataset to absorb a pail of newly arrived data, and then consolidate the master pail to avoid lots of small files.
Example code snippet:
// Write the new Thrift objects into the incoming pail
TypedRecordOutputStream dataOutputStream = incomingPail.openWrite();
for (Data thriftData : objects) {
    dataOutputStream.writeObject(thriftData);
}
dataOutputStream.close();

// Merge the incoming pail into the master pail, then merge its small files
masterPail.absorb(incomingPail);
masterPail.consolidate();   // <-- throws the NullPointerException below
I know that this is the naive approach to ingesting new data into the master dataset (without a snapshot or the like), but for the moment it is a sufficient solution for me.
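For reference, the snapshot-based variant described in the book would only differ by a few lines. The following is a rough sketch that assumes the snapshot/deleteSnapshot methods of the dfs-datastores Pail API; the temporary snapshot path is purely illustrative.

// Rough sketch of snapshot-based ingest (assumes Pail.snapshot / Pail.deleteSnapshot
// from dfs-datastores; "/tmp/newDataSnapshot" is an illustrative path).
Pail snapshotPail = incomingPail.snapshot("/tmp/newDataSnapshot");
masterPail.absorb(snapshotPail);
masterPail.consolidate();
// Delete only the records that were snapshotted, so data arriving while the
// batch run is in progress is not lost.
incomingPail.deleteSnapshot(snapshotPail);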
Thanks for your help.
Error:
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:293)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at com.backtype.hadoop.Consolidator.consolidate(Consolidator.java:106)
at com.backtype.hadoop.pail.Pail.consolidate(Pail.java:538)
at com.backtype.hadoop.pail.Pail.consolidate(Pail.java:509)
at hdfsconnector.PailHDFSConnector.appendMasterDataset(PailHDFSConnector.java:107)
at LambdaStarter.main(LambdaStarter.java:40)