Problem with pail consolidation on HDFS #49

Open · juliankeppel opened this issue Mar 31, 2015 · 2 comments
@juliankeppel

I'm reading "Big Data" with great interest and am trying to implement the batch layer with a graph data model as the master dataset.

I always get a NullPointerException when I call the consolidate method on a pail.

For example, I absorb a pail of newly incoming data into the pail containing my master dataset. Then I consolidate the master pail to avoid lots of small files.

Example code snippet (how the two pails are opened is sketched below):

TypedRecordOutputStream dataOutputStream = incomingPail.openWrite();
for (Data thriftData : objects) {
    dataOutputStream.writeObject(thriftData);  // write each thrift Data record to the incoming pail
}
dataOutputStream.close();

masterPail.absorb(incomingPail);  // move the new records into the master pail
masterPail.consolidate();         // merge small files; this call throws the NPE

I know that this is the "stupid" approach to ingesting new data into the master dataset (without a snapshot or the like), but for the moment it is a sufficient solution for me.
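For context, the two pails are opened roughly like this. The paths are placeholders, and DataPailStructure is the PailStructure class for thrift Data objects from the book's example code, so treat this as a sketch of my setup rather than the exact code:

import com.backtype.hadoop.pail.Pail;

// The master pail already exists on HDFS, so it is opened directly
// (placeholder path):
Pail<Data> masterPail = new Pail<Data>("/data/master");

// The incoming pail is created fresh for each batch of new data
// (placeholder path; DataPailStructure comes from the book's example code):
Pail<Data> incomingPail = Pail.create("/tmp/newdata", new DataPailStructure());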

Thanks for your help.

Error:

Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:293)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at com.backtype.hadoop.Consolidator.consolidate(Consolidator.java:106)
at com.backtype.hadoop.pail.Pail.consolidate(Pail.java:538)
at com.backtype.hadoop.pail.Pail.consolidate(Pail.java:509)
at hdfsconnector.PailHDFSConnector.appendMasterDataset(PailHDFSConnector.java:107)
at LambdaStarter.main(LambdaStarter.java:40)
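For what it's worth, my reading of the trace (an assumption, not a confirmed diagnosis): consolidate() submits a MapReduce job, and the stack shows the job client resolving its staging directory through RawLocalFileSystem and then shelling out (Shell.execCommand) to set permissions. A NullPointerException inside ProcessBuilder.start usually means the shell command path was null, which on Windows clients is the classic symptom of HADOOP_HOME/winutils.exe not being set up. If the job is meant to run on the cluster instead of the local filesystem, the client also needs the cluster configuration on its classpath, since Consolidator builds its own JobConf from the default Hadoop resources. A minimal sketch of the properties involved (host names and ports are placeholders):

import org.apache.hadoop.conf.Configuration;

// Sketch only: the standard client properties that keep job submission off
// the local filesystem. In practice they belong in core-site.xml and
// mapred-site.xml on the driver's classpath, because Pail.consolidate()
// builds its JobConf from the default resources.
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://namenode:8020");                 // placeholder namenode
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.address", "resourcemanager:8032"); // placeholder RM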

@Bruce-Du

I ran into the same error as you. Did you manage to fix it?

@juliankeppel (Author)

To be honest, I didn't follow up on Pail for this project. In the end, I implemented the data model with Hive tables and a Data Vault-like approach.
