Opened 12 years ago
Closed 12 years ago
#35902 closed defect (fixed)
hadoop is missing native libs
Reported by: jeff@…        Owned by: humem (humem)
Priority: Normal           Milestone:
Component: ports           Version: 2.1.2
Keywords:                  Cc:
Port: hadoop
Description
When building Hadoop from source, you can invoke ant -Dcompile.native=true to build the shared libraries and JNI bindings required to use Snappy and gzip file compression.
The current port, however, is packaged with Linux shared libraries.
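For reference, a rough sketch of that native build on a Hadoop 1.x source tree, assuming Snappy and its headers are already installed (the exact ant target can vary by release):

$ cd hadoop-1.0.3
$ ant -Dcompile.native=true compile-native

The resulting libraries should land under build/native/<platform>/lib.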
Change History (9)
comment:1 Changed 12 years ago by mf2k (Frank Schima)
Keywords: native hadoop removed
Owner: changed from macports-tickets@… to hum@…
comment:2 follow-up: 3 Changed 12 years ago by humem (humem)
Status: new → assigned
Thanks for your report. I changed the Portfile to fetch the source tarball, and the native libraries are built and installed into ${prefix}/lib. I added a patchfile and dependency descriptions. Committed in r97460. Could you please check the revised port?
comment:3 follow-up: 4 Changed 12 years ago by jeff@…
Replying to hum@…:
Thanks for your report. I changed the Portfile to fetch the source tarball, and the native libraries are built and installed into ${prefix}/lib. I added a patchfile and dependency descriptions. Committed in r97460. Could you please check the revised port?
I am not familiar with the process required to test the changes you committed. Is there a doc somewhere that explains how to pull the changes, where to put them etc?
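For reference, the workflow being asked about was roughly the following at the time, assuming a Subversion checkout of the MacPorts dports tree (the checkout path and URL are illustrative):

$ svn checkout https://svn.macports.org/repository/macports/trunk/dports ~/dports
$ cd ~/dports && portindex
# list file:///Users/<you>/dports above the rsync line in
# ${prefix}/etc/macports/sources.conf, then:
$ sudo port -v install hadoop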
comment:4 follow-up: 5 Changed 12 years ago by jeff@…
Replying to jeff@…:
Replying to hum@…:
Thanks for your report. I changed the Portfile to fetch the source tarball, and the native libraries are built and installed into ${prefix}/lib. I added a patchfile and dependency descriptions. Committed in r97460. Could you please check the revised port?
I am not familiar with the process required to test the changes you committed. Is there a doc somewhere that explains how to pull the changes, where to put them etc?
I figured out how to test it and it works great.
There is another native lib for using FUSE to mount HDFS; I just found a link on how to compile it. It would be great if you could get that working also. Here is a link to a doc describing how to build the HDFS shared lib: http://www.quora.com/How-can-I-get-proper-native-libraries-for-Hadoop-0-21-0-to-be-able-to-mount-HDFS-using-fuse-dfs
comment:5 follow-up: 6 Changed 12 years ago by humem (humem)
Replying to jeff@…:
There is another native lib for using FUSE to mount HDFS; I just found a link on how to compile it. It would be great if you could get that working also. Here is a link to a doc describing how to build the HDFS shared lib: http://www.quora.com/How-can-I-get-proper-native-libraries-for-Hadoop-0-21-0-to-be-able-to-mount-HDFS-using-fuse-dfs
Very nice information! I updated the port to build libhdfs and added a fusedfs variant to build contrib/fuse-dfs. Committed in r97554. It would be helpful if you could check the port with Fuse-DFS as follows:
$ sudo port selfupdate
$ sudo port install hadoop +fusedfs
...
fuse_dfs and fuse_dfs_wrapper.sh are located in ${prefix}/bin and ${prefix}/share/java/hadoop-${version}/bin, respectively.
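A rough usage sketch once the variant is installed; the NameNode host and port below are placeholders:

$ mkdir /Volumes/hdfs
$ fuse_dfs_wrapper.sh dfs://namenode.example.com:9000 /Volumes/hdfs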
comment:6 follow-up: 7 Changed 12 years ago by jeff@…
Replying to hum@…:
Very nice information! I updated the port to build libhdfs and added a fusedfs variant to build contrib/fuse-dfs. Committed in r97554. It would be helpful if you could check the port with Fuse-DFS as follows:
$ sudo port selfupdate
$ sudo port install hadoop +fusedfs
...
fuse_dfs and fuse_dfs_wrapper.sh are located in ${prefix}/bin and ${prefix}/share/java/hadoop-${version}/bin, respectively.
umount /Volumes/hc3; mkdir /Volumes/hc3; fuse_dfs_wrapper.sh dfs://cmc6-101.alpha.farecompare.com:8200 /Volumes/hc3/ -d
umount: /Volumes/hc3: not currently mounted
port=8200,server=cmc6-101.alpha.farecompare.com
fuse-dfs didn't recognize /Volumes/hc3/,-2
fuse-dfs ignoring option -d
FUSE library version: 2.8.7
nullpath_ok: 0
unique: 0, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.12
flags=0x00000000
max_readahead=0x20000000
   INIT: 7.12
   flags=0x00000010
   max_readahead=0x20000000
   max_write=0x02000000
unique: 0, success, outsize: 40
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 1, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 3, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 4, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 5, opcode: STATFS (17), nodeid: 1, insize: 40
statfs /
unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 56
getattr /
dyld: lazy symbol binding failed: Symbol not found: _JNI_GetCreatedJavaVMs
  Referenced from: /opt/local/lib/libhdfs.dylib
  Expected in: flat namespace
dyld: Symbol not found: _JNI_GetCreatedJavaVMs
  Referenced from: /opt/local/lib/libhdfs.dylib
  Expected in: flat namespace
/Users/jeffkreska/hadoop-dist/bin/fuse_dfs_wrapper.sh: line 29: 81221 Trace/BPT trap: 5 /opt/local/bin/fuse_dfs $@
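The missing _JNI_GetCreatedJavaVMs symbol suggests that libhdfs.dylib was built without being linked against the JVM, which on OS X of that era was provided by the JavaVM framework. Two illustrative commands for inspecting the linkage:

$ otool -L /opt/local/lib/libhdfs.dylib          # libraries the dylib links against
$ nm -u /opt/local/lib/libhdfs.dylib | grep JNI  # undefined JNI symbols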
comment:7 follow-up: 8 Changed 12 years ago by humem (humem)
Replying to jeff@…:
umount /Volumes/hc3; mkdir /Volumes/hc3; fuse_dfs_wrapper.sh dfs://cmc6-101.alpha.farecompare.com:8200 /Volumes/hc3/ -d
...
dyld: lazy symbol binding failed: Symbol not found: _JNI_GetCreatedJavaVMs
Referenced from: /opt/local/lib/libhdfs.dylib
Expected in: flat namespace
I fixed the port so that the libraries are built linked against Java and the other libraries they depend on. The port now supports a universal build. Committed in r97746.
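For anyone wanting to try the universal build mentioned above, the standard MacPorts variant syntax should apply:

$ sudo port install hadoop +fusedfs +universal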
comment:8 Changed 12 years ago by jeff@…
Replying to hum@…:
Replying to jeff@…:
umount /Volumes/hc3; mkdir /Volumes/hc3; fuse_dfs_wrapper.sh dfs://cmc6-101.alpha.farecompare.com:8200 /Volumes/hc3/ -d
...
dyld: lazy symbol binding failed: Symbol not found: _JNI_GetCreatedJavaVMs
Referenced from: /opt/local/lib/libhdfs.dylib
Expected in: flat namespace
I fixed the port so that the libraries are built linked against Java and the other libraries they depend on. The port now supports a universal build. Committed in r97746.
It links fine, but I am not able to see any files on the mount. It could be because my Hadoop cluster is running hadoop-0.20.2, not 1.0.3. I will know for sure once the port for hadoop-0.20.2-cdh3u5 is completed and I am comparing apples to apples.
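A quick sanity check when comparing versions is to run the following on both the client with the FUSE mount and a cluster node, and compare the output:

$ hadoop version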
comment:9 Changed 12 years ago by humem (humem)
Resolution: → fixed
Status: assigned → closed
In the future, please Cc the port maintainer(s).