
[Hadoop][AWS] How to fix "os::commit_memory(0x00000000f660c000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)" when running the MapReduce WordCount example

SDeveloper 2020. 3. 29. 12:43

 

The AWS server I set up for studying Hadoop is a free tier instance, so it only has 1 GB of memory.

Because of that, after installing Hadoop, running the MapReduce WordCount example produced the error below.

 

1. The error

 

 

[hadoop@ip-172-31-33-183 hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount input output
2020-03-27 16:07:02,698 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-03-27 16:07:03,994 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-03-27 16:07:04,211 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-03-27 16:07:04,211 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2020-03-27 16:07:04,689 INFO input.FileInputFormat: Total input files to process : 1
2020-03-27 16:07:04,752 INFO mapreduce.JobSubmitter: number of splits:1
2020-03-27 16:07:05,043 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local266812600_0001
2020-03-27 16:07:05,059 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-03-27 16:07:05,341 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2020-03-27 16:07:05,342 INFO mapreduce.Job: Running job: job_local266812600_0001
2020-03-27 16:07:05,358 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2020-03-27 16:07:05,378 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2020-03-27 16:07:05,378 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2020-03-27 16:07:05,379 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2020-03-27 16:07:05,471 INFO mapred.LocalJobRunner: Waiting for map tasks
2020-03-27 16:07:05,472 INFO mapred.LocalJobRunner: Starting task: attempt_local266812600_0001_m_000000_0
2020-03-27 16:07:05,496 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2020-03-27 16:07:05,496 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2020-03-27 16:07:05,527 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2020-03-27 16:07:05,530 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/LICENSE.txt:0+147145
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f660c000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 104861696 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hadoop/hadoop/hs_err_pid3970.log

 

 

2. Solution

The JVM could not commit roughly 100 MB of native memory (104861696 bytes) because the 1 GB instance had no free RAM left and no swap to fall back on, so I decided to add swap memory.

 

https://aws.amazon.com/ko/premiumsupport/knowledge-center/ec2-memory-swap-file/

sudo dd if=/dev/zero of=/swapfile bs=128M count=32   # create a 4 GB swap file (128 MB x 32)
sudo chmod 600 /swapfile                             # restrict access to the swap file
sudo mkswap /swapfile                                # set the file up as swap space
sudo swapon /swapfile                                # enable the swap file
sudo swapon -s                                       # confirm the swap file is active
sudo vi /etc/fstab

[Add the following line so the swap file is re-enabled after a reboot]

/swapfile swap swap defaults 0 0

 

Run free -h to confirm that the added swap space shows up (see the Swap row sketched below).

 

 

[Before adding swap]

 

[After adding swap]
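For reference, the key change in the free -h output after the steps above is the Swap row, which should look roughly like this (4 GB comes from the dd command: 128 MB x 32):

Swap:          4.0G          0B        4.0G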

 

After this, running WordCount again completes successfully.

 

 

* If you then hit a 'java.net.ConnectException: Call From ip-172-31-33-183.ap-northeast-2.compute.internal/172.31.33.183 to localhost:9000 failed on connection exception' error

 

[hadoop@ip-172-31-33-183 hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount input output
2020-03-27 16:39:39,705 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-03-27 16:39:41,118 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-03-27 16:39:41,281 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-03-27 16:39:41,281 INFO impl.MetricsSystemImpl: JobTracker metrics system started
java.net.ConnectException: Call From ip-172-31-33-183.ap-northeast-2.compute.internal/172.31.33.183 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)

 

 

The cause is that the Hadoop daemons were terminated abnormally (via exit) while setting up the swap memory earlier.

Check with jps whether all of the daemons are actually up, as shown below.
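On a healthy single-node setup, jps should list entries along these lines (the process IDs will differ); if NameNode is missing from the list, that is why the connection to localhost:9000 is refused:

2481 NameNode
2615 DataNode
2813 SecondaryNameNode
3066 ResourceManager
3375 NodeManager
3720 Jps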

 

 

start-all.sh

 

After confirming via start-all.sh that the NameNode has started, run WordCount again (sketched below) and it completes normally.
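As a rough sketch of the retry, using the same input/output paths as before (the hdfs dfs -rm -r step is only needed if the earlier failed attempt already left an output directory behind):

hdfs dfs -rm -r output                               # clean up a leftover output directory from the failed run, if any
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount input output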

 

 

 

 

 
