Integrating LVM with Hadoop



The following steps are performed on the data node.

1. First, attach physical hard disks to the data node. Here, I have added two hard disks.
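As a sketch, assuming the two new disks appear as /dev/sdb and /dev/sdc (device names depend on your setup), you can verify that the system sees them with:

```shell
# List the newly attached disks; they should show up with no
# partitions or mount points (/dev/sdb and /dev/sdc are assumptions)
lsblk /dev/sdb /dev/sdc

# Alternatively, list all disks and partitions
fdisk -l
```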

2. Convert the hard disks to physical volumes, since volume groups can only be created from physical volumes.
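A minimal sketch of this step, again assuming the disks are /dev/sdb and /dev/sdc:

```shell
# Initialize both disks as LVM physical volumes (run as root)
pvcreate /dev/sdb /dev/sdc
```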


3. We can view the details of the physical volumes using the "pvdisplay" command.


4. Create a volume group with the above physical volumes.
# vgcreate vg_name /disk_name1 /disk_name2 ... /disk_namen

5. We can use "vgdisplay vg_name" to get information about the volume group.
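Putting steps 4 and 5 together with illustrative names (the group name "hadoop_vg" and the disk names are assumptions):

```shell
# Create a volume group spanning both physical volumes
vgcreate hadoop_vg /dev/sdb /dev/sdc

# Inspect the volume group's size and free extents
vgdisplay hadoop_vg
```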


6. Create a logical volume of the size you want the data node to contribute to the cluster. The command is "lvcreate --size <value> --name <value> vg_name".
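For example, carving a 20 GiB logical volume out of the group (the size and the names "dn_lv" and "hadoop_vg" are illustrative):

```shell
# Create a 20 GiB logical volume named dn_lv in hadoop_vg
lvcreate --size 20G --name dn_lv hadoop_vg

# Verify the new logical volume
lvdisplay /dev/hadoop_vg/dn_lv
```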

7. Now format the logical volume with a filesystem.
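A sketch using ext4 (any filesystem supported by your distribution works; the device path assumes the names from the earlier steps):

```shell
# Format the logical volume with ext4
mkfs.ext4 /dev/hadoop_vg/dn_lv
```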

8. Create the directory you want to contribute to the namenode and mount the logical volume on it.
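Assuming /dn1 as the datanode's storage directory (this path is an example; it should match the dfs.datanode.data.dir property in hdfs-site.xml):

```shell
# Create the storage directory and mount the logical volume on it
mkdir -p /dn1
mount /dev/hadoop_vg/dn_lv /dn1

# Confirm the mount and its size
df -h /dn1
```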

9. Use the "df -h" command to check that the logical volume is mounted on the desired directory.



10. Now, start the datanode service so that it connects to the namenode, and check the storage contributed to the cluster.
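One way this looks on the command line, assuming a classic Hadoop 1.x-style installation (newer releases use "hdfs --daemon start datanode" and "hdfs dfsadmin -report" instead):

```shell
# Start the datanode daemon on this node
hadoop-daemon.sh start datanode

# From any cluster node, report the capacity each datanode contributes
hadoop dfsadmin -report
```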

11. We can extend the storage contributed by the datanode by extending the logical volume.
Use the command "lvextend --size +<value> /dev/vg_name/lv_name"
and then grow the filesystem over the added space with
"resize2fs /dev/vg_name/lv_name" (resize2fs resizes the ext filesystem online; it does not reformat the existing data).
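A concrete sketch of the online resize, using the illustrative names from earlier (a 5 GiB increase is just an example; note the leading "+" for a relative size):

```shell
# Grow the logical volume by 5 GiB
lvextend --size +5G /dev/hadoop_vg/dn_lv

# Grow the ext4 filesystem to fill the new space, while mounted
resize2fs /dev/hadoop_vg/dn_lv

# The mounted directory now shows the larger capacity
df -h /dn1
```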

As you can see, I have increased the logical volume's storage without unmounting it or stopping any services.

ARTH- The School of Technologies
ARTH2020.18.13

Sai Kishen Kothapalli
Tamanna Verma
Vaibhav Maan
Rahul Kumar
