Category Archives: File System

Creating and mounting a partition larger than 2 TB in Linux

Why can I only use 2 TB (terabytes) of my 2+ TB drive in Linux?

The answer is really simple. I guess you formatted the drive using “fdisk”, which uses an MS-DOS partition table.
The MS-DOS partition table (MBR) uses 32-bit sector addresses, so with the usual 512-byte sectors it can’t address anything beyond 2 TiB.

How to create a partition above 2 TB in Linux

To fully use your 2+ TB hard drive, you have to use a partition table that supports it. We now know that the MS-DOS partition table (MBR) does not, so what should you use instead? GPT.

GPT supports up to 9.4 ZB (zettabytes). That’s about 9.4 billion TB! It’s pretty safe to say that you will not hit this limit in the near future.

Requirements

First, make sure you have a backup of all the data on the hard drive somewhere else, because what you are about to do WILL wipe the drive clean and delete everything on it!

You need a program called “parted”. It’s the easiest way to create the partition table and a partition of the desired size. It may already be installed on your Linux system; if not, install it with your favorite package manager.
On Debian that would be:

apt-get install parted

On CentOS that would be:

yum install parted

Identify your hard drive

You have to find out your hard drive’s device name.
In Linux, every hard drive gets a name. This can be “sda”, “hda”, or something similar, depending on what kind of interface it is connected to.

Find your drive names using the following tools:
lsblk
fdisk

Run the command:

lsblk

And you should see something like the following (the screenshot of the lsblk output is not reproduced here):

You see how there is an “sda” and an “sdb”?

If you look at the nice tree view lsblk produces, you will see that below “sda” there are “sda1” and “sda2”. These are partitions on disk “sda”.
“sdb” has a partition too, because I had already formatted my hard drive to 4 TB. Before I formatted it with a GPT partition table, “sdb1” showed only 1.8 TB in the “SIZE” column, which is the same problem you have.

Now we need to find the maximum size of your hard drive. You can do this with the program “fdisk”, using the following syntax:

fdisk -l /dev/<your harddrive name here>

In my example it would be:

fdisk -l /dev/sdb

You will get output that looks something like this; it’s the top line, with the number of GB, that we are interested in:

Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table

The “4000.8 GB” is the size of your hard drive, and this will be the maximum size of any partition. Note this number; you will need it later.
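As a sanity check, that 4000.8 GB figure is just the byte count from the same line divided by 10^9 (drive vendors count in decimal gigabytes). Assuming awk is available:

```shell
# fdisk reports the size in bytes; vendors' "GB" means 10^9 bytes.
awk 'BEGIN { printf "%.1f GB\n", 4000787030016 / 1e9 }'   # prints 4000.8 GB
```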

Creating the partition table

Run the program you installed earlier in this guide called “parted” by running the following command:

parted /dev/<your harddrive name here>

In my case, the example would be:

parted /dev/sdb

The output will look like this:

GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

Create the GPT partition table

Type in the following and hit enter to create a GPT partition table:

mklabel gpt

After you type “yes” and press enter at the warning (stating that this will delete all files on the hard drive), the output will look like this:

Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)

Set the unit to GB

Now set the unit to GB to make it easy to specify the partition size later on. Type the following and hit enter. This will not give any output:

unit GB

Create the partition

To create the partition, you simply type the following syntax and press enter:

mkpart primary <from GB> <to GB>

In my example with the 4TB (4000GB) harddrive, this would be:

mkpart primary 0.0GB 4000.8GB

The above command will not give you any output.

View the partition table

To view the partition table and partition you just created, type the following and press enter:

print

Sample output:

Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      0.00GB  4001GB  4001GB  ext4         primary

You see the partition and the size you just created.

Now type the following and hit enter to save and quit parted:

quit

Create the filesystem

There are many filesystems to choose from. I prefer ext4 and that is what I will use in this guide. For general purpose, ext4 should be just fine. If you need to use the harddrive for something special, then please read up on what filesystem would be best for you and use that instead.

If you run the “lsblk” command again, as earlier in this guide, you should now see a partition numbered “1”, like in my example: “sdb1”.

Creating an ext4 filesystem takes some time; my 4 TB disk took ~40 minutes to format.

Create the ext4 filesystem using the following command:

mkfs.ext4 /dev/<your drivename and partition number here>

In my example it would be:

mkfs.ext4 /dev/sdb1

Wait for it to finish creating all the inodes, the finished output should look something like this:

mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566272 blocks
36628313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
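A side note on that output: the 36628313 blocks “reserved for the super user” are simply the default 5% reserve, which on a drive this size is a lot of space. Assuming awk is available, you can check the arithmetic and see what the reserve costs:

```shell
# The default reserve is 5% of all blocks (4 KiB blocks here):
awk 'BEGIN { printf "%d blocks\n", 732566272 * 5 / 100 }'          # 36628313 blocks
# ...which at 4096 bytes per block is roughly 150 GB:
awk 'BEGIN { printf "%.0f GB\n", 732566272 * 0.05 * 4096 / 1e9 }'  # 150 GB
```

If 5% is more than you want to give up on a pure data drive, the reserve can be lowered with tune2fs -m <percent>, or at creation time with mkfs.ext4 -m <percent>.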

Add the partition to fstab and mounting it

Adding the partition to fstab will make it remount automatically when the computer/server reboots.

But first create the folder where you want the partition to be mounted. In my example I will use /store:

mkdir /store

Open the file /etc/fstab in your favorite text editor and add the following line at the bottom:

/dev/<your drivename and partition number>               /store                  <your filesystem>    defaults        0 0

In my example it would be:

/dev/sdb1              /store                  ext4    defaults        0 0
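If you would rather generate the line than align the columns by hand, a printf sketch does the job (the device, mount point, and filesystem here are the ones from my example; substitute your own). The alignment is purely cosmetic: fstab only cares that the fields are separated by whitespace.

```shell
# Generate a formatted fstab line from three variables.
dev=/dev/sdb1     # your partition
mnt=/store        # your mount point
fs=ext4           # your filesystem
printf '%-22s %-22s %-7s %-15s %d %d\n' "$dev" "$mnt" "$fs" defaults 0 0
```

Append the resulting line to /etc/fstab. Using UUID=… (as shown by the blkid command) instead of /dev/sdb1 is more robust, since device names can change between boots.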

Mount the partition

Mounting the partition is now as easy as typing the following command:

mount /store

Because you added the mount point to fstab, this will automatically happen upon reboot and only has to be done this one time.

That’s it! Your drive is now ready to be filled all the way up with data at /store/

A Beginner’s Guide To LVM

1 Preliminary Note

This tutorial was inspired by two articles I read:
http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
http://www.debian-administration.org/articles/410
These are great articles, but hard to understand if you’ve never worked with LVM before. That’s why I have created this Debian Etch VMware image that you can download and run in VMware Server or VMware Player (see http://www.howtoforge.com/import_vmware_images to learn how to do that).
I installed all the tools we need during the course of this guide on the Debian Etch system by running:

apt-get install lvm2 dmsetup mdadm reiserfsprogs xfsprogs

2 LVM Layout

Basically LVM looks like this:

You have one or more physical volumes (/dev/sdb1 – /dev/sde1 in our example), and on these physical volumes you create one or more volume groups (e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical volumes).
It is a good practice to not allocate the full space to logical volumes, but leave some space unused. That way you can enlarge one or more logical volumes later on if you feel the need for it.
In this example we will create a volume group called fileserver, and we will also create the logical volumes /dev/fileserver/share, /dev/fileserver/backup, and /dev/fileserver/media (which will use only half of the space offered by our physical volumes for now – that way we can switch to RAID1 later on (also described in this tutorial)).

3 Our First LVM Setup

Let’s find out about our hard disks:
fdisk -l

The output looks like this:
server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 18 144553+ 83 Linux
/dev/sda2 19 2450 19535040 83 Linux
/dev/sda4 2451 2610 1285200 82 Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn’t contain a valid partition table
There are no partitions yet on /dev/sdb – /dev/sdf. We will create the partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 and leave /dev/sdf untouched for now. We act as if our hard disks had only 25GB of space instead of 80GB for now, therefore we assign 25GB to /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1:
fdisk /dev/sdb

server1:~# fdisk /dev/sdb

Command (m for help): <– n
Command action
e extended
p primary partition (1-4)
<– p
Partition number (1-4): <– 1
First cylinder (1-10443, default 1): <– <ENTER>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10443, default 10443): <– +25000M

Command (m for help): <– t
Selected partition 1
Hex code (type L to list codes): <– L
Hex code (type L to list codes): <– 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): <– w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

fdisk -l

again. The output should look like this:
server1:~# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 18 144553+ 83 Linux
/dev/sda2 19 2450 19535040 83 Linux
/dev/sda4 2451 2610 1285200 82 Linux swap / Solaris

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 3040 24418768+ 8e Linux LVM

 

Now we prepare our new partitions for LVM:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume “/dev/sdb1” successfully created
Physical volume “/dev/sdc1” successfully created
Physical volume “/dev/sdd1” successfully created
Physical volume “/dev/sde1” successfully created

Let’s revert this last action for training purposes:
pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Labels on physical volume “/dev/sdb1” successfully wiped
Labels on physical volume “/dev/sdc1” successfully wiped
Labels on physical volume “/dev/sdd1” successfully wiped
Labels on physical volume “/dev/sde1” successfully wiped

Then run
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical volume “/dev/sdb1” successfully created
Physical volume “/dev/sdc1” successfully created
Physical volume “/dev/sdd1” successfully created
Physical volume “/dev/sde1” successfully created

Now run
pvdisplay

to learn about the current state of your physical volumes:
server1:~# pvdisplay
— NEW Physical volume —
PV Name /dev/sdb1
VG Name
PV Size 23.29 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID G8lu2L-Hij1-NVde-sOKc-OoVI-fadg-Jd1vyU

(pvdisplay prints a similar stanza for each of /dev/sdc1, /dev/sdd1, and /dev/sde1; only the first is shown here.)
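pvdisplay reports 23.29 GB for a partition that fdisk listed as 24418768 blocks. That is consistent: fdisk’s Blocks column counts 1 KiB blocks, and LVM’s “GB” here are binary gigabytes (GiB, 2^30 bytes). Assuming awk is available:

```shell
# 24418768 KiB-blocks / 1024 / 1024 = size in GiB (what pvdisplay calls GB)
awk 'BEGIN { printf "%.2f GB\n", 24418768 / 1024 / 1024 }'   # prints 23.29 GB
```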

Now let’s create our volume group fileserver and add /dev/sdb1 – /dev/sde1 to it:
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume group “fileserver” successfully created
Let’s learn about our volume groups:
vgdisplay

server1:~# vgdisplay
— Volume group —
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
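The VG Size line is simply Total PE multiplied by PE Size. Assuming awk is available, you can verify the 93.14 GB figure:

```shell
# 23844 physical extents * 4 MiB each, converted to GiB:
awk 'BEGIN { printf "%.2f GB\n", 23844 * 4 / 1024 }'   # prints 93.14 GB
```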
Another command to learn about our volume groups:
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “fileserver” using metadata type lvm2
For training purposes, let’s rename our volume group fileserver to data:
vgrename fileserver data

server1:~# vgrename fileserver data
Volume group “fileserver” successfully renamed to “data”
Let’s run vgdisplay and vgscan again to see if the volume group has been renamed:
vgdisplay

server1:~# vgdisplay
— Volume group —
VG Name data
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “data” using metadata type lvm2
Now let’s delete our volume group data:
vgremove data

server1:~# vgremove data
Volume group “data” successfully removed
vgdisplay

No output this time:
server1:~# vgdisplay
vgscan

server1:~# vgscan
Reading all physical volumes. This may take a while…
Let’s create our volume group fileserver again:
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

server1:~# vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume group “fileserver” successfully created
Next we create our logical volumes share (40GB), backup (5GB), and media (1GB) in the volume group fileserver. Together they use a little less than 50% of the available space (that way we can make use of RAID1 later on):
lvcreate --name share --size 40G fileserver

server1:~# lvcreate --name share --size 40G fileserver
Logical volume “share” created
lvcreate --name backup --size 5G fileserver

server1:~# lvcreate --name backup --size 5G fileserver
Logical volume “backup” created
lvcreate --name media --size 1G fileserver

server1:~# lvcreate --name media --size 1G fileserver
Logical volume “media” created
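As an aside, the Current LE figures lvdisplay reports come straight from the PE size: each logical extent is one 4 MiB physical extent, so an LV’s size in GiB times 1024 divided by 4 gives its extent count. Assuming awk is available:

```shell
# Extents per LV = size_in_GiB * 1024 MiB / 4 MiB per extent
for size in 40 5 1; do
  awk -v g="$size" 'BEGIN { printf "%d GB -> %d extents\n", g, g * 1024 / 4 }'
done
```

The results (10240, 1280, and 256) match the Current LE values shown by lvdisplay.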
Let’s get an overview of our logical volumes:
lvdisplay

server1:~# lvdisplay
— Logical volume —
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

— Logical volume —
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID zZeuKg-Dazh-aZMC-Aa99-KUSt-J6ET-KRe0cD
LV Write Access read/write
LV Status available
# open 0
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

— Logical volume —
LV Name /dev/fileserver/media
VG Name fileserver
LV UUID usfvrv-BC92-3pFH-2NW0-2N3e-6ERQ-4Sj7YS
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
lvscan

server1:~# lvscan
ACTIVE ‘/dev/fileserver/share’ [40.00 GB] inherit
ACTIVE ‘/dev/fileserver/backup’ [5.00 GB] inherit
ACTIVE ‘/dev/fileserver/media’ [1.00 GB] inherit
For training purposes, we rename our logical volume media to films:
lvrename fileserver media films
server1:~# lvrename fileserver media films
Renamed “media” to “films” in volume group “fileserver”
lvdisplay

server1:~# lvdisplay
— Logical volume —
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

— Logical volume —
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID zZeuKg-Dazh-aZMC-Aa99-KUSt-J6ET-KRe0cD
LV Write Access read/write
LV Status available
# open 0
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

— Logical volume —
LV Name /dev/fileserver/films
VG Name fileserver
LV UUID usfvrv-BC92-3pFH-2NW0-2N3e-6ERQ-4Sj7YS
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
lvscan

server1:~# lvscan
ACTIVE ‘/dev/fileserver/share’ [40.00 GB] inherit
ACTIVE ‘/dev/fileserver/backup’ [5.00 GB] inherit
ACTIVE ‘/dev/fileserver/films’ [1.00 GB] inherit
Next let’s delete the logical volume films:
lvremove /dev/fileserver/films

server1:~# lvremove /dev/fileserver/films
Do you really want to remove active logical volume “films”? [y/n]: <– y
Logical volume “films” successfully removed
We create the logical volume media again:
lvcreate --name media --size 1G fileserver

server1:~# lvcreate --name media --size 1G fileserver
Logical volume “media” created
Now let’s enlarge media from 1GB to 1.5GB:
lvextend -L1.5G /dev/fileserver/media

server1:~# lvextend -L1.5G /dev/fileserver/media
Extending logical volume media to 1.50 GB
Logical volume media successfully resized
Let’s shrink it to 1GB again:
lvreduce -L1G /dev/fileserver/media

server1:~# lvreduce -L1G /dev/fileserver/media
WARNING: Reducing active logical volume to 1.00 GB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce media? [y/n]: <– y
Reducing logical volume media to 1.00 GB
Logical volume media successfully resized
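In this walkthrough the volume does not contain a filesystem yet, so shrinking it is harmless. On a volume that holds data, lvreduce alone can destroy the filesystem (as the warning says): you must shrink the filesystem first, then the LV, and when growing, do it in the opposite order. Here is a sketch of the ordering, assuming an ext3 filesystem and the standard e2fsprogs tools; the hypothetical helper below only prints the plan, it does not touch any device:

```shell
# Hypothetical helper: PRINT the safe order of operations for shrinking an
# ext3-formatted logical volume. Filesystem first, LV second.
shrink_plan() {
  lv="$1"; new_size="$2"
  echo "umount $lv"                   # 1. unmount
  echo "e2fsck -f $lv"                # 2. check the filesystem
  echo "resize2fs $lv $new_size"      # 3. shrink the filesystem
  echo "lvreduce -L$new_size $lv"     # 4. only now shrink the LV
}
shrink_plan /dev/fileserver/media 1G
```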

Until now we have three logical volumes, but we don’t have any filesystems in them, and without a filesystem we can’t save anything in them. Therefore we create an ext3 filesystem in share, an xfs filesystem in backup, and a reiserfs filesystem in media:
mkfs.ext3 /dev/fileserver/share

server1:~# mkfs.ext3 /dev/fileserver/share
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
5242880 inodes, 10485760 blocks
524288 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
320 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
mkfs.xfs /dev/fileserver/backup

server1:~# mkfs.xfs /dev/fileserver/backup
meta-data=/dev/fileserver/backup isize=256 agcount=8, agsize=163840 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal log bsize=4096 blocks=2560, version=1
= sectsz=512 sunit=0 blks
realtime =none extsz=65536 blocks=0, rtextents=0
mkfs.reiserfs /dev/fileserver/media

server1:~# mkfs.reiserfs /dev/fileserver/media
mkfs.reiserfs 3.6.19 (2003 http://www.namesys.com)

A pair of credits:
Alexander Lyamin keeps our hardware running, and was very generous to our
project in many little ways.

Chris Mason wrote the journaling code for V3, which was enormously more useful
to users than just waiting until we could create a wandering log filesystem as
Hans would have unwisely done without him.
Jeff Mahoney optimized the bitmap scanning code for V3, and performed the big
endian cleanups.
Guessing about desired format.. Kernel 2.6.17-2-486 is running.
Format 3.6 with standard journal
Count of blocks on the device: 262144
Number of blocks consumed by mkreiserfs formatting process: 8219
Blocksize: 4096
Hash function used to sort names: “r5”
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: 2bebf750-6e05-47b2-99b6-916fa7ea5398
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
ALL DATA WILL BE LOST ON ‘/dev/fileserver/media’!
Continue (y/n):y
Initializing journal – 0%….20%….40%….60%….80%….100%
Syncing..ok

Tell your friends to use a kernel based on 2.4.18 or later, and especially not a
kernel based on 2.4.9, when you use reiserFS. Have fun.

ReiserFS is successfully created on /dev/fileserver/media.
Now we are ready to mount our logical volumes. I want to mount share in /var/share, backup in /var/backup, and media in /var/media, therefore we must create these directories first:
mkdir /var/media /var/backup /var/share

Now we can mount our logical volumes:
mount /dev/fileserver/share /var/share
mount /dev/fileserver/backup /var/backup
mount /dev/fileserver/media /var/media

Now run
df -h

You should see your logical volumes in the output:
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media

Congratulations, you’ve just set up your first LVM system! You can now write to and read from /var/share, /var/backup, and /var/media as usual.
We have mounted our logical volumes manually, but of course we’d like to have them mounted automatically when the system boots. Therefore we modify /etc/fstab:
mv /etc/fstab /etc/fstab_orig
cat /dev/null > /etc/fstab

vi /etc/fstab

Put the following into it:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/sda2 / ext3 defaults,errors=remount-ro 0 1
/dev/sda1 /boot ext3 defaults 0 2
/dev/hdc /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
/dev/fileserver/share /var/share ext3 rw,noatime 0 0
/dev/fileserver/backup /var/backup xfs rw,noatime 0 0
/dev/fileserver/media /var/media reiserfs rw,noatime 0 0
If you compare it to our backup of the original file, /etc/fstab_orig, you will notice that we added the lines:
/dev/fileserver/share /var/share ext3 rw,noatime 0 0
/dev/fileserver/backup /var/backup xfs rw,noatime 0 0
/dev/fileserver/media /var/media reiserfs rw,noatime 0 0
Now we reboot the system:
shutdown -r now

After the system has come up again, run
df -h

again. It should still show our logical volumes in the output:
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media

 

Editing fstab to automount partitions at startup

Auto-mounting partitions is very easy in Linux Mint with the Disks utility, which has a nice GUI explaining everything.

But now I am going to show you a straightforward way of auto-mounting partitions by editing the /etc/fstab file.

This tutorial is not solely about auto-mounting, but about how to edit fstab efficiently and gain some knowledge of it along the way.

Steps:

1. sudo gedit /etc/fstab

2. Now the fstab file is open in gedit. You need to add an entry for the partition to auto-mount it at startup.

The format of a new entry looks like this:

file_system   mount_point   type  options     dump  pass

You will see this line in the file, and you need to add your new entry under it.

A brief explanation of the above fields:

1. file_system = your device ID.

use this:

/dev/sdax (you should check it with sudo fdisk -l)

It may be /dev/sdbx or /dev/sdcx if you have more than one disk connected.

2. mount_point = where you want to mount your partition.

use this:

/media/user/label

Here user is your user name, and label is “software”, “movies”, or whatever label your partition has.

3. type = vfat (for FAT32), ntfs, ntfs-3g, ext2, ext4, or whatever your partition type is.

4. options = mount options for the partition (explained later).

5. dump = enables or disables backing up of the device/partition by the dump utility. Usually set to 0, which disables it.

6. pass = controls the order in which fsck checks the device/partition for errors at boot time. The root device should be 1; other partitions should be 2, or 0 to disable checking.
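Putting the six fields together, you can see how a finished entry maps onto them; the entry below is a made-up example, and awk splits it on whitespace just like mount does:

```shell
# A sample fstab entry (hypothetical device and mount point):
entry='/dev/sdb1  /media/user/movies  ext4  defaults  0  0'

# Split it into the six named fields:
echo "$entry" | awk '{ print "file_system:", $1
                       print "mount_point:", $2
                       print "type:       ", $3
                       print "options:    ", $4
                       print "dump:       ", $5
                       print "pass:       ", $6 }'
```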

So for the auto-mounting case, the above format reduces to:

/dev/sdax /media/user/label  type  options           0  0

(you can check the type with sudo fdisk -l)

The options field:

  • sync/async – All I/O to the file system should be done synchronously/asynchronously.
  • auto/noauto – The filesystem will be mounted automatically at startup/The filesystem will NOT be automatically mounted at startup.
  • dev/nodev – Interpret/Do not interpret character or block special devices on the file system.
  • exec / noexec – Permit/Prevent the execution of binaries from the filesystem.
  • suid/nosuid – Permit/Block the operation of suid, and sgid bits.
  • ro/rw – Mount read-only/Mount read-write.
  • user/nouser – Permit any user to mount the filesystem (this automatically implies noexec, nosuid, nodev unless overridden) / Only permit root to mount the filesystem. This is also a default setting.
  • defaults – Use default settings. Equivalent to rw, suid, dev, exec, auto, nouser, async.
  • _netdev – the filesystem resides on a network device; mount it after the network has been brought up. Used with network filesystems such as nfs.

Now the final format reduces to (for auto-mounting):

/dev/sdax /media/user/label  type     defaults       0  0  

for ntfs

/dev/sdax /media/user/label   ntfs  defaults       0  0  

for ext4

/dev/sdax /media/user/label   ext4  defaults       0  0  

etc…..

You can change defaults to your own configuration, like:

/dev/sdax /media/user/label   ext4  rw,suid,dev,noexec,auto,user,async      0  0

etc…

You need to add an entry for each partition you want to auto-mount.