VS Secure Quota HowTo
How to use Secure Quota Support for VS
Quota on a separate partition (LVM, vroot, Conv. Quota)
This is an example of setting up quota support for a VServer-based
system. It tries to explain how to set up user/group quota on a separate
partition. If you want to use context quota support on a shared partition,
have a look at the
Context Quota HowTo.
The following walkthrough was done on a Mandrake 8.2 Linux system,
running a custom-built 2.4.20rc1 kernel, patched with the CTX-14 patches
from the 0.21 release of VServer and the vroot01 kernel patch from the
vquota-tools-0.1 package. Installed packages: the 0.21 vserver base/admin
packages, quota-tools 3.07, and vrsetup from the vquota-tools 0.1 package.
In this example, /dev/vg is the LVM volume group providing the
separate partition /dev/vg/part1 for the virtual server TE01 that is to
be moved, and /dev/vroot/0 is the virtual root device used.
The last section provides
links to all required patches and tools.
-
You will require at least the following to use/enable quota
support on your system (a quick sanity check is sketched after this list):
- quota support for the filesystem you want to use.
- a patched kernel (quota, vserver, vroot, lvm?)
- the quota-tools compiled for this kernel.
- the vquota-tools compiled for this kernel.
- a free partition, sized for one virtual server.
- a good idea of what quota is and how to use the tools.
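A quick sanity check for some of these points could look like the
following; the kernel source path and the volume names are just the
ones assumed in this example:
# grep CONFIG_QUOTA /usr/src/linux/.config
# quotacheck -V
# lvdisplay /dev/vg/part1
The first command should show quota support enabled in the kernel
configuration, the second confirms the quota-tools are installed, and
the third verifies that the spare LVM volume exists.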
-
You have to patch the kernel to support the CAP_QUOTACTL
capability and to provide the vroot (fake) device. Furthermore, you
will need the CAP_QUOTACTL-aware vserver tools, and the
vrsetup tool to configure the vroot device.
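Roughly, the kernel preparation could look like the following; the
patch file names are only placeholders for whatever the 0.21 VServer
release and the vquota-tools-0.1 package actually ship:
# cd /usr/src/linux
# patch -p1 < /path/to/ctx-14.diff
# patch -p1 < /path/to/vroot01.diff
# make menuconfig
# make dep bzImage modules modules_install
In menuconfig, enable quota support for your filesystem and the vroot
device driver, then install and boot the new kernel before continuing.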
-
This step assumes that you want to move your existing virtual
server TE01 to a separate partition /dev/vg/part1, and enable
quota support for that partition. If you do not have an
existing server, or have already moved/created one on a separate
partition, adjust and/or leave out the appropriate
steps.
- stop the virtual server
# vserver TE01 stop
- create filesystem and mountpoint
# mke2fs -j /dev/vg/part1
# mkdir /vservers/LV01
# mount /dev/vg/part1 /vservers/LV01
- copy the virtual server and the configuration
# cp -a /vservers/TE01/. /vservers/LV01/
# cp -a /etc/vservers/TE01.conf /etc/vservers/LV01.conf
- create/modify the start/stop scripts
----------------- /etc/vservers/LV01.sh -----------------
#!/bin/sh
case "$1" in
    pre-start)
        # check and mount the guest partition
        e2fsck -p /dev/vg/part1
        mount /dev/vg/part1 /vservers/LV01
        # bind the vroot proxy device to the partition and
        # expose it inside the guest as /dev/hdv1
        rm -f /vservers/LV01/dev/hdv1
        vrsetup /dev/vroot/0 /dev/vg/part1
        cp -fa /dev/vroot/0 /vservers/LV01/dev/hdv1
        ;;
    post-start)
        ;;
    pre-stop)
        ;;
    post-stop)
        # unmount the partition and release the vroot device
        mount -o remount,ro /vservers/LV01
        umount /vservers/LV01
        vrsetup -d /dev/vroot/0
        ;;
    *)
        echo $0 pre-start
        echo $0 pre-stop
        echo $0 post-start
        echo $0 post-stop
        ;;
esac
----------------- /etc/vservers/LV01.sh -----------------
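Make the hook script executable; you can also run the hooks by hand
once to verify the mount and the vroot setup before the first real
start (a one-off manual test, not part of the normal start sequence):
# chmod +x /etc/vservers/LV01.sh
# /etc/vservers/LV01.sh pre-start
# ls -l /vservers/LV01/dev/hdv1
# /etc/vservers/LV01.sh post-stop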
- change the mtab file inside the virtual server to include the quota mount options
/dev/hdv1 / ext3 rw,usrquota,grpquota 0 0
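Assuming the guest keeps its static mtab in the usual place, this can
be done from the host with:
# echo "/dev/hdv1 / ext3 rw,usrquota,grpquota 0 0" > /vservers/LV01/etc/mtab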
- if not already done, install the quota-tools for the virtual server
# vrpm LV01 -- -i quota-3.07-1.i586.rpm
- start the server and change into it
# vserver LV01 start
# vserver LV01 enter
- run the quotacheck tool
# quotacheck -maug
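For reference, the flags used here are -m (do not try to remount the
filesystems read-only), -a (all mounted filesystems listed in the
guest's mtab) and -u/-g (user and group quotas).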
- take a look at the quota report
# repquota -aug
*** Report for user quotas on device /dev/hdv1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  249712       0       0          12146     0     0
rpm       --   13124       0       0             71     0     0
apache    --     980       0       0            235     0     0
rpcuser   --       4       0       0              1     0     0

*** Report for group quotas on device /dev/hdv1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
Group           used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  248964       0       0          12102     0     0
daemon    --       4       0       0              1     0     0
tty       --      16       0       0              2     0     0
...
- use edquota to set quotas and quota/repquota to report the quota status
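For example, from inside the virtual server (the user name and limits
are made up for illustration, and setquota is shown as a non-interactive
alternative to edquota):
# setquota -u apache 100000 120000 0 0 /
# quota -u apache
# repquota -u /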
- live happily ever after using quota wisely ...
-
This approach has several advantages and a few drawbacks.
Possible advantages:
- a fixed maximum hard limit for each virtual server.
- no security/access issues with files or partitions.
- all quota settings can be done from within the server.
Possible disadvantages:
- no unification across servers is possible.
- changing the maximum size requires a filesystem/partition resize (see the sketch after this list).
- filesystem caching is done separately for each partition.
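With LVM underneath, such a resize is at least straightforward to
script; a rough offline sketch (the size is only an example, and it
assumes resize2fs can grow the ext3 volume) could be:
# vserver LV01 stop
# lvextend -L +512M /dev/vg/part1
# e2fsck -f /dev/vg/part1
# resize2fs /dev/vg/part1
# vserver LV01 start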
-