Re: [vserver] vserver quota support


From: Herbert Poetzl
Date: Thu Jul 11 2002 - 10:04:16 EDT

On Thu, Jul 11, 2002 at 05:44:08AM -0500, Jacques Gelinas wrote:
> On Wed, 10 Jul 2002 20:45:57 -0500, Herbert Poetzl wrote
> > > I don't know (like with the binding-to-every-IP option) whether it is
> > > really practical in the tradeoff for time spent implementing it.
> > >
> > > Somebody will probably say something if this is /not/ the case.
> >
> > I read the documentation, which suggests
> > combining the context and the 16-bit user id
> > into a 32-bit uid to make quota possible ...
> >
> > After some DEEP look into the kernel sources,
> > I think (please correct me if I am wrong) that
> > it should be possible to add a third kind of
> > quota (context quota) which limits the space
> > available to an entire context ...
> >
> > my suggestion is to make the following uid
> > mapping in/near the virtual filesystem layer:
> >
> > Process [context/16,uid/16] <--> Filesystem [uid/32]
> I have thought about this solution as well and came up with the
> following ideas
> context/16,0 would represent a special case and would
> maintain the sum for all files in a given context.

yes, but what I meant was to do the mapping
at the filesystem (vfs) layer and to hide
this mapping entirely from the process layer ...

what you would get is the same behaviour as before,
except that the uid is limited to 16 bits (or whatever
number you like between 1 and 32 ;)) and the remaining
bits hold the context ...
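
as a sketch of that split (nothing here is actual kernel code; CTX_BITS
and the helper names are illustrative assumptions, with the bit count
picked at compile time as described above), the encoding could look like:

```c
#include <stdint.h>

/* hypothetical compile-time choice: how many of the 32 bits go to the
 * context; the remainder is left for the per-context uid */
#define CTX_BITS   16
#define UID_BITS   (32 - CTX_BITS)
#define UID_MASK   ((1U << UID_BITS) - 1)

/* encode (context, per-context uid) into the on-disk 32-bit uid */
static inline uint32_t vfs_encode_uid(uint32_t ctx, uint32_t uid)
{
	return (ctx << UID_BITS) | (uid & UID_MASK);
}

/* split the on-disk uid back into its two halves */
static inline uint32_t vfs_uid_ctx(uint32_t duid)
{
	return duid >> UID_BITS;
}

static inline uint32_t vfs_uid_local(uint32_t duid)
{
	return duid & UID_MASK;
}
```

a process in context 3 with uid 1000 would then own files stored with
on-disk uid (3 << 16) | 1000, while still seeing plain uid 1000 itself.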

then I would add a third kind of quota counter
(that seems easy to me, less than 10 lines need
to be patched): a context.quota which covers the
entire context ...

this way you can have quota for the root user too

a nice side effect (at least in my opinion) would be
that "shared" files (read: hardlinked) would
not be counted against the quota, only the files
created within the context ...
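
a minimal sketch of such a per-context counter (structure and function
names are invented for illustration, and -1 stands in for the kernel's
-EDQUOT) might be:

```c
#include <stdint.h>

/* hypothetical per-context quota record */
struct ctx_quota {
	uint64_t blocks_used;   /* blocks currently charged to the context */
	uint64_t blocks_limit;  /* hard limit for the whole context, 0 = none */
};

/* charge an allocation to the context; refuse if it would exceed the limit */
static int ctx_quota_charge(struct ctx_quota *q, uint64_t blocks)
{
	if (q->blocks_limit && q->blocks_used + blocks > q->blocks_limit)
		return -1;      /* would be -EDQUOT in the kernel */
	q->blocks_used += blocks;
	return 0;
}

/* release blocks on truncate/unlink, never letting the count underflow */
static void ctx_quota_release(struct ctx_quota *q, uint64_t blocks)
{
	if (blocks > q->blocks_used)
		q->blocks_used = 0;
	else
		q->blocks_used -= blocks;
}
```

since the counter hangs off the context rather than a uid, it also
covers files created by root inside the vserver, as noted above.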

> context/16,uid/16 would do what you propose
> The same would be done for group, with no special case.

this could be applied to groups as well as to uids
(I hadn't thought about that one before ...)

> To make this working, you will need to assign context number
> by hand to vservers (which is supported). I was thinking of
> changing the way security context would be assigned.

yeah, I didn't mention it because I already use fixed
context numbers; I found it irritating to
get a random number for each virtual server ...

> Currently 0 is the original (when you boot), 1 is the magic context
> used to see everything and then a free context is allocated from
> 2 and up.
> I would change this to 1000. This means that the first 998
> security contexts would only be usable when explicitly requested.
> Any vservers not setting S_CONTEXT would end up using 1000 and up.

hmm ... maybe, but I would not make this number fixed,
because if I think about the variable number of
bits (1..32) used to fine-tune your context/user/group ratio,
it would be nice not to constrain this decision ...
(well, for the first shot it will definitely be a kernel
compile-time decision ...)

> > I am willing to try the required modifications
> > next week (or the week after) and would be happy
> > to get some suggestions/ideas from you ...
> Give it a shot and get back ?
> If you have an idea to let root in a vserver set up the
> quota itself for its uid range, welcome also :-)

I had some ideas for making the virtual server more
configurable by the maintainer of the virtual server
(the vserver root user) ...

one idea for how this could be done for quota would be
to write a file containing the quota limits for
all virtual users inside the vserver, and to
merge this information from outside, mapping the
uid/gid to the "real" uid/gid, where the actual
quota limits are set on a regular basis or on demand
(like the rebootmgr).

of course you have to make additional security checks
to keep the system safe/secure ... (i.e. uid/gid
range, root permissions etc.)
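
a rough sketch of that host-side translation step (record and function
names are invented, and a 16-bit split is assumed as above) could be:

```c
#include <stdint.h>
#include <stddef.h>

/* hypothetical limit record as root inside the vserver might write it */
struct vq_entry {
	uint32_t local_uid;     /* uid as seen inside the vserver */
	uint64_t block_limit;   /* requested block limit for that uid */
};

/* sanity-check and translate the entries before applying them outside:
 * reject uids beyond the per-context range, then shift each surviving
 * uid into the "real" 32-bit uid space for this context */
static int vq_translate(uint32_t ctx, const struct vq_entry *e, size_t n,
			uint32_t *real_uids)
{
	const uint32_t uid_bits = 16;                 /* assumed split */
	const uint32_t uid_max  = (1U << uid_bits) - 1;
	size_t i;

	for (i = 0; i < n; i++) {
		if (e[i].local_uid > uid_max)
			return -1;   /* out of range: refuse the whole file */
		real_uids[i] = (ctx << uid_bits) | e[i].local_uid;
	}
	return 0;
}
```

the range check is exactly the kind of safeguard meant above: a vserver
must never be able to name a uid outside its own context.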



This archive was generated by hypermail 2.1.4 : Mon Aug 19 2002 - 12:01:01 EDT