[SUMMARY-II] VxVM Volume/UFS overhead

From: Levi Ashcol <leviashcol_at_hotpop.com>
Date: Thu Oct 28 2004 - 16:13:23 EDT
Another note from Darren Dunham:

Traditionally, the default was 10%.  However, in recent versions of
Solaris the default was changed to be smaller on bigger filesystems.

From the Solaris 8 'newfs' man page:

                 The default is ((64 Mbytes/partition size) * 100),
                 rounded down to the nearest integer and limited
                 between 1% and 10%, inclusively.

So anything over 6.4 GB on Solaris 8 should already be using a minfree
of 1% (unless overridden).
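
For example, plugging a few partition sizes into that formula (the sizes
here are purely illustrative, not taken from this thread):

    (64 MB / 2048 MB)   * 100 = 3.125  -> minfree defaults to 3%  (2 GB)
    (64 MB / 6400 MB)   * 100 = 1.0    -> minfree defaults to 1%  (6.4 GB)
    (64 MB / 300000 MB) * 100 = 0.02   -> limited up to 1%        (~293 GB)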

Thanks.

Levi


-----Original Message-----
From: Levi Ashcol [mailto:leviashcol@hotpop.com] 
Sent: Tuesday, October 26, 2004 10:31 PM
To: 'sunmanagers@sunmanagers.org'
Subject: [SUMMARY] VxVM Volume/UFS overhead 


The original post is below.

Thanks to:
 Darren Dunham 
 Kevin Johnson
 francisco [frisco@blackant.net]
 Doug Granzow 
 Nathan Dietsch
 Miller Alan 

Summary:

1- In general, most of the Volume Manager overhead comes from
the allocation of the private region, and from rounding effects due to
keeping volumes aligned on cylinder boundaries.  Loss of space within a
volume tends to be an insignificant component.
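
A rough way to see the private region size on a given disk, assuming
VxVM 3.x behaviour (the exact disk naming and output format may differ
on your version):

  # vxdisk list c6t5d208 | grep private

Here c6t5d208 is one of the disks from the vxprint output below, and the
len= field on the "private:" line is the private region length in 512-byte
sectors.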

2- The UFS overhead is controlled by a tuning parameter called minfree,
whose default value is 10% of the filesystem size.
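
As a quick sanity check against the df output quoted in the original post
below, the "missing" space on both filesystems is exactly that 10%:

  apps_bin:   294910784 - (92205081 + 173214625) = 29491078 KB
              29491078 / 294910784 ~= 10%
  apps_data:  39321424  - (9 + 35389273)         =  3932142 KB
              3932142 / 39321424   ~= 10%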

3- The minfree space is not wasted; it is reserved for the root user.
Any process running as root can continue to write to disk until all disk
space is used up.  This protects the system from accidental or
intentional excessive disk space consumption by non-root users.
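
To see the effect (a rough sketch; /apps/data is the mount point from the
original post and 'fill' is just a throwaway file name):

  $ dd if=/dev/zero of=/apps/data/fill bs=1024k

Run as an ordinary user, this starts failing with "No space left on device"
once used space reaches (100 - minfree)% of the filesystem; run as root, the
same command can keep writing into the reserved space until the filesystem
is completely full.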

4- The value of minfree can be set when creating a filesystem with
newfs -m, or changed dynamically after the filesystem has been created
using tunefs -m 1 /filesystem (where 1 is the desired percentage).
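
For example (the device and mount point below are taken from the original
post; adjust them to your own environment):

  # newfs -m 1 /dev/vx/rdsk/datadg/apps_data    <- at creation time
                                                   (destroys existing data)
  # tunefs -m 1 /apps/data                      <- on an existing filesystem

tunefs also accepts the raw device instead of the mount point.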


5- To check the current minfree value (percentage) of a filesystem:
  fstyp -v /dev/vx/rdsk/datadg/apps_bin | grep minfree

6- Decreasing minfree to 1% has no effect on performance, contrary to what
the tunefs man page suggests; see Sun Infodoc 15769.

Thanks

Levi

-----Original Message-----
From: sunmanagers-bounces@sunmanagers.org
[mailto:sunmanagers-bounces@sunmanagers.org] On Behalf Of Levi Ashcol
Sent: Monday, October 25, 2004 5:44 PM
To: sunmanagers@sunmanagers.org
Subject: VxVM Volume/UFS overhead 


Hi Gurus,
 I have an SF280 server connected to external EMC storage (Solaris 8,
108528-27), using VxVM 3.2 (REV=08) and UFS.

I noticed a massive waste of disk space when creating a new concatenated
volume and a filesystem on it.

Look at the following:
# df -k
Filesystem                    kbytes     used      avail      capacity  Mounted on
/dev/vx/dsk/datadg/apps_bin   294910784  92205081  173214625     35%    /apps/bin
/dev/vx/dsk/datadg/apps_data   39321424         9   35389273      1%    /apps/data


The first filesystem is around 294 GB in size, but if you add up used and
avail you get around 265 GB, i.e. about 30 GB of space is missing!
For the second filesystem about 4 GB is missing!

My questions:
  - Where did this space go? Is this normal behavior? Does it make sense
for VxVM to eat 30 GB for its own operations?!
  - What percentage of the total filesystem size goes to filesystem
overhead?
  - Does the volume manager impose any overhead of its own, and if so,
what percentage is it?
  - Is there any mount option or kernel parameter to correct this?


Here is the vxprint output for the volumes:
# vxprint -htA | grep apps_bin
v  apps_bin      -             ENABLED  ACTIVE    629145600 SELECT    -        fsgen
pl apps_bin-01   apps_bin      ENABLED  ACTIVE    629145600 CONCAT    -        RW
sd datadg03-01   apps_bin-01   datadg03  0        214141440 0         c6t5d208 ENA
sd datadg04-01   apps_bin-01   datadg04  0        214141440 214141440 c6t5d209 ENA
sd datadg05-01   apps_bin-01   datadg05  0        200862720 428282880 c6t5d210 ENA

# vxprint -htA | grep apps_data
v  apps_data     -             ENABLED  ACTIVE    83886080  SELECT    -        fsgen
pl apps_data-01  apps_data     ENABLED  ACTIVE    83888640  CONCAT    -        RW
sd datadg01-02   apps_data-01  raid_dg01 3148800  83888640  0         c6t5d212 ENA


I will definitely summarize!

Thanks

Levi
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers