[olug] IO Performance

George Neill georgen at neillnet.com
Thu Sep 15 23:45:48 UTC 2011


Jay,

There are additional tunables under the /sys/block/$dev/queue/iosched/
directory; each scheduler exposes a different set.

Depending on the access patterns of your machine, you might look at the
iosched/writes_starved and the iosched/*_expire tunables for the deadline
scheduler.
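
For example, to see what the deadline scheduler exposes and nudge it
(device name and values below are hypothetical, not recommendations):

    cat /sys/block/sda/queue/scheduler   # active scheduler shown in brackets
    ls /sys/block/sda/queue/iosched/     # tunables for the active scheduler
    # service deadline for reads, in milliseconds (default 500)
    echo 250 > /sys/block/sda/queue/iosched/read_expire
    # read batches dispatched before starved writes get a turn (default 2)
    echo 4 > /sys/block/sda/queue/iosched/writes_starved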

Can you elaborate on the kinds of disk access patterns you see on the
machine you are tuning?

With large sequential reads/writes, I have had the best luck with the
anticipatory and deadline schedulers ... CFQ seems to slow/hang the system
under multi-user/heavy loads.  If you are dealing with lots of small files,
I didn't notice much difference between the I/O schedulers, but in that
situation I'd definitely recommend setting the read-ahead cache to 0.  If
you have a decent HW RAID card, I'd test setting the scheduler to noop and
letting the HW manage the IO requests.
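
A minimal sketch of both suggestions (sda is a stand-in for your device;
measure before and after):

    # small-file workloads: disable read-ahead
    echo 0 > /sys/block/sda/queue/read_ahead_kb
    # HW RAID: let the controller do the request ordering/merging
    echo noop > /sys/block/sda/queue/scheduler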

I have seen over 1 GB/s read and write using CentOS 5.5 on a DELL PE510
with 12 SAS drives behind an H700 PERC in a RAID5 config, just by changing
the I/O scheduler to noop and adjusting the read-ahead cache.

The ext4 developers have done some neat stuff; there are very noticeable
performance gains when writing. See the multiblock allocation section at
the link below.

http://kernelnewbies.org/Ext4#head-b2148d2a96d22a1bd7e376e6c08e4a38d08fb157
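
If you want to try ext4, a minimal sketch (device and mount point are
hypothetical, and your distro needs the ext4 tools); multiblock allocation
is on by default:

    mkfs.ext4 /dev/sdb1
    mount -t ext4 -o noatime /dev/sdb1 /mnt/data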

Thanks,
George




On Thu, Sep 15, 2011 at 10:25 AM, jay swackhamer <reboottheuser at gmail.com> wrote:

> I know there are, but I was looking for some specifics: whether anyone
> has experience tuning these, and what, if anything, had a noticeable
> impact.
>
> On Thu, Sep 15, 2011 at 10:09 AM, George Neill <georgen at neillnet.com>
> wrote:
>
> > Jay,
> >
> > The file system you choose and the way your disk is laid out can have
> > a huge impact as well.
> >
> > There are many file system tunables you can mess around with.
> >
> > Later
> > George
> >
> > On 9/15/11, jay swackhamer <reboottheuser at gmail.com> wrote:
> > > It appears that a code-level on the IBM XIV disk array that relates to
> > > replication has more to do with IO performance than Linux Tuning.
> > >
> > > Now we are running batch schedules through Oracle faster than
> > > previously, so there was some benefit, and now I can tweak/tune these
> > > some more to get the best numbers for the workload.
> > >
> > > On the road to this discovery, I found these parameters related to IO
> > > performance.
> > >
> > > Does anyone else have any IO related tuning tips?
> > >
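> > > # VM tuning: swap reluctantly, keep a small free-memory reserve, and
> > > # (RHEL4-era vm.pagecache sysctl) cap the page cache at min/borrow/max
> > > # percent of RAM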
> > > echo 10 > /proc/sys/vm/swappiness
> > > echo 1024 > /proc/sys/vm/min_free_kbytes
> > > sysctl -w vm.pagecache="1 10 30"
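> > > # raise the QLogic FC HBA's maximum per-device queue depth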
> > > echo 192 > /sys/module/qla2xxx/parameters/ql2xmaxqdepth
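> > > # capture the multipath topology, then tune each underlying sd path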
> > > multipath -ll > /tmp/multipath.txt
> > > for i in `grep sd /tmp/multipath.txt | awk ' { print $3 } '`
> > > do
> > >     echo 192 > /sys/block/$i/device/queue_depth
> > >     echo 384 > /sys/block/$i/queue/nr_requests
> > >     echo 512 > /sys/block/$i/queue/read_ahead_kb
> > >     echo deadline > /sys/block/$i/queue/scheduler
> > > done
> > >
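> > > # remount the LVM-backed filesystems without atime updates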
> > > for i in `mount | grep vg | awk ' { print $1 } '`
> > > do
> > >     mount -o remount,noatime,async $i
> > > done


