Saturday, October 07, 2006

7.3 - Clearing the console each time a user logs out.

To do this you must edit /etc/gettytab (see gettytab(5)). Change the current console entry:

P|Pc|Pc console:\
	:np:sp#9600:

by appending the capability ":cl=\E[H\E[2J:", so that it ends up looking like this:

P|Pc|Pc console:\
	:np:sp#9600:\
	:cl=\E[H\E[2J:

Monday, September 18, 2006

What are the Q1 and Q2 parameters?

In the source code, these are called magic1 and magic2. They limit the number of outstanding requests on a message queue. They are specified on the cache_dir option line, after the L1 and L2 directory values:

cache_dir diskd /cache1 1024 16 256 Q1=72 Q2=64

If there are more than Q1 messages outstanding, then Squid will intentionally fail to open disk files for reading and writing. This is a load-shedding mechanism: if your cache gets very busy and the disks cannot keep up, Squid bypasses the disks until the load goes down again.

If there are more than Q2 messages outstanding, then the main Squid process "blocks" for a little while until the diskd process services some of the messages and sends back some replies.

Reasonable Q1 and Q2 values are 64 and 72. If you would rather have a good hit ratio at the cost of response time, set Q1 > Q2. If you would rather have good response time at the cost of hit ratio, set Q1 < Q2.
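Both tunings go directly on the cache_dir line. A sketch, reusing the directory path and size values from the example above (pick one; two cache_dir lines for the same directory would conflict):

```
# Favour hit ratio over response time (Q1 > Q2):
cache_dir diskd /cache1 1024 16 256 Q1=72 Q2=64

# Favour response time over hit ratio (Q1 < Q2):
# cache_dir diskd /cache1 1024 16 256 Q1=64 Q2=72
```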

Friday, September 01, 2006

comm_accept: FD 10: (53) Software caused connection abort

> 2004/01/06 18:44:59| httpAccept: FD 10: accept failure: (53) Software caused connection abort
> 2004/01/06 18:46:25| comm_accept: FD 10: (53) Software caused connection abort
> 2004/01/06 18:46:25| httpAccept: FD 10: accept failure: (53) Software caused connection abort

These are considered harmless unless bad effects are reported by the
clients.

The assumption is that these occur in certain conditions when the
browser aborts the connection before sending the request.

Regards
Henrik

For more detail, read this thread:
http://www.squid-cache.org/mail-archive/squid-users/200401/0239.html

Friday, August 25, 2006

Queueing Disciplines

A queueing discipline controls outgoing traffic by packet
scheduling and/or queue buffer management.
(Yes, it controls only outgoing traffic.)
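A practical consequence, sketched as a pf.conf fragment (the interface name and rates are illustrative, not from any particular setup): since a queueing discipline acts on outgoing packets only, traffic arriving from the internet is shaped where it leaves the firewall, i.e. on the internal interface.

```
# Shape "download" traffic on the internal interface, where it is
# outgoing (firewall -> LAN). fxp1 and the rates are placeholders.
altq on fxp1 cbq bandwidth 2Mb queue { dl_web, dl_bulk }
queue dl_web  bandwidth 60% cbq(default)
queue dl_bulk bandwidth 40% cbq(borrow)
```

The longer HFSC configuration later in this page ("Outgoing Queues on Internal Bridge Interface") uses the same idea.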

PF HFSC part II

# hfsc part 2 #
###############
Monday, February 16, 2004, 6:43:48 PM, Kenjiro Cho wrote:

>you can do the following:
>queue Q1 bandwidth 0% hfsc (realtime 32Kb, linkshare 0%)
>bandwidth for Q1 invalid (0 / 0)
>./pf.conf:xx: errors in queue definition
>oops! the bandwidth check is a bit too strict.
>the patch at the end of this mail should fix it.
yeah...

>but, again, this isn't the right solution for you since you don't need
>any realtime guarantee.
why not? it is just a way to specify initial bandwidth for queue. no?

>setting both realtime and linkshare to 0 means that 0bps is assigned to
>the queue. then, the queue will get nothing.
i don't want realtime==0 and linkshare==0 :) the goal is to construct some queues that have no ability to receive excess bandwidth (linkshare==0).

for example :
altq on fxp0 bandwidth 256Kb hfsc queue {Q1,Q2,Q3}
# guarantee and don't try to borrow for Q1 and Q2
queue Q1 hfsc (realtime 32Kb linkshare 0%)
queue Q2 hfsc (realtime 32Kb linkshare 0%)

# guarantee, borrow the rest, plus some from Q1 and Q2 if avail
queue Q3 hfsc (realtime 64Kb linkshare 100% default)

#equivalent to cbq's
altq on fxp0 bandwidth 256Kb cbq queue {Q1,Q2,Q3}
queue Q1 bandwidth 32Kb cbq
queue Q2 bandwidth 32Kb cbq
queue Q3 bandwidth 64Kb cbq (borrow default)

#but as far as i understood from our discussion, the
altq on fxp0 bandwidth 256Kb hfsc queue {Q1,Q2,Q3}
queue Q1 hfsc (linkshare 33% upperlimit 32Kb)
queue Q2 hfsc (linkshare 33% upperlimit 32Kb)
queue Q3 hfsc (linkshare 33% upperlimit 64Kb default)

will do almost the same, but not the same: the first will guarantee
Q1 and Q2 their bandwidth; the second will not, unless Q3's upperlimit
is handcrafted appropriately or Q3 is not saturated enough.
additionally, you need to handcraft the linkshare values. completely
dizzying :)

so the first example is easier to construct, but the bandwidth and
linkshare confusion remains (i will try your patch).

btw,
altq on fxp0 bandwidth 256Kb hfsc queue {Q1,Q2,Q3}
# guarantee and don't try to borrow for Q1 and Q2
queue Q1 hfsc (realtime 32Kb linkshare 50%)
queue Q2 hfsc (realtime 32Kb linkshare 50%)
# guarantee, borrow the rest (including from Q1 and Q2)
queue Q3 hfsc (realtime 64Kb linkshare 100% default)

parses normally, but 100+50+50 > 100 :) it is cloudy how the
scheduler deals with this...

>as for me bandwidth is bogus with hfsc...
>do you have a better idea?

make linkshare primary, not bandwidth (even in the linkshare=0 case).
this is logical since the original ALTQ has no bandwidth keyword. why is

>/* if link_share is not specified, use bandwidth */
>if (opts->lssc_m2 == 0)
>        opts->lssc_m2 = pa->bandwidth;

needed? just leave opts->lssc_m2 == 0 when bandwidth is not specified;
opts->lssc_m2 = pa->bandwidth should be used only if bandwidth is
specified.

it will not break bandwidth-aware setups, but will give more power
to non-bandwidth ones...

PF HFSC part I

## Notes on HFSC #
##################
* real-time: guarantee service curves of all leaf classes
* link-sharing: guarantee service curves of interior classes and distribute excess service fairly
* whenever there is a potential of conflict, real-time criteria is used

if (there is an eligible packet)
        /* real-time criterion */
        send the eligible packet with min. deadline d;
else
        /* link-sharing criterion */
        send the packet with min. virtual time v;

e - eligible time, d - deadline, v - virtual time
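The selection rule above can be sketched in Python. The dict-based class representation and its field names are my own illustration, not taken from the ALTQ source:

```python
# Sketch of the HFSC packet-selection rule quoted above. Each class is
# a dict carrying its eligible time "e", deadline "d", and virtual
# time "v" (illustrative representation, not the ALTQ data structures).

def select_next(classes, now):
    """Return the class to dequeue from at time `now`."""
    # Real-time criterion: among eligible classes (e <= now),
    # pick the one with the minimum deadline d.
    eligible = [c for c in classes if c["e"] <= now]
    if eligible:
        return min(eligible, key=lambda c: c["d"])
    # Link-sharing criterion: otherwise pick the class with the
    # minimum virtual time v.
    return min(classes, key=lambda c: c["v"])
```

Note that whenever any class is eligible, the real-time branch wins regardless of virtual times, which is exactly the "whenever there is a potential of conflict, real-time criteria is used" rule.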
#references that helped me (and where the above quotes came from):
http://www-2.cs.cmu.edu/~hzhang/HFSC/TALK/sld038.htm (keep reading till sld045.htm)
http://www-2.cs.cmu.edu/~hzhang/HFSC/intro.html

##Notes on HFSC in PF:
* realtime values cannot add up to more than 75% of the total interface bandwidth, and are calculated against the entire interface rather than the parent queue. linkshare values max out at 100% as normal and are calculated within their parent queues
* qlimit is the limit on packets waiting in-queue to be sent. if the qlimit is reached, any additional packets are dropped. this keeps traffic flowing smoothly instead of backing up and causing delays. I assign high qlimits to traffic which I want to be reliable even if slow, and low qlimits to ensure smooth flow at the expense of dropped packets.
* bandwidth (other than on the altq line itself) is just a fallback value for linkshare. it should not even be necessary to specify it on newer versions of pf.
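A minimal pf.conf fragment illustrating the first two notes (queue names, rates, and qlimits here are made up for illustration): the realtime values sum to 160Kb, staying under the 75% cap (192Kb of the 256Kb interface bandwidth), and the two queues take opposite qlimit strategies.

```
altq on fxp0 bandwidth 256Kb hfsc queue { rel, fast }
# high qlimit: prefer queueing delay over drops (reliable but slow)
queue rel  qlimit 500 hfsc( realtime 96Kb linkshare 50% )
# low qlimit: drop early to keep latency low
queue fast qlimit 20  hfsc( realtime 64Kb linkshare 50% default )
```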

#Outgoing Queues on External Bridge Interface (from local to internet) Traffic Conditioning/Shaping

#768kbit upload, using hfsc (78kB/s 624kbit/s max via ftp, pftop shows 85kB/s or 680kbit/s)
altq on $ext hfsc bandwidth 768Kb qlimit 150 queue {root_out }
#only give out 95% of the bandwidth, this helps improve overall queue handling, esp latency
queue root_out bandwidth 95% hfsc( linkshare 89% upperlimit 89% ) { oCtl, oAck, oDly, oBrst, oRel, oTput }

#icmp control traffic
queue oCtl bandwidth 16Kb qlimit 15 hfsc( realtime 16Kb linkshare 16Kb )

#tcp ack traffic
queue oAck bandwidth 32Kb qlimit 150 hfsc( realtime 64Kb linkshare 32Kb red )

#latency sens traffic
queue oDly bandwidth 160Kb qlimit 50 hfsc( realtime 164Kb linkshare 160Kb )

#burst-prone traffic
queue oBrst bandwidth 84Kb qlimit 50 hfsc( realtime (256Kb 6000 94Kb) linkshare(2Kb 6000 84Kb))

#reliable traffic
queue oRel bandwidth 128Kb qlimit 100 hfsc(linkshare 128Kb ) { oRelTCP, oRelUDP }
queue oRelTCP bandwidth 64Kb qlimit 50 hfsc (linkshare 64Kb default red )
queue oRelUDP bandwidth 64Kb qlimit 50 hfsc(linkshare 64Kb)

#throughput-oriented traffic
queue oTput bandwidth 100Kb qlimit 25 hfsc(linkshare 100Kb red )

### Outgoing Queues on Internal Bridge Interface (from internet to local) Traffic Conditioning/Shaping
altq on $int hfsc bandwidth 1240Kb queue {root_in }
queue root_in hfsc(linkshare 95%) {iCtl, iAck, iDly, iBrst, iRel,iTput}
queue iCtl bandwidth 3% qlimit 15 hfsc( realtime 16Kb linkshare 16Kb)
queue iAck bandwidth 20% qlimit 1000 hfsc( realtime 64Kb linkshare 64Kb red)
queue iDly bandwidth 20% qlimit 25 hfsc( realtime 128Kb linkshare 128Kb)
queue iBrst bandwidth 20% qlimit 25 hfsc( realtime(256Kb 8000 128Kb) linkshare(512Kb 8000 128Kb) )
queue iRel bandwidth 20% qlimit 150 hfsc( realtime 512Kb linkshare 256Kb default)
queue iTput bandwidth 10% qlimit 50 hfsc(linkshare 128Kb)
