Sorry Gui, but your life experience does not convince me. Of course an
individual may accept it, but I think this artificial limitation is
not a good characteristic for a network-focused operating system. I
can imagine use cases which require a persistent connection longer
than 24h, such as an FTP/telnet session or live video streaming from
a webcam. In any case, if there is a possible solution I would prefer
to apply it instead of accepting this issue by default.
On 11/06/15 20:39, Gui Iribarren wrote:
On 10/06/15 07:22, Pau wrote:
-------- Forwarded Message --------
Subject: Re: [qmp-dev] TunOutTimeout to zero by default
Date: Wed, 10 Jun 2015 12:17:32 +0200
From: Pau <pau(a)dabax.net>
Reply-To: quick mesh project development mailing list
<qmp-dev(a)mail.qmp.cat>
To: qmp-dev(a)mail.qmp.cat
But still, as far as I know, it is just a cosmetic thing. There will
be a lot of network interfaces, but at least the TCP connections will
persist over time.
ugh :(
fetching stats through snmp with observium will be affected too
(maaaany interfaces)
I don't like the idea of saying "in qMp the maximum persistent time
is 1h"; after that, your connections will be reset :P
i'd be fine if the maximum persistent time is 24hs.
in deltalibre i set the RA prefix time to 24hs (which makes
networkmanager reconnect to renew the privacy address every 24hs) and
it never bothered me (i.e. i never needed a persistent connection for
24hs)
previously, prefix lifetime was set to 1 hour, which made
networkmanager reconnect every 20 mins, and it was exasperating :(
i'm mixing concepts here (RA prefix lifetime has nothing to do with
bmx6 or tunnels) but i'm using it as a real-life experience of an
acceptable value for breaking persistent connections
What would fix the problem is to create the tunnel on demand but
make it persistent forever. Thus each node will only have tunnels
created to the nodes it has contacted at least once (until the next
reboot).
What do you think, can you implement this option?
that could be nice, but i think a 24h timeout would suffice.
the question... is that a valid value for the current tunOutTimeout?
it doesn't seem so:
root@berni:~# bmx6 -c --version
BMX6-0.1-alpha comPatibility=16 revision=8b0585e84ca8a0110bd4587e...
root@berni:~# bmx6 -c tunOutTimeout 3600000
ERROR : --tunOutTimeout value 3600000 is invalid! Must be 0 <= <value>
<= 100000 !
ERROR : --tunOutTimeout 3600000 # Failed ! ( diff:0 ad:0 val:0
min:0 max:100000 def:60000 OPT_PATCH 2 1 7 )
ERROR apply_stream_opts: invalid argument: 3600000
digging into the code:
hna.h:#define MAX_TUN_OUT_TO REGISTER_TASK_TIMEOUT_MAX
and that in turn is defined in:
schedule.h:#define REGISTER_TASK_TIMEOUT_MAX XMIN( 100000, TIME_MAX>>2)
so a max of 100 seconds (100000 ms) is hardcoded, and would need to
be patched.
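for reference, the 86400000 value below is just 24h expressed in
milliseconds:

$ echo $((24*60*60*1000))
86400000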
axel: does 86400000 (24hs) sound like an unreasonable value for that
MAX_TUN_OUT_TO?
i imagine redefining REGISTER_TASK_TIMEOUT_MAX would have extra side
effects, so i'd just do this:
diff --git a/hna.h b/hna.h
index 91ddf1b..11048b4 100644
--- a/hna.h
+++ b/hna.h
@@ -71,7 +71,7 @@ extern struct avl_tree tun_in_tree;
#define ARG_TUN_OUT_TIMEOUT "tunOutTimeout"
#define MIN_TUN_OUT_TO 0
-#define MAX_TUN_OUT_TO REGISTER_TASK_TIMEOUT_MAX
+#define MAX_TUN_OUT_TO 86400000
#define DEF_TUN_OUT_TO 60000
#define DEF_TUN_OUT_PERSIST 1
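untested sketch: after rebuilding bmx6 with that one-line change, the
same command that failed above should presumably be accepted
(assuming no other code path re-checks the limit):

root@berni:~# bmx6 -c tunOutTimeout 86400000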
axel: any thoughts or concerns?
cheers!
Thanks.
On 10/06/15 11:29, Axel Neumann wrote:
> I would not do that!!!
>
> tunOutTimeout=0 means that all possible tunnels are activated by
> default!
>
> For a large qmp cloud, with each node offering its own ipv4 and
> ipv6 tunnel endpoints, this would be a massive number of always
> activated tunnels !!! Something very ugly when looking at those
> nodes' interface lists.
>
> A tunOutTimeout != 0 means tunnels are only activated on demand,
> and the list of active tunnel interfaces usually remains small.
>
> A compromise could be to configure a large value (e.g.
> tunOutTimeout = 3600000), which means that tunnels are set up on
> demand and only cut down (temporarily) every hour.
>
>
> /axel
>
> On 10.06.2015 10:42, Pau wrote:
>> Hello. Following the thread discussion [1], where the user
>> reports problems with streaming cuts, I would propose adding the
>> option tunOutTimeout=0 by default to the qMp system. The
>> advantages of this timeout are just cosmetic (as far as I know),
>> so I would disable it for the moment to avoid the persistent
>> connection problems reported by that user and some others.
>>
>> Find the patch at the end of this mail and feel free to apply it
>> if you agree.
>>
>> [1] https://mail.dabax.net/pipermail/qmp-users/2015-June/000802.html
>>
>> diff --git a/packages/qmp-system/files/etc/qmp/qmp_functions.sh
>> b/packages/qmp-system/files/etc/qmp/qmp_functions.sh
>> index eba5887..450a2a7 100755
>> --- a/packages/qmp-system/files/etc/qmp/qmp_functions.sh
>> +++ b/packages/qmp-system/files/etc/qmp/qmp_functions.sh
>> @@ -727,6 +727,7 @@ qmp_configure_bmx6() {
>>
>>  qmp_configure_prepare $conf
>>  uci set $conf.general="bmx6"
>> + uci set $conf.general.tunOutTimeout=0
>>  uci set $conf.bmx6_config_plugin=plugin
>>  uci set $conf.bmx6_config_plugin.plugin=bmx6_config.so
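for clarity, a minimal sketch of what that patch would produce on a
node, assuming $conf expands to the usual bmx6 uci package:

$ uci show bmx6.general
bmx6.general=bmx6
bmx6.general.tunOutTimeout=0

i.e. every tunnel that gets activated stays up permanently, which is
the interface-list explosion Axel describes above.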
_______________________________________________
Dev mailing list
Dev(a)lists.libre-mesh.org
https://lists.libre-mesh.org/mailman/listinfo/dev