
How long does a deprecated IPv6 address remain attached to an interface?

This may be obvious to normal folk but it wasn't to me, hence this post to remind me in the future.

The question posed to mine bad self was:

How long does a deprecated IPv6 address remain attached to an interface?

You may already know the answer. But in case you don't...

I packed a bag full of things I'd need - crab paste sandwiches, weak lemon drink, a compass, silk stockings and a plastic carrot - and set off on my journey through the Googles, a bit of oxygen but mainly laughing gas in my lungs.

My first stop, and first sip of weak lemon drink, was at an Oracle manpage:

An IPv6 deprecated address will eventually be deleted when not used,

Excellent! But when?

My next stop pointed out the bleeding obvious (good, old Wiki):

When an address is assigned to an interface it gets the status "preferred", which it holds during its preferred-lifetime. After that lifetime expires the status becomes "deprecated" and no new connections should be made using this address. The address becomes "invalid" after its valid-lifetime also expires

Er, yeah, that actually makes sense. Indeed:

$ ip addr | fgrep -A 1 temporary
    inet6 fd3c:c307:7f95:0:6957:dcf5:f759:e2e9/64 scope global temporary dynamic
       valid_lft 575828sec preferred_lft 56828sec
    inet6 fd3c:c307:7f95:0:d5b5:b1d0:a807:21a9/64 scope global temporary deprecated dynamic
       valid_lft 490031sec preferred_lft 0sec
    inet6 fd3c:c307:7f95:0:10d:8171:ff7f:777f/64 scope global temporary deprecated dynamic
       valid_lft 404233sec preferred_lft 0sec
    inet6 fd3c:c307:7f95:0:4dc0:7c07:c401:490b/64 scope global temporary deprecated dynamic
       valid_lft 318436sec preferred_lft 0sec

When valid_lft reaches 0 it's curtains for that address. The default on my computer seems to be a valid_lft of 604800 seconds (7 days), which is seven times the preferred_lft of 86400 seconds (24 hours). Not a massive deal, but it does mean that after seven days I'd have six or seven deprecated ULA addresses hanging around per interface, and eventually another six or seven deprecated global unicast addresses per interface too, assuming privacy extensions are enabled. Messy.

I know, I know, I can disable ULA if I want to. But I don't want to. So that's that solved.

There is a max_addresses parameter (default 16). I tried to find out whether the system would remove the old deprecated addresses or report a failure when this limit is reached, but the bloody stupid thing gave up long before I got near the limit, removed all the deprecated addresses far too early, and refused to create more ULAs. I may revisit this sometime. Of course, with the default of 16 I could easily exceed the limit if two routers with two global IPv6 prefixes are reachable via one NIC and ULA is enabled - easily achievable using fibre and LTE on one router, for example.
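If you want to poke at the cap itself, it lives in the usual place under /proc; a quick sketch (eth0 and the value 32 are just examples):

$ cat /proc/sys/net/ipv6/conf/eth0/max_addresses
16
$ sudo sysctl -w net.ipv6.conf.default.max_addresses=32
net.ipv6.conf.default.max_addresses = 32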

RFC 4862 talks about how deprecated and invalid addresses should be treated but doesn't mention when the node should simply drop the address altogether. It makes sense for an operating system to simply rid itself of the burden as soon as the address has expired.
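If you can't be bothered to wait for valid_lft to hit zero, a reasonably recent iproute2 will (I believe) let you select addresses by flag and flush the deprecated ones by hand - treat this as a sketch, with eth0 standing in for your interface:

$ ip -6 addr show dev eth0 deprecated
$ sudo ip -6 addr flush dev eth0 deprecated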

You may now enjoy the crab paste and put the plastic carrot to good use.

For reference (at time of writing):

For temporary addresses, valid_lft and preferred_lft can be set/queried in Linux at /proc/sys/net/ipv6/conf/default/temp_valid_lft and /proc/sys/net/ipv6/conf/default/temp_prefered_lft respectively (note the kernel's one-r spelling of "prefered"). The wildcard below expands alphabetically, so the preferred lifetime (86400) is printed first:

$ cat /proc/sys/net/ipv6/conf/default/temp_*
86400
604800
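If the week-long tail of deprecated addresses bothers you, the defaults can be shortened. A hedged example - the values are arbitrary, and the "default" settings only affect interfaces that appear afterwards (existing interfaces have their own entries under conf/<ifname>/):

$ sudo sysctl -w net.ipv6.conf.default.temp_prefered_lft=43200
$ sudo sysctl -w net.ipv6.conf.default.temp_valid_lft=86400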

Roku / Now TV hidden menus

Bit Rate Override
Home x5, Rewind x3, Fast Forward x2

Channel Info
Home x3, Up x2, Left, Right, Left, Right, Left

Developer Settings
Home x3, Up x2, Right, Left, Right, Left, Right

Platform/Wi-Fi Secret Screen
Home x5, Fast Forward, Play, Rewind, Play, Fast Forward

Reboot
Home x5, Up x1, Rewind x2, Fast Forward x2

Secret Screen
Home x5, Fast Forward x3, Rewind x2

Throttling CPU usage with Linux cgroups

There are a number of reasons you may want to throttle rather than limit a process's CPU usage on your system. Good reasons include keeping the CPU temperature down or simply reducing the amount of energy a certain process uses.

Limiting versus throttling

The term "limit" is nearly always used where throttling is actually required. A good example of why the two are not interchangeable would be the current ISP industry:

Example 1: Sally signs up for super fast broadband (100 Mbps) but hasn't read the small print: she can only download 10 GB of data before her connection is terminated and she has to wait for the next month before she can continue to use her service. Sally's service is not throttled but it is limited.

Example 2: Tony signs up for a basic package (1 Mbps) as he doesn't need to use the Internet a great deal. However, he had the good sense to choose an unlimited package so that he doesn't hit any usage caps. His router syncs at 100 Mbps but he only receives a 1 Mbps service. The ISP's equipment is throttling his service, but not limiting it.

Example 3: Benedict has signed up for some deal without reading any of the details. He receives a 20 Mbps service and is happy with the speeds. Unfortunately for him, after downloading 5 GB of data his download rate drops to 1 Mbps. He has limits on his service, which have led to it being throttled.

There are many instances where you may wish to both limit and throttle CPU usage. The former is very easy and well documented, the latter not so much.
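To give a flavour of the throttling side, here's a minimal sketch using the cgroup v1 cpu controller. The mount point /sys/fs/cgroup/cpu, the group name "throttled", the 25000/100000 quota/period pair (roughly a quarter of one CPU) and the PID 1234 are all just example assumptions:

$ sudo mkdir /sys/fs/cgroup/cpu/throttled
$ echo 100000 | sudo tee /sys/fs/cgroup/cpu/throttled/cpu.cfs_period_us
$ echo 25000 | sudo tee /sys/fs/cgroup/cpu/throttled/cpu.cfs_quota_us
$ echo 1234 | sudo tee /sys/fs/cgroup/cpu/throttled/tasks

Anything moved into the group is then capped at roughly 25% of a single CPU regardless of how idle the rest of the machine is - which is the throttling behaviour described above, not a usage cap.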

Xpra failing on localhost

If you know Xpra then you know you can access a "background" X session remotely. It's reasonable to assume that you may also want to access it locally from time to time.

On Ubuntu Precise I found that attempting to attach to the local session failed silently. I had been using:

$ xpra attach -z0 ssh:user@localhost:99

This produces no output. The server log isn't very forthcoming either:

New connection received
Handshake complete; enabling connection
encoding set to rgb24, client supports ['rgb24', 'jpeg', 'png'], server supports ['rgb24', 'jpeg', 'png']
Unhandled error while processing packet from peer
Traceback (most recent call last):
  File "/usr/lib/xpra/xpra/protocol.py", line 338, in _process_packet
    self._process_packet_cb(self, decoded)
  File "/usr/lib/xpra/xpra/server.py", line 1957, in process_packet
    self._packet_handlers[packet_type](self, proto, packet)
  File "/usr/lib/xpra/xpra/server.py", line 1371, in _process_hello
    f = open(mmap_file, "r+b")
IOError: [Errno 13] Permission denied: '/tmp/xpra.QGv7UA.mmap'
connection lost: empty marker in read queue
Connection lost

The clue is in the .mmap. The server needs to be started with the --no-mmap option.

From the man page:

       --no-mmap
              Disables memory mapped pixel data transfer. By default it is normally enabled automatically
              if the server and the client reside on the same filesystem namespace.

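Putting it together - a sketch, with the display number and the xterm child purely illustrative - start the server with mmap disabled and the local attach behaves:

$ xpra start :99 --no-mmap --start-child=xterm
$ xpra attach -z0 ssh:user@localhost:99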
All good!