
I'm working on implementing multicasting for Contiki. I've got a couple of Ubuntu VMs on a NAT, each with an eth0 interface.

Connected to one of these VMs is a Contiki border router (a Zigduino), which creates its own interface, tun0 (see also http://anrg.usc.edu/contiki/index.php/RPL_Border_Router ).

One of the VMs multicasts UDP packets to ff1e::. I can see in Wireshark that every VM receives this multicast packet, yet my border router's tun0 interface never sees it. I'd like the VM connected to the border router to forward all multicast packets it receives on eth0 to tun0, so that the border router sees them and can inject them into its network.

How can I do this in Ubuntu? I'm stuck; I've tried adding routes, but that doesn't work.

Addendum: the ifconfig output of the VM with the router:

eth0      Link encap:Ethernet  HWaddr 00:50:56:24:dd:45  
          inet addr:192.168.59.131  Bcast:192.168.59.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe24:dd45/64 Scope:Link
          inet6 addr: bbbb::2/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40205 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25436 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:25834566 (25.8 MB)  TX bytes:4091267 (4.0 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:734 errors:0 dropped:0 overruns:0 frame:0
          TX packets:734 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:51481 (51.4 KB)  TX bytes:51481 (51.4 KB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:127.0.1.1  P-t-P:127.0.1.1  Mask:255.255.255.255
          inet6 addr: fe80::1/64 Scope:Link
          inet6 addr: aaaa::1/64 Scope:Global
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:586 (586.0 B)  TX bytes:1046 (1.0 KB)

This is for IPv6! I've tried something like this:

sudo ip -6 route add ff1e::/64 via fe80::1 dev tun0

It doesn't work.
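For reference, here are the standard things I can check on this VM (plain iproute2/sysctl commands, nothing Contiki-specific):

sysctl net.ipv6.conf.all.forwarding   # 1 = IPv6 forwarding enabled
ip -6 maddr show dev eth0             # multicast groups joined on eth0
ip -6 maddr show dev tun0             # multicast groups joined on tun0
ip -6 mroute show                     # multicast forwarding cache entries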

Edit:

I tried the suggestion below. My routing table now looks like this:

sudo ip -6 route
aaaa::/64 dev tun0  proto kernel  metric 256 
aa00::/8 via bbbb::2 dev eth0  metric 1024 
bbbb::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev tun0  proto kernel  metric 256 
ff1e::/64 via fe80::1 dev eth0  metric 1024 

Note that the second route is the one used by my VMs (which have addresses bbbb::3, bbbb::4, ...) to contact my nodes in the network (which have addresses like aaaa::11:22ff:fe33:4402), and this works. The nodes are connected to the bbbb::2 VM.

However, when a VM publishes to, say, ff1e:101:a::4, the eth0 interface on bbbb::2 sees the packet but still doesn't forward it to tun0. tun0 has an aaaa::1/64 global address, but the command "sudo ip -6 route add ff1e::/64 via aaaa::1 dev eth0" gives "RTNETLINK answers: No route to host". Adding a route for the full IPv6 multicast address ("sudo ip -6 route add ff1e:101:a::4/128 via fe80::1 dev eth0") does add the route but also produces no results.
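I'm also starting to wonder whether Linux will forward transit multicast from a plain routing-table entry at all, or whether a multicast routing daemon has to install the forwarding entries. As a sketch of what I might try (this assumes the smcroutectl front end shipped with recent versions of smcroute; I haven't verified it on my setup):

sudo apt-get install smcroute
sudo smcroutectl add eth0 ff1e:101:a::4 tun0   # forward this group from eth0 to tun0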

Second edit: adding my routing tables after the second suggestion:

looci@looci:~$ sudo ip -6 route add ff1e::/64 dev tun0 table local
looci@looci:~$ ip -6 route
aaaa::/64 dev tun0  proto kernel  metric 256 
aa00::/8 via bbbb::2 dev eth0  metric 1024 
bbbb::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev tun0  proto kernel  metric 256 
ff1e::/64 via fe80::1 dev eth0  metric 1024 
ff1e:101:a::4 via fe80::1 dev eth0  metric 1024 
looci@looci:~$ ip -6 route show table local
local ::1 via :: dev lo  proto none  metric 0 
local aaaa:: via :: dev lo  proto none  metric 0 
local aaaa::1 via :: dev lo  proto none  metric 0 
local bbbb:: via :: dev lo  proto none  metric 0 
local bbbb::2 via :: dev lo  proto none  metric 0 
local fe80:: via :: dev lo  proto none  metric 0 
local fe80:: via :: dev lo  proto none  metric 0 
local fe80::1 via :: dev lo  proto none  metric 0 
local fe80::250:56ff:fe24:dd45 via :: dev lo  proto none  metric 0 
ff1e::/64 dev tun0  metric 1024 
ff00::/8 dev eth0  metric 256 
ff00::/8 dev tun0  metric 256 
looci@looci:~$ ip -6 route show table main
aaaa::/64 dev tun0  proto kernel  metric 256 
aa00::/8 via bbbb::2 dev eth0  metric 1024 
bbbb::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev tun0  proto kernel  metric 256 
ff1e::/64 via fe80::1 dev eth0  metric 1024 
ff1e:101:a::4 via fe80::1 dev eth0  metric 1024 

1 Answer


I'd rather have written sudo ip -6 route add ff1e::/64 via fe80::1 dev eth0, which means that every packet received on eth0 destined for ff1e::/64 goes to the address fe80::1. But don't forget to enable IP forwarding in Linux with sudo sysctl -w net.ipv6.conf.all.forwarding=1.
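Put together, that gives (the route is the one from your setup; the sysctl is the standard Linux switch):

sudo sysctl -w net.ipv6.conf.all.forwarding=1        # enable IPv6 forwarding
sudo ip -6 route add ff1e::/64 via fe80::1 dev eth0  # send the multicast prefix towards the border router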

Furthermore, I guess that if you want to inject this multicast UDP packet into the 6LoWPAN network, you have to activate multicast forwarding with UIP_CONF_IPV6_MULTICAST in your rpl-border-router and choose one of the multicast engines with UIP_MCAST6_CONF_ENGINE.
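As a minimal sketch, this could go in the border router's project-conf.h (UIP_MCAST6_ENGINE_SMRF is one of the engines shipped with Contiki; check uip-mcast6-engines.h in your tree for the exact names):

/* project-conf.h -- sketch, adapt to your Contiki tree */
#define UIP_CONF_IPV6_MULTICAST 1
/* choose a multicast engine, e.g. SMRF */
#define UIP_MCAST6_CONF_ENGINE UIP_MCAST6_ENGINE_SMRF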

UPDATE:

Then you may try something like this:

ip -6 route add ff1e::/64 dev tun0 table local

The complete explanation can be found here.
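To check whether packets actually reach the tunnel, you can watch tun0 directly while another VM sends to the group (standard tcpdump/ping6 usage):

sudo tcpdump -i tun0 -n ip6   # on the bbbb::2 VM
ping6 -I eth0 ff1e::1         # from another VM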

  • Thank you, but it still doesn't work. I have checked the parameters and can multicast within the 6LoWPAN network, so that part is alright, I think. I've updated my main post with additional info. –  Apr 01 '15 at 07:58
  • OK, that's good at least. I've updated my post as well; I think it can help you further. – Darko P. Apr 01 '15 at 09:16
  • Hmmm, it still doesn't appear to work. I've added my routing tables to the main post. To clarify: tun0 sees no multicast packets. –  Apr 01 '15 at 09:26
  • It may then have something to do with the type of interface on the host. For example, I've managed to forward RS multicast packets from a node to an Ubuntu VM through a border router in transparent mode, connected through a bridge interface (eth0 + br0), not a tunnel. I don't know whether that matters here. – Darko P. Apr 01 '15 at 10:03
  • Do you mean the tun0 interface or eth0? Do the routing tables above look fine? I've gone through your link, and I think I see what the issue could be, but I can't resolve it. –  Apr 01 '15 at 10:19
  • I mean that tun0 should maybe be a tap0 interface: bridge eth0 with a tap via a br0 interface instead of using a tun. A tap sits at a lower level and operates on the link layer. – Darko P. Apr 01 '15 at 11:36
  • You may have a look at the 6lbr project [here](https://github.com/cetic/6lbr/wiki/6LBR-Modes) for a more flexible border-router. – Darko P. Apr 01 '15 at 11:57
  • Unfortunately, I can't use another router and don't know how to do what you propose. I have discovered, however, that my tunnel interface doesn't even receive multicast pings: on the VM with tun0 and eth0, ping6 ff1e::1 shows ICMP packets only on eth0! Is this significant? –  Apr 01 '15 at 14:41