Sunday, 29 March 2015

anycast


Anycast RP


Version History

Version Number | Date       | Notes
1              | 06/18/2001 | This document was created.
2              | 10/15/2001 | "Anycast RP Example" section was updated.
3              | 11/19/2001 | Figure 2 was updated.

IP multicast is deployed as an integral component in mission-critical networked applications throughout the world. These applications must be robust, hardened, and scalable to deliver the reliability that users demand.
Using Anycast RP is an implementation strategy that provides load sharing and redundancy in Protocol Independent Multicast sparse mode (PIM-SM) networks. Anycast RP allows two or more rendezvous points (RPs) to share the load for source registration and the ability to act as hot backup routers for each other. Multicast Source Discovery Protocol (MSDP) is the key protocol that makes Anycast RP possible.
The scope of this document is to explain the basic concept of MSDP and the theory behind Anycast RP. It also provides an example of how to deploy Anycast RP.
This document has the following sections:

Multicast Source Discovery Protocol Overview
Anycast RP Overview
Anycast RP Example
Related Documents

Multicast Source Discovery Protocol Overview

In the PIM sparse mode model, multicast sources and receivers must register with their local rendezvous point (RP). Actually, the router closest to a source or a receiver registers with the RP, but the key point to note is that the RP "knows" about all the sources and receivers for any particular group. RPs in other domains have no way of knowing about sources located in other domains. MSDP is an elegant way to solve this problem.
MSDP is a mechanism that allows RPs to share information about active sources. RPs know about the receivers in their local domain. When RPs in remote domains hear about the active sources, they can pass on that information to their local receivers. Multicast data can then be forwarded between the domains. A useful feature of MSDP is that it allows each domain to maintain an independent RP that does not rely on other domains, but it does enable RPs to forward traffic between domains. PIM-SM is used to forward the traffic between the multicast domains.
The RP in each domain establishes an MSDP peering session using a TCP connection with the RPs in other domains or with border routers leading to the other domains. When the RP learns about a new multicast source within its own domain (through the normal PIM register mechanism), the RP encapsulates the first data packet in a Source-Active (SA) message and sends the SA to all MSDP peers. Each receiving peer uses a modified Reverse Path Forwarding (RPF) check to forward the SA, until the SA reaches every MSDP router in the interconnected networks—theoretically the entire multicast internet. If the receiving MSDP peer is an RP, and the RP has a (*, G) entry for the group in the SA (there is an interested receiver), the RP creates (S, G) state for the source and joins to the shortest path tree for the source. The encapsulated data is decapsulated and forwarded down the shared tree of that RP. When the last hop router (the router closest to the receiver) receives the multicast packet, it may join the shortest path tree to the source. The MSDP speaker periodically sends SAs that include all sources within the domain of the RP. Figure 1 shows how data would flow between a source in domain A to a receiver in domain E.
MSDP was developed for peering between Internet service providers (ISPs). ISPs did not want to rely on an RP maintained by a competing ISP to provide service to their customers. MSDP allows each ISP to have its own local RP and still forward and receive multicast traffic to the Internet.
Figure 1 MSDP Example: MSDP Shares Source Information Between RPs in Each Domain

Anycast RP Overview

Anycast RP is a useful application of MSDP. Although MSDP was originally developed for interdomain multicast, its use for Anycast RP is an intradomain feature that provides redundancy and load-sharing capabilities. Enterprise customers typically use Anycast RP to configure a Protocol Independent Multicast sparse mode (PIM-SM) network that meets fault tolerance requirements within a single multicast domain.
In Anycast RP, two or more RPs are configured with the same IP address on loopback interfaces. The Anycast RP loopback address should be configured with a 32-bit mask, making it a host address. All the downstream routers should be configured to "know" that the Anycast RP loopback address is the IP address of their local RP. IP routing automatically will select the topologically closest RP for each source and receiver. Assuming that the sources are evenly spaced around the network, an equal number of sources will register with each RP. That is, the process of registering the sources will be shared equally by all the RPs in the network.
Because a source may register with one RP and receivers may join to a different RP, a method is needed for the RPs to exchange information about active sources. This information exchange is done with MSDP.
In Anycast RP, all the RPs are configured to be MSDP peers of each other. When a source registers with one RP, an SA message will be sent to the other RPs informing them that there is an active source for a particular multicast group. The result is that each RP will know about the active sources in the area of the other RPs. If any of the RPs were to fail, IP routing would converge and one of the RPs would become the active RP in more than one area. New sources would register with the backup RP. Receivers would join toward the new RP and connectivity would be maintained.
Note that the RP is normally needed only to start new sessions with sources and receivers. The RP facilitates the shared tree so that sources and receivers can directly establish a multicast data flow. If a multicast data flow is already directly established between a source and the receiver, then an RP failure will not affect that session. Anycast RP ensures that new sessions with sources and receivers can begin at any time.

Anycast RP Example

The defining property of an Anycast RP implementation is that the downstream multicast routers "see" just one RP address. The example given in Figure 2 shows how the loopback 0 interface of each RP (RP1 and RP2) is configured with the same 10.0.0.1 IP address. If this 10.0.0.1 address is configured on all RPs as the address of the loopback 0 interface and is then configured as the RP address, IP routing will converge on the closest RP. This address must be a host route: note the 255.255.255.255 subnet mask.
The downstream routers must be informed about the 10.0.0.1 RP address. In Figure 2, the routers are configured statically with the ip pim rp-address 10.0.0.1 global configuration command. This configuration could also be accomplished using the Auto-RP or bootstrap router (BSR) features.
The RPs in Figure 2 must also share source information using MSDP. In this example, the loopback 1 interface of the RPs (RP1 and RP2) is configured for MSDP peering. The MSDP peering address must be different from the Anycast RP address.
Figure 2 Anycast RP Configuration
Many routing protocols choose the highest IP address on loopback interfaces for the Router ID. A problem may arise if the router selects the Anycast RP address for the Router ID. We recommend that you avoid this problem by manually setting the Router ID on the RPs to the same address as the MSDP peering address (for example, the loopback 1 address in Figure 2). In Open Shortest Path First (OSPF), the Router ID is configured using the router-id router configuration command. In Border Gateway Protocol (BGP), the Router ID is configured using the bgp router-id router configuration command. In many BGP topologies, the MSDP peering address and the BGP peering address must be the same in order to pass the RPF check. The BGP peering address can be set using the neighbor update-source router configuration command.
The Anycast RP example in the previous paragraphs used IP addresses from RFC 1918. These IP addresses are normally blocked at interdomain borders and therefore are not accessible to other ISPs. You must use valid IP addresses if you want the RPs to be reachable from other domains.
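Putting the pieces of this example together, the configuration for RP1 might be sketched as follows. The 10.0.0.1 Anycast address comes from Figure 2; the 10.1.1.1/10.1.1.2 peering addresses and the OSPF process number are illustrative assumptions, not values from the figure.

```
! RP1 -- illustrative sketch, not a verified configuration
interface Loopback0
 description Anycast RP address (shared by RP1 and RP2)
 ip address 10.0.0.1 255.255.255.255
!
interface Loopback1
 description Unique MSDP peering address for RP1 (assumed value)
 ip address 10.1.1.1 255.255.255.255
!
! MSDP peering with RP2 (assumed peer address 10.1.1.2),
! sourced from the unique Loopback1 address
ip msdp peer 10.1.1.2 connect-source Loopback1
ip msdp originator-id Loopback1
!
! All routers, including the RPs, point at the shared RP address
ip pim rp-address 10.0.0.1
!
! Pin the Router ID to the unique address so that the Anycast
! address is not accidentally chosen as the Router ID
router ospf 1
 router-id 10.1.1.1
```

RP2 would mirror this configuration with its own unique Loopback1 address while sharing the same 10.0.0.1 Loopback0 address.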

Related Documents

IP Multicast Technology Overview, Cisco white paper
http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/mcst_sol/mcst_ovr.htm
Interdomain Multicast Solutions Using MSDP, Cisco integration solutions document
http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/mcst_p1/mcstmsdp/index.htm
Configuring a Rendezvous Point, Cisco white paper
http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/mcst_sol/rps.htm
"Configuring Multicast Source Discovery Protocol," Cisco IOS IP Configuration Guide, Release 12.2
http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fipr_c/ipcpt3/1cfmsdp.htm
"Multicast Source Discovery Protocol Commands," Cisco IOS IP Command Reference, Volume 3 of 3: Multicast, Release 12.2
http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fiprmc_r/1rfmsdp.htm

Saturday, 28 March 2015

PPPoE

PPPoE requires certain signals and information to establish, accept, control, and terminate a session. The basic signalling is shown below.

A PADI (PPPoE Active Discovery Initiation) broadcast signal is sent by the host to the remote devices.

A PADO (PPPoE Active Discovery Offer) signal is sent by the remote device back to the host.

A PADR (PPPoE Active Discovery Request) unicast signal is sent by the host to the remote device.

A PADS (PPPoE Active Discovery Session-Confirmation) is sent by the remote device back to the host.

A PADT (PPPoE Active Discovery Terminate) signal is sent to terminate a PPPoE session. It is the proper way to close a session, but it is not itself the cause of the termination; the cause may be a simple timeout, a manual request by either end, or an out-of-spec line condition.
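The discovery exchange above can be sketched in a few lines of Python. The code values come from RFC 2516 (PADI 0x09, PADO 0x07, PADR 0x19, PADS 0x65, PADT 0xa7); the helper below only builds the 6-byte PPPoE discovery header that precedes the TLV tags, as an illustration rather than a working client.

```python
import struct

# PPPoE discovery code values from RFC 2516
PADI, PADO, PADR, PADS, PADT = 0x09, 0x07, 0x19, 0x65, 0xA7

# EtherType 0x8863 carries discovery frames; 0x8864 carries session data
ETHERTYPE_DISCOVERY = 0x8863

def pppoe_header(code: int, session_id: int = 0, payload: bytes = b"") -> bytes:
    """Build the 6-byte PPPoE header -- ver/type (0x11), code,
    session ID, payload length -- followed by the payload."""
    return struct.pack("!BBHH", 0x11, code, session_id, len(payload)) + payload

# A PADI carries session ID 0; the PADS finally carries the assigned session ID
padi = pppoe_header(PADI)
pads = pppoe_header(PADS, session_id=0x1234)
```

Note that the session ID stays 0 throughout discovery and only becomes non-zero in the PADS, when the remote device assigns it.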

Thursday, 26 March 2015

netflow versions

Version 1 (V1) is the original format supported in the initial NetFlow releases.
Version 5 (V5) is an enhancement that adds Border Gateway Protocol (BGP) autonomous system information and flow sequence numbers.
Version 6 (V6) is similar to Version 7. This version is not used in newer IOS releases.
Version 7 (V7) is an enhancement that exclusively supports NetFlow with Cisco Catalyst 5000 series switches equipped with a NetFlow feature card (NFFC). V7 is not compatible with Cisco routers.
Version 8 (V8) is an enhancement that adds router-based aggregation schemes.
Version 9 is an enhancement to support different technologies such as Multicast, Internet Protocol Security (IPsec), and Multiprotocol Label Switching (MPLS).
Versions 2, 3, and 4 either were not released or are not supported.
In Versions 1, 5, 6, and 7, the datagram consists of a header and one or more flow records. The first field of the header contains the version number of the export datagram. Typically, a receiving application that accepts any of the format versions allocates a buffer large enough for the largest possible datagram from any of the format versions and then uses the header to determine how to interpret the datagram. The second field in the header contains the number of records in the datagram and should be used to search through the records.
We recommend that receiving applications perform a sanity check on datagrams to ensure that the datagrams are from a valid NetFlow source. You should first check the size of the datagram to verify that it is at least long enough to contain the version and count fields. You should next verify that the version is valid (1, 5, 6, 7, or 8) and that the number of received bytes is enough for the header and count flow records (using the appropriate version).
Because NetFlow export uses UDP to send export datagrams, it is possible for datagrams to be lost. To determine whether flow export information has been lost, the Version 5, 6, 7, and 8 headers contain a flow sequence number. The sequence number is equal to the sequence number of the previous datagram plus the number of flows in the previous datagram. After receiving a new datagram, the receiving application can subtract the expected sequence number from the sequence number in the header to derive the number of missed flows.
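The sanity check and sequence-number accounting described above can be sketched for the Version 5 format, whose header is 24 bytes and whose flow records are 48 bytes each. This is a minimal illustration of the logic, not a full collector.

```python
import struct

V5_HEADER_LEN = 24   # bytes in a Version 5 export header
V5_RECORD_LEN = 48   # bytes per Version 5 flow record

def sanity_check_v5(datagram: bytes) -> bool:
    """Return True if the datagram plausibly is a NetFlow V5 export:
    long enough for the version and count fields, correct version,
    and sized to hold exactly `count` flow records."""
    if len(datagram) < 4:          # need at least version + count
        return False
    version, count = struct.unpack_from("!HH", datagram, 0)
    if version != 5:
        return False
    return len(datagram) == V5_HEADER_LEN + count * V5_RECORD_LEN

def missed_flows(prev_seq: int, prev_count: int, new_seq: int) -> int:
    """The header sequence number should equal the previous sequence
    number plus the previous flow count; any surplus is lost flows."""
    expected = prev_seq + prev_count
    return new_seq - expected
```

For example, if the previous datagram carried sequence number 1000 with 30 flows and the next header shows sequence number 1040, then 10 flows were lost in transit.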
Datagram format Version 8 offers five router-based aggregation schemes allowing you to summarize export data on the router before the data is exported to the collector. The result is lower bandwidth requirements and reduced platform requirements for NetFlow data collection devices. Router-based aggregation enables on-router aggregation by maintaining one or more extra NetFlow caches with different combinations of fields that determine which traditional flows are grouped together. These extra caches are called aggregation caches. As flows expire from the main flow cache, they are added to each enabled aggregation cache. The normal flow ager process runs on each active aggregation cache the same way it runs on the main cache. On-demand aging is also supported.
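In Cisco IOS, an aggregation cache of this kind is enabled separately from the main cache. A sketch of a prefix-based aggregation cache follows; the timer value and the 10.0.0.100 collector address are illustrative assumptions.

```
! Maintain a prefix-based aggregation cache alongside the main cache
ip flow-aggregation cache prefix
 cache timeout active 5
 export destination 10.0.0.100 9996
 enabled
```

Flows expiring from the main cache are summarized into this cache and exported in the Version 8 (or Version 9) format.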

When to Select a Particular NetFlow Export Format

Version 9 - Select when you need to export data from various technologies, such as Multicast, DoS, IPv6, and BGP next hop. This format accommodates new NetFlow-supported technologies such as Multicast, MPLS, and BGP next hop. The Version 9 export format supports export from the main cache and from aggregation caches.

Version 8 - Select when you need to export data from aggregation caches. The Version 8 export format is available only for export from aggregation caches.

Version 5 - Select when you need to export data from the NetFlow main cache and you are not planning to support new features. The Version 5 export format does not support export from aggregation caches.

Version 1 - Select only when you need to export data to a legacy collection system that requires the Version 1 export format. Otherwise, use the Version 9 or Version 5 export format.

Tuesday, 24 March 2015

show ip mroute Field Descriptions



Examples
The following is sample output from the show ip mroute command for a router operating in dense mode. This command displays the contents of the IP multicast routing table for the multicast group named cbone-audio.
Router# show ip mroute cbone-audio

IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Timers: Uptime/Expires
Interface state: Interface, Next-Hop, State/Mode

(*, 224.0.255.1), uptime 0:57:31, expires 0:02:59, RP is 224.0.0.0, flags: DC
  Incoming interface: Null, RPF neighbor 224.0.0.0, Dvmrp
  Outgoing interface list:
    Ethernet0, Forward/Dense, 0:57:31/0:02:52
    Tunnel0, Forward/Dense, 0:56:55/0:01:28

(192.168.37.100/32, 224.0.255.1), uptime 20:20:00, expires 0:02:55, flags: C
  Incoming interface: Tunnel0, RPF neighbor 10.20.37.33, Dvmrp
  Outgoing interface list:
    Ethernet0, Forward/Dense, 20:20:00/0:02:52

The following is sample output from the show ip mroute command for a router operating in sparse mode:
Router# show ip mroute

IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set
Timers: Uptime/Expires
Interface state: Interface, Next-Hop, State/Mode

(*, 224.0.255.3), uptime 5:29:15, RP is 198.92.37.2, flags: SC
  Incoming interface: Tunnel0, RPF neighbor 10.3.35.1, Dvmrp
  Outgoing interface list:
    Ethernet0, Forward/Sparse, 5:29:15/0:02:57

(198.92.46.0/24, 224.0.255.3), uptime 5:29:15, expires 0:02:59, flags: C
  Incoming interface: Tunnel0, RPF neighbor 10.3.35.1
  Outgoing interface list:
    Ethernet0, Forward/Sparse, 5:29:15/0:02:57

The following is sample output from the show ip mroute command with the summary keyword:
Router# show ip mroute summary

IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
Timers: Uptime/Expires
Interface state: Interface, Next-Hop, State/Mode

(*, 224.255.255.255), 2d16h/00:02:30, RP 171.69.10.13, flags: SJPC

(*, 224.2.127.253), 00:58:18/00:02:00, RP 171.69.10.13, flags: SJC

(*, 224.1.127.255), 00:58:21/00:02:03, RP 171.69.10.13, flags: SJC

(*, 224.2.127.254), 2d16h/00:00:00, RP 171.69.10.13, flags: SJCL
  (128.9.160.67/32, 224.2.127.254), 00:02:46/00:00:12, flags: CLJT
  (129.48.244.217/32, 224.2.127.254), 00:02:15/00:00:40, flags: CLJT
  (130.207.8.33/32, 224.2.127.254), 00:00:25/00:02:32, flags: CLJT
  (131.243.2.62/32, 224.2.127.254), 00:00:51/00:02:03, flags: CLJT
  (140.173.8.3/32, 224.2.127.254), 00:00:26/00:02:33, flags: CLJT
  (171.69.60.189/32, 224.2.127.254), 00:03:47/00:00:46, flags: CLJT

The following is sample output from the show ip mroute command with the active keyword:
Router# show ip mroute active

Active IP Multicast Sources - sending >= 4 kbps

Group: 224.2.127.254, (sdr.cisco.com)
   Source: 146.137.28.69 (mbone.ipd.anl.gov)
     Rate: 1 pps/4 kbps(1sec), 4 kbps(last 1 secs), 4 kbps(life avg)

Group: 224.2.201.241, ACM 97
   Source: 130.129.52.160 (webcast3-e1.acm97.interop.net)
     Rate: 9 pps/93 kbps(1sec), 145 kbps(last 20 secs), 85 kbps(life avg)

Group: 224.2.207.215, ACM 97
   Source: 130.129.52.160 (webcast3-e1.acm97.interop.net)
     Rate: 3 pps/31 kbps(1sec), 63 kbps(last 19 secs), 65 kbps(life avg)

The following is sample output from the show ip mroute command when IP multicast Multilayer Switching (MLS) is configured. Note that the "H" flag indicates the outgoing interface is hardware switched.
Router# show ip mroute

IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT, H - Hardware
switched
Timers: Uptime/Expires

(*, 229.10.0.1), 00:04:35/00:02:59, RP 0.0.0.0, flags: DJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan6, Forward/Dense, 00:00:30/00:02:30
    Vlan5, Forward/Dense, 00:04:35/00:02:30
    Vlan2, Forward/Dense, 00:01:28/00:00:00

(192.0.2.20, 229.10.0.1), 00:04:35/00:02:27, flags: CT
  Incoming interface: Vlan2, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan5, Forward/Dense, 00:03:25/00:00:00, H
    Vlan6, Forward/Dense, 00:00:10/00:00:00, H
The table below describes the significant fields shown in the output.

Table: show ip mroute Field Descriptions
Field
Description
Flags:
Provides information about the entry.
D - Dense
Entry is operating in dense mode.
S - Sparse
Entry is operating in sparse mode.
C - Connected
A member of the multicast group is present on the directly connected interface.
L - Local
The router itself is a member of the multicast group.
P - Pruned
Route has been pruned. The Cisco IOS software keeps this information in case a downstream member wants to join the source.
R - RP-bit set
Indicates that the (S, G) entry is pointing toward the rendezvous point (RP). The RP is typically a prune state along the shared tree for a particular source.
F - Register flag
Indicates that the software is registering for a multicast source.
T - SPT-bit set
Indicates that packets have been received on the shortest path source tree.
H - Hardware switched
Indicates the outgoing interface is hardware switched because IP multicast MLS is enabled.
Timers:
Uptime/Expires.
Interface state:
Indicates the state of the incoming or outgoing interface.
Interface. Indicates the type and number of the interface listed in the incoming or outgoing interface list.
Next-Hop or VCD. "Next-hop" specifies the IP address of the downstream neighbor. "VCD" specifies the virtual circuit descriptor number. "VCD0" means the group is using the static map virtual circuit.
State/Mode. "State" indicates that packets will either be forwarded, pruned, or null on the interface depending on whether there are restrictions due to access lists or a time-to-live (TTL) threshold. "Mode" indicates whether the interface is operating in dense, sparse, or sparse-dense mode.
(*, 224.0.255.1)
(198.92.37.100/32, 224.0.255.1)
Entry in the IP multicast routing table. The entry consists of the IP address of the source router followed by the IP address of the multicast group. An asterisk (*) in place of the source router indicates all sources.
Entries in the first format are referred to as (*, G) or "star comma G" entries. Entries in the second format are referred to as (S, G) or "S comma G" entries. (*, G) entries are used to build (S, G) entries.
uptime
How long (in hours, minutes, and seconds) the entry has been in the IP multicast routing table.
expires
How long (in hours, minutes, and seconds) until the entry will be removed from the IP multicast routing table on the outgoing interface.
RP
Address of the rendezvous point router. For groups operating in dense mode, which do not use an RP, this address appears as 0.0.0.0 (shown as 224.0.0.0 in older output, as in the first example above).
flags:
Information about the entry.
Incoming interface:
Expected interface for a multicast packet from the source. If the packet is not received on this interface, it is discarded.
RPF neighbor
IP address of the upstream router to the source. "Tunneling" indicates that this router is sending data to the rendezvous point encapsulated in Register packets. The hexadecimal number in parentheses indicates to which rendezvous point it is registering. Each bit indicates a different rendezvous point if multiple rendezvous points per group are used.
Dvmrp or Mroute
Indicates whether the RPF information is obtained from the DVMRP routing table or the static mroutes configuration.
Outgoing interface list:
Interfaces through which packets will be forwarded. When the ip pim nbma-mode command is enabled on the interface, the IP address of the PIM neighbor is also displayed.
Ethernet0
Name and number of the outgoing interface.
Next hop or VCD
Next hop specifies the IP address of the downstream neighbors. VCD is the virtual circuit descriptor number. VCD0 means the group is using the static-map virtual circuit.
Forward/Dense
Indicates that packets will be forwarded on the interface if there are no restrictions due to access lists or TTL threshold. Following the slash (/) is the mode in which the interface is operating (dense or sparse).
Forward/Sparse
Sparse mode interface is in forward mode.
uptime/expiration time
Per interface, how long (in hours, minutes, and seconds) the entry has been in the IP multicast routing table. Following the slash (/) is how long (in hours, minutes, and seconds) until the entry will be removed from the IP multicast routing table.
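As a worked illustration of the (*, G) versus (S, G) distinction described in the table, a small classifier sketch follows. The regular expression is an assumption tailored to the sample output shown in this post, not a general mroute parser.

```python
import re

# Matches the leading "(source, group)" pair of an mroute entry,
# where the source is either "*" or an "a.b.c.d/len" prefix
ENTRY_RE = re.compile(r"^\((\*|[\d./]+), ([\d.]+)\)")

def classify(entry_line: str) -> str:
    """Return 'star-G' for shared-tree entries and 'S-G' for
    source-specific entries, based on the first field."""
    m = ENTRY_RE.match(entry_line.strip())
    if not m:
        raise ValueError("not an mroute entry: " + entry_line)
    return "star-G" if m.group(1) == "*" else "S-G"
```

Run against the sample entries above, "(*, 224.0.255.1), ..." classifies as a shared-tree entry and "(192.168.37.100/32, 224.0.255.1), ..." as a source tree entry.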