Thursday 14 April 2016

L2 MTU and Native VLAN on Brocade

Brocade Openstack VDX Plugin (Non AMPP)

This describes the provisioning of MTU and native VLANs on L2 interfaces using the Brocade VDX plugin (Non AMPP).
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: Create the bridge br1 on each node (controller and compute) and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1
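
As a quick sanity check, the bridge and port can be verified with ovs-vsctl (the output is abbreviated and indicative only):

sudo ovs-vsctl show
# Expect br1 to be listed with port eth1 attached.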

In this setup, virtual machines will be created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Refer to the setup of the Openstack plugin for L2 Non AMPP:

http://rmadapur.blogspot.in/2016/04/l2-non-ampp-brocade-vdx-plugin.html

Openstack Controller Configurations (L2 Non AMPP Setup)

Refer to the [ml2] configuration described in the L2 Non AMPP post:

http://rmadapur.blogspot.in/2016/04/l2-non-ampp-brocade-vdx-plugin.html

Additional configuration is needed to set up MTU and native VLANs.

The following additional configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, this file should be passed as a config parameter during neutron-server startup (see the sketch after the [topology] entries below).

[ml2]
segment_mtu = 2000
physical_network_mtus = physnet1:2000

[topology]
#connections=<host-name> : <physical network name>: <PORT-SPEED> <NOS PORT>
connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10
mtu = Te:135/0/10:2000,Te:136/0/10:2000
native_vlans = Te:135/0/10:20,Te:136/0/10:20

[topology] - entries

  • mtu is set to 2000 for both interfaces connected to the servers
  • native_vlans sets the native VLAN on each interface to 20
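
As noted above, if these settings are kept in ml2_conf_brocade.ini, the file must be supplied to neutron-server at startup. A minimal sketch, assuming neutron-server is started manually with the default config paths used in this post (adapt to your init scripts or devstack setup):

neutron-server --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  --config-file /etc/neutron/plugins/ml2/ml2_conf_brocade.ini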

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the ID of the created network; it will be used in the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~/devstack$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-15T05:41:13                  |
| description               |                                      |
| id                        | 21307c5c-b7e9-4bdc-a59c-1527e02080ff |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 2000                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 50                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | d310745c-2726-4b79-adac-39e76e8d9b29 |
| tags                      |                                      |
| tenant_id                 | 23b20c38f7f14c2a8be5073c198c5178     |
| updated_at                | 2016-04-15T05:41:13                  |
+---------------------------+--------------------------------------+
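
To avoid copying the ID by hand, it can be captured into a shell variable and reused in the nova boot commands below (a convenience sketch; the awk filter assumes the name GREEN_NETWORK is unique):

user@controller:~$ GREEN_ID=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}')
user@controller:~$ echo $GREEN_ID
21307c5c-b7e9-4bdc-a59c-1527e02080ff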

Check the availability zones. We will launch one VM on each of the servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2

VDX

The following L2 networking entries would be created on the VDX switches.

sw0# show running-config interface TenGigabitEthernet 135/0/10
interface TenGigabitEthernet 135/0/10
 mtu 2000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 50
 no switchport trunk tag native-vlan
 switchport trunk native-vlan 20
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0# show running-config interface TenGigabitEthernet 136/0/10
interface TenGigabitEthernet 136/0/10
 mtu 2000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 50
 no switchport trunk tag native-vlan
 switchport trunk native-vlan 20
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0#

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.
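
A quick way to verify: list the VM addresses with nova, then ping from one VM's console (reachable e.g. via Horizon or the nova console). The 10.0.0.4 address below is an assumption; use whatever address nova reports for VM2:

user@controller:~$ nova list
# From VM1's console:
$ ping -c 3 10.0.0.4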

Wednesday 13 April 2016

L2 AMPP Brocade VDX Plugin

Brocade Openstack VDX Plugin (AMPP)

This describes the setup of Openstack Plugins for Brocade VDX devices for L2 Networking with AMPP
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: Create the bridge br1 on each node (controller and compute) and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

In this setup, virtual machines will be created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Pre-requisites

Brocade plugins require a specific version of ncclient (a NETCONF client library). It can be obtained from the following GitHub location:

git clone https://github.com/brocade/ncclient
cd ncclient
sudo python setup.py install

Install Plugin

git clone https://github.com/openstack/networking-brocade.git --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: the --branch option can be omitted if the latest files (master branch) from the repository are required.
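
A quick way to confirm that both ncclient and the plugin installed into the Python environment used by neutron (a sanity check only; the import names follow the repositories' package layout):

python -c "import ncclient, networking_brocade; print('ok')"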

Upgrade the Database

Upgrade the database so that Brocade-specific table entries are created in the neutron database:

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
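
Optionally, the resulting migration state can be inspected with the same tool (a sanity check; the output format varies by release):

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current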

Openstack Controller Configurations (L2 AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to start the Brocade VDX mechanism driver (brocade_vdx_ampp).

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_ampp
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:500
[ovs]
bridge_mappings = physnet1:br1

Here,

  • The mechanism driver needs to be set to ‘brocade_vdx_ampp’ along with openvswitch.
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used.

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, this file should be passed as a config parameter during neutron-server startup.

[ml2_brocade]
username = admin 
password = password 
address  = 10.37.18.139
ostype   = NOS 
physical_networks = physnet1 
osversion=5.0.0
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

Here,
[ml2_brocade] - entries

  • 10.37.18.139 is the VCS virtual IP (IP for the L2 fabric).
  • osversion - NOS version on the L2 fabric.
  • nretries - number of times a NETCONF operation to the switch will be retried in case of failure.
  • ndelay - time delay in seconds between successive NETCONF retries in case of failure.
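
Since the mechanism driver talks NETCONF to the VCS virtual IP, connectivity can be spot-checked from the controller before starting neutron-server. This assumes NETCONF over SSH is enabled on the fabric on the default port 830:

ssh -p 830 -s admin@10.37.18.139 netconf
# A NETCONF <hello> message from the switch indicates the session is reachable; exit with Ctrl-C.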

Openstack Compute Configurations (L2 AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the openvswitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

[ovs]
bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan

Here,

  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used.
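
After editing the file, restart the openvswitch agent on each node so the settings take effect. Service names vary by distribution; the ones below are common packaged names and are assumptions for this setup:

sudo service neutron-plugin-openvswitch-agent restart      # Ubuntu/Debian
# sudo systemctl restart neutron-openvswitch-agent         # RHEL/CentOS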

VDX Configurations

Put all the interfaces connected to the compute nodes into port-profile mode; this is a one-time configuration (Te 135/0/10 and Te 136/0/10 in the topology above).

sw0(config)#  interface TenGigabitEthernet 135/0/10
sw0(conf-if-te-135/0/10)# port-profile-port
sw0(config)#  interface TenGigabitEthernet 136/0/10
sw0(conf-if-te-136/0/10)# port-profile-port

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the ID of the created network; it will be used in the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:38:45                  |
| description               |                                      |
| id                        | d5c94db7-9040-481c-b33c-252618fb71f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 12                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 1217d77d-2638-4c5c-9777-f5cd4f4e5045 |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:38:45                  |
+---------------------------+--------------------------------------+

Check the availability zones. We will launch one VM on each of the servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2

VDX

The following L2 networking entries would be created on the VDX switches.


sw0(conf-if-te-136/0/10)# do show port-profile status
Port-Profile              PPID   Activated   Associated MAC   Interface
UpgradedVlanProfile       1      No          None             None
openstack-profile-12      2      Yes         fa16.3ecb.2fab   Te 135/0/10
                                             fa16.3ee4.b736   Te 136/0/10

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.

Tuesday 12 April 2016

L2-NON AMPP Brocade VDX Plugin

Brocade Openstack VDX Plugin (Non AMPP)

This describes the setup of Openstack Plugins for Brocade VDX devices for L2 Networking (Non AMPP)
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: Create the bridge br1 on each node (controller and compute) and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

In this setup, virtual machines will be created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Pre-requisites

Brocade plugins require a specific version of ncclient (a NETCONF client library). It can be obtained from the following GitHub location:

git clone https://github.com/brocade/ncclient
cd ncclient
sudo python setup.py install

Install Plugin

git clone https://github.com/openstack/networking-brocade.git --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: the --branch option can be omitted if the latest files (master branch) from the repository are required.

Upgrade the Database

Upgrade the database so that Brocade-specific table entries are created in the neutron database:

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

Openstack Controller Configurations (L2 Non AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to start the Brocade VDX mechanism driver (brocade_vdx_vlan).

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_vlan
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:500
[ovs]
bridge_mappings = physnet1:br1

Here,

  • The mechanism driver needs to be set to ‘brocade_vdx_vlan’ along with openvswitch.
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used.

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, this file should be passed as a config parameter during neutron-server startup.

[ml2_brocade]
username = admin 
password = password 
address  = 10.37.18.139
ostype   = NOS 
physical_networks = physnet1 
osversion=5.0.0
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

[topology]
#connections=<host-name> : <physical network name>: <PORT-SPEED> <NOS PORT>
connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10

Here,
[ml2_brocade] - entries

  • 10.37.18.139 is the VCS virtual IP (IP for the L2 fabric).
  • osversion - NOS version on the L2 fabric.
  • nretries - number of times a NETCONF operation to the switch will be retried in case of failure.
  • ndelay - time delay in seconds between successive NETCONF retries in case of failure.

[topology] - entries

  • Here the physical connectivity between the NIC/physnet (host side) and the switch interface is provided, one entry per host (see the example below).
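
For example, a third host could be wired in by extending the same line; the host name compute2 and port Te 137/0/10 are assumptions used only for illustration:

connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10, compute2:physnet1:Te:137/0/10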

Openstack Compute Configurations (L2 Non AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the openvswitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

[ovs]
bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan

Here,

  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used.

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the ID of the created network; it will be used in the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:38:45                  |
| description               |                                      |
| id                        | d5c94db7-9040-481c-b33c-252618fb71f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 12                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 1217d77d-2638-4c5c-9777-f5cd4f4e5045 |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:38:45                  |
+---------------------------+--------------------------------------+

Check the availability zones. We will launch one VM on each of the servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2

VDX

The following L2 networking entries would be created on the VDX switches.

sw0# show running-config interface TenGigabitEthernet 135/0/10
interface TenGigabitEthernet 135/0/10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 12
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0# show running-config interface TenGigabitEthernet 136/0/10
interface TenGigabitEthernet 136/0/10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 12
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.

SVI/L3 Networking Brocade VDX Plugin


Brocade Openstack VDX Plugin SVI(L3) Networking

This describes the setup of Openstack Plugins for Brocade VDX devices for L3/SVI networking
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin

Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: Create the bridge br1 on each node (controller and compute) and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

There are two networks, GREEN (10.0.0.0/24) and RED (9.0.0.0/24). Virtual machines are created on both of these networks on each of the hosts. In this setup, we establish routing across the two networks using the Brocade L3/SVI plugin.

Setup of Openstack Plugin

L3/SVI networking can be set up on top of either L2 with AMPP support or L2 without AMPP support.
Please refer to the L2 networking setup guides.

Openstack Configurations (L3/SVI Setup)

Add the following line to ‘/etc/neutron/neutron.conf’ to enable the Brocade SVI plugin:

service_plugins = networking_brocade.vdx.non_ampp.ml2driver.l3_router_plugin.BrocadeSVIPlugin

The following additional configuration lines need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, this file should be passed as a config parameter during neutron-server startup.

[svi]
#List of rbridges on which Virtual Routing instances will be created
rbridge_ids=135,136
is_vrf_required = True
#enable L3 redundancy if needed; by default redundancy is disabled
redundancy=enabled
vrrp_version = 2
vrrp_group_id = 100
vrrp_advertisement_interval = 5

Here,
[svi] - entries

  • rbridge_ids - list of rbridges on which virtual routing instances will be created.
  • redundancy - set to enabled if virtual routing instances have to be created on multiple rbridges.

Note: the [ml2_brocade] configuration lines need to be added as per the L2 setup (AMPP or Non AMPP); see the combined sketch below.
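
Putting it together, one possible layout of ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ for SVI on top of the Non AMPP L2 setup, with values taken from the L2 posts (adjust them to your fabric):

[ml2_brocade]
username = admin
password = password
address  = 10.37.18.139
ostype   = NOS
physical_networks = physnet1
osversion = 5.0.0
initialize_vcs = True

[topology]
connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10

[svi]
rbridge_ids = 135,136
is_vrf_required = True
redundancy = enabled
vrrp_version = 2
vrrp_group_id = 100
vrrp_advertisement_interval = 5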

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the ID of the created network; it will be used in the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:38:45                  |
| description               |                                      |
| id                        | d5c94db7-9040-481c-b33c-252618fb71f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 12                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 1217d77d-2638-4c5c-9777-f5cd4f4e5045 |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:38:45                  |
+---------------------------+--------------------------------------+

Create a RED network (9.0.0.0/24) using the neutron CLI. Note down the ID of the created network; it will be used in the subsequent nova boot commands.

user@controller:~$ neutron net-create RED_NETWORK
user@controller:~$ neutron subnet-create RED_NETWORK 9.0.0.0/24 --name RED_SUBNET --gateway=9.0.0.1
user@controller:~$ neutron net-show RED_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:39:53                  |
| description               |                                      |
| id                        | c994f6a1-5629-4617-b2af-64be34a744ec |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | RED_NETWORK                          |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 33                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 392fd70e-0c04-44db-8e4f-d5d6f4e1c09b |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:39:53                  |
+---------------------------+--------------------------------------+

Check the availability zones. We will launch one VM on each of the servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2

Boot VM3 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/RED_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM3

Boot VM4 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/RED_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM4

Create a Router

Create a router instance with interfaces on both networks (GREEN_NETWORK and RED_NETWORK):

neutron router-create demo-router
neutron router-interface-add demo-router GREEN_SUBNET
neutron router-interface-add demo-router RED_SUBNET
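
Optionally, verify that the router has an interface on each subnet (standard neutron CLI; the IDs in the output will differ):

neutron router-show demo-router
neutron router-port-list demo-router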

VDX

The following routing instances would have been created on the VDX.

sw0# show running-config rbridge-id 135 vrf
rbridge-id 135
 vrf openstack-vrf-b076cce6-299d-4499
  rd 0766:0766
  address-family ipv4 unicast
  !
 !
 vrf test
  rd 10:10
 !
!

sw0# show running-config rbridge-id 135 interface ve
rbridge-id 135
 interface Ve 12
  vrf forwarding openstack-vrf-b076cce6-299d-4499
  ip proxy-arp
  ip address 10.0.0.5/24
  vrrp-group 100 version 2
   virtual-ip 10.0.0.1
   advertisement-interval 5
   enable
   preempt-mode
   priority 1
  !
  no shutdown
 !
 interface Ve 33
  vrf forwarding openstack-vrf-b076cce6-299d-4499
  ip proxy-arp
  ip address 9.0.0.6/24
  vrrp-group 100 version 2
   virtual-ip 9.0.0.1
   advertisement-interval 5
   enable
   preempt-mode
   priority 1
  !
  no shutdown
 !
!

Ping between Virtual Machines across Networks

We should now be able to ping between Virtual Machines across Networks (GREEN_Network and RED_NETWORK)
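
A quick check from a VM console: a VM on the GREEN network should reach its own gateway, the RED gateway, and a VM on the RED network. The 9.0.0.4 address is an assumption; use whatever address nova reports for VM3/VM4:

# From VM1's console (GREEN network):
$ ping -c 3 10.0.0.1      # GREEN gateway (virtual-ip on Ve 12)
$ ping -c 3 9.0.0.1       # RED gateway (virtual-ip on Ve 33)
$ ping -c 3 9.0.0.4       # VM on the RED network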

Sunday 10 April 2016

Brocade Openstack VDX Plugin AMPP

Brocade Openstack VDX Plugin (AMPP)

This describes the setup of Openstack Plugins for Brocade VDX devices.



Fig 1. Setup of VDX Fabric with Compute Nodes
The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

eth1 on the compute node is connected to a VDX interface (e.g. Te 135/0/10), and eth1 is part of the OVS bridge br1.

Note: Create the bridge br1 on the compute node and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1