Monday 14 November 2016

Decorator Pattern

Decoration means putting things on or around an entity to make it more appealing. The Decorator design pattern similarly adds functionality to an object at runtime without modifying the object itself.

Let’s consider a Juice Center where different kinds of fruit juice are available, and the price depends on the kind of fruit used. Over time the vendor realizes that by adding toppings such as honey or dry fruits he can charge more for the same juice.


interface FruitJuice {
    void prepare();
    int getPrice();
}

class AppleJuice implements FruitJuice {

    @Override
    public void prepare() {
        System.out.println("Prepare Apple Juice");

    }
    @Override
    public int getPrice() {

        return 10;
    }
}

class OrangeJuice implements FruitJuice {
    @Override
    public void prepare() {
        System.out.println("Prepare Orange Juice");
    }
    @Override
    public int getPrice() {
        return 10;
    }
}

Orange juice and apple juice are the two kinds of juice available at the Juice Center.

abstract class JuiceDecorator implements FruitJuice {

    protected FruitJuice juice;

    JuiceDecorator(FruitJuice fruitJuice) {
        this.juice = fruitJuice;
    }

    public void prepare() {
        this.juice.prepare();
        this.decorate();

    }

    public int getPrice() {

        return juice.getPrice() + this.addCost();
    }

    abstract void decorate();

    abstract int addCost();

}

class HoneyDecorator extends JuiceDecorator {

    HoneyDecorator(FruitJuice fruitJuice) {
        super(fruitJuice);
    }

    @Override
    void decorate() {
        System.out.println("Decorate with Honey");

    }

    @Override
    int addCost() {

        return 5;
    }

}

class DryFruitDecorator extends JuiceDecorator {

    DryFruitDecorator(FruitJuice fruitJuice) {
        super(fruitJuice);
    }

    @Override
    void decorate() {
        System.out.println("Decorate with Dry Fruits");

    }

    @Override
    int addCost() {

        return 30;
    }

}

Two kinds of decorators are added; each extends the abstract JuiceDecorator class and implements the methods decorate() and addCost(), which add the appropriate topping and increase the cost according to the topping added.


public class DecoratorDemo {

    public static void main(String[] args) {
        FruitJuice appleJuice = new AppleJuice();
        appleJuice.prepare();
        System.out.println("Apple Juice Price : " + appleJuice.getPrice());

        FruitJuice orangeJuice = new OrangeJuice();
        orangeJuice.prepare();
        System.out.println("Orange Juice Price : " + orangeJuice.getPrice());

        FruitJuice honeyDecoratedAppleJuice = new HoneyDecorator(appleJuice);
        honeyDecoratedAppleJuice.prepare();
        System.out.println("Apple Juice with Honey Price : "
                + honeyDecoratedAppleJuice.getPrice());

        FruitJuice dryFruitDecoratedOrangeJuice = new DryFruitDecorator(
                orangeJuice);
        dryFruitDecoratedOrangeJuice.prepare();
        System.out.println("Orange Juice with Dry Fruits Price : "
                + dryFruitDecoratedOrangeJuice.getPrice());

    }

}

Output

Prepare Apple Juice
Apple Juice Price : 10
Prepare Orange Juice
Orange Juice Price : 10
Prepare Apple Juice
Decorate with Honey
Apple Juice with Honey Price : 15
Prepare Orange Juice
Decorate with Dry Fruits
Orange Juice with Dry Fruits Price : 40
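
Since each decorator itself implements FruitJuice, decorators can also be stacked on top of one another. As a small sketch, assuming the classes above, the following lines could be appended to the main() method of DecoratorDemo to top an apple juice with both dry fruits and honey:

        FruitJuice loadedAppleJuice =
                new HoneyDecorator(new DryFruitDecorator(new AppleJuice()));
        loadedAppleJuice.prepare();
        System.out.println("Apple Juice with Dry Fruits and Honey Price : "
                + loadedAppleJuice.getPrice());

This prints the preparation step followed by both decoration steps, and the price works out to 10 + 30 + 5 = 45.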

Friday 11 November 2016

Iterator Pattern

Iterator Pattern - Behavioral

The Iterator pattern is primarily used when you want to traverse a collection in a particular order. It separates the traversal algorithm from the data structure that represents the collection.
Let’s try to understand this with an example. We have a collection that stores all the US Presidents up to 2016. Suppose we need to traverse this collection in chronological order (the order in which they held the Presidency) and also in alphabetical order (by name).
Instead of putting all the traversal logic into the data model (the collection), we make use of the Iterator pattern.
interface Iterator {
    String next();
    boolean hasNext();
}
Define an interface (Iterator) that provides two methods:
- next() - returns the next name in the collection
- hasNext() - returns true if there are more elements.
We create two Iterators implementing the above interface, one for each kind of traversal.
interface CollectionInterface {
    Iterator getChronologicalIterator();
    Iterator getAlphabeticalIterator();
}

class Collection implements CollectionInterface {
    String[] listOfUSPresidents = { "George Washington", "John Adams", "Thomas Jefferson", "James Madison",
            "James Monroe", "John Quincy Adams", "Andrew Jackson", "Martin Van Buren ", "William Henry Harrison",
            "John Tyler", "James K. Polk", "Zachary Taylor", "Millard Fillmore", "Franklin Pierce", "James Buchanan",
            "Abraham Lincoln", "Andrew Johnson", "Ulysses S. Grant", "Rutherford B. Hayes", "James A. Garfield",
            "Chester A. Arthur", "Grover Cleveland", "Benjamin Harrison", "Grover Cleveland", "William McKinley",
            "Theodore Roosevelt", "William Howard Taft", "Woodrow Wilson", "Warren G. Harding", "Calvin Coolidge",
            "Herbert Hoover", "Franklin Roosevelt", "Harry S. Truman", "Dwight D. Eisenhower", "John F. Kennedy",
            "Lyndon B. Johnson", "Richard M. Nixon", "Gerald Ford", "Jimmy Carter", "Ronald Reagan", "George Bush",
            "Bill Clinton", "George W. Bush", "Barack Obama", };

    @Override
    public Iterator getChronologicalIterator() {
        return new Iterator() {
            int index = 0;

            @Override
            public String next() {
                if (this.hasNext()) {
                    return listOfUSPresidents[index++];
                }
                return null;
            }

            @Override
            public boolean hasNext() {
                return index < listOfUSPresidents.length;
            }
        };
    }


    @Override
    public Iterator getAlphabeticalIterator() {
        return new Iterator() {
            int index = 0;
            String[] sortedList = getSortedList(listOfUSPresidents);

            @Override
            public String next() {
                if (this.hasNext()) {
                    return sortedList[index++];
                }
                return null;
            }

            @Override
            public boolean hasNext() {
                return index < sortedList.length;
            }

            String[] getSortedList(String[] names) {
                // Copy before sorting so the original (chronological) order is not disturbed.
                String[] sortedList = names.clone();
                Arrays.sort(sortedList);
                return sortedList;
            }
        };
    }
}
The Collection stores the list of Presidents, and we have two implementations of the Iterator: one representing the chronological order (the same order as the model), while the other internally sorts the list of names to provide alphabetical order.

Key Points

  1. Traverse a collection of objects (the model) without embedding the traversal logic inside the collection object itself.
  2. Provide multiple strategies/algorithms for traversing the same collection/model.
public class IteratorDemo {

    public static void main(String[] args) {
        Collection collection = new Collection();
        // Chronological Iterator
        System.out.println("List of Presidents in Chronological Order");
        Iterator chronoIterator = collection.getChronologicalIterator();
        while (chronoIterator.hasNext()) {
            System.out.println(chronoIterator.next());
        }

        // Alphabetical Iterator
        System.out.println("List of Presidents in Alphabetical Order");
        Iterator alphabeticalIterator = collection.getAlphabeticalIterator();
        while (alphabeticalIterator.hasNext()) {
            System.out.println(alphabeticalIterator.next());
        }
    }

}

Output : List of US Presidents

List of Presidents in Chronological Order
George Washington
John Adams
Thomas Jefferson
James Madison
James Monroe
John Quincy Adams
Andrew Jackson
Martin Van Buren 
William Henry Harrison
John Tyler
James K. Polk
Zachary Taylor
Millard Fillmore
Franklin Pierce
James Buchanan
Abraham Lincoln
Andrew Johnson
Ulysses S. Grant
Rutherford B. Hayes
James A. Garfield
Chester A. Arthur
Grover Cleveland
Benjamin Harrison
Grover Cleveland
William McKinley
Theodore Roosevelt
William Howard Taft
Woodrow Wilson
Warren G. Harding
Calvin Coolidge
Herbert Hoover
Franklin Roosevelt
Harry S. Truman
Dwight D. Eisenhower
John F. Kennedy
Lyndon B. Johnson
Richard M. Nixon
Gerald Ford
Jimmy Carter
Ronald Reagan
George Bush
Bill Clinton
George W. Bush
Barack Obama

List of Presidents in Alphabetical Order
Abraham Lincoln
Andrew Jackson
Andrew Johnson
Barack Obama
Benjamin Harrison
Bill Clinton
Calvin Coolidge
Chester A. Arthur
Dwight D. Eisenhower
Franklin Pierce
Franklin Roosevelt
George Bush
George W. Bush
George Washington
Gerald Ford
Grover Cleveland
Grover Cleveland
Harry S. Truman
Herbert Hoover
James A. Garfield
James Buchanan
James K. Polk
James Madison
James Monroe
Jimmy Carter
John Adams
John F. Kennedy
John Quincy Adams
John Tyler
Lyndon B. Johnson
Martin Van Buren 
Millard Fillmore
Richard M. Nixon
Ronald Reagan
Rutherford B. Hayes
Theodore Roosevelt
Thomas Jefferson
Ulysses S. Grant
Warren G. Harding
William Henry Harrison
William Howard Taft
William McKinley
Woodrow Wilson
Zachary Taylor
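
Adding another traversal strategy only means adding another Iterator; the client loop stays the same. A minimal sketch, assuming the Collection class above, of a hypothetical getReverseChronologicalIterator() method (not part of CollectionInterface) that could be added to Collection to walk the list from the most recent President backwards:

    public Iterator getReverseChronologicalIterator() {
        return new Iterator() {
            int index = listOfUSPresidents.length - 1;

            @Override
            public String next() {
                if (this.hasNext()) {
                    return listOfUSPresidents[index--];
                }
                return null;
            }

            @Override
            public boolean hasNext() {
                return index >= 0;
            }
        };
    }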

Tuesday 8 November 2016

Countdown Latch

CountDownLatch is another synchronization mechanism available in Java, although it may at first appear very similar to CyclicBarrier.
A countdown latch works much like a multi-lever latch lock: access through the lock is possible only when all the levers have been exercised (counted down).
The Java CountDownLatch works on the same principle: one or more threads can be made to wait on the latch using the await() method, and the latch opens only when all the count downs have been performed on it. The number of count downs required to open the latch is specified when the CountDownLatch is created.
A few notable differences from CyclicBarrier:
  • A single thread can perform all the count downs, if required, as it completes its various operations (see the sketch after this list).
  • With CyclicBarrier, distinct threads have to arrive and wait at the barrier for it to be crossed.
  • Threads performing the count down do not wait at the latch afterwards.
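As a minimal, self-contained sketch of the first point (the class name SingleWorkerLatch and the printed messages are just for illustration), one worker thread performs both count downs while the main thread waits on the latch:

import java.util.concurrent.CountDownLatch;

public class SingleWorkerLatch {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);

        new Thread(() -> {
            System.out.println("Step 1 done");
            latch.countDown();   // first count down
            System.out.println("Step 2 done");
            latch.countDown();   // second count down, the latch opens
        }).start();

        latch.await();           // main thread blocks until both count downs happen
        System.out.println("Both steps completed");
    }
}
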
Create a CountDownLatch with the number of countdowns
CountDownLatch latch = new CountDownLatch(2);
We will create a simple program that performs addition of two 2x2 matrices. To achieve parallelism, each row of the matrix is worked on by a separate thread.
class Matrix {
    CountDownLatch latch;
    ExecutorService executorService;
    int[][] result;
    Matrix() {
        this.latch = new CountDownLatch(2);
        executorService = Executors.newFixedThreadPool(2);
    }
}
Here,
  • the latch has been initialized with a count of two
  • the executor service is created with a fixed thread pool of two threads
int[][] add(int[][] matrix_a, int[][] matrix_b) {
        int[][] result = new int[matrix_a.length][matrix_a[0].length];

        class AddWorker implements Runnable {
            int row_number;

            AddWorker(int row_number) {
                this.row_number = row_number;
            }

            @Override
            public void run() {
                for (int i = 0; i < matrix_a[row_number].length; i++) {
                    result[row_number][i] = matrix_a[row_number][i]
                            + matrix_b[row_number][i];
                }
                latch.countDown();
            }
        }

        executorService.submit(new AddWorker(0));
        executorService.submit(new AddWorker(1));

        try {
            System.out.println("Waiting for Matrix Addition");
            latch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        executorService.shutdown();
        return result;
    }
}
The add() method creates two AddWorker instances. Each instance performs the addition for one row of the matrix, determined by the row_number passed to its constructor.
The add() method then calls latch.await(), which causes it to block; it is the only thread waiting at the latch.
The latch opens once both worker threads have counted down on it, since two count downs are required.
public class CountDownLatchDemo {

    public static void main(String[] args) {
        int[][] matrix_a = { { 1, 1 }, { 1, 1 } };
        int[][] matrix_b = { { 2, 2 }, { 3, 2 } };

        Matrix _matrix = new Matrix();
        int[][] result = _matrix.add(matrix_a, matrix_b);
        printMatrix(result);

    }

    public static void printMatrix(int[][] x) {
        for (int[] res : x) {
            for (int val : res) {
                System.out.print(val + ", ");
            }
            System.out.println(" ");
        }
    }

}
The key differences from a similar application written using CyclicBarrier:
  • Worker threads do not wait after performing the count down; only threads that call await() on the latch wait.
  • The latch is initialized with the number of count down operations, not with the number of threads required to wait on it. A CyclicBarrier, in contrast, is initialized with the number of threads that must wait at the barrier.
Output from the Demo application
Waiting for Matrix Addition
3, 3,  
4, 3,  

Cyclic Barrier

CyclicBarrier is one of the synchronization mechanisms available in Java. Think of it as a barrier in the literal sense, which requires a fixed number of parties to arrive before it can be crossed.
The line below creates a barrier that requires three threads to cross it.
CyclicBarrier barrier = new CyclicBarrier(3);
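Before the matrix example, here is a minimal sketch of three threads arriving at such a barrier at different times and crossing it together (the class name BarrierSketch and the printed messages are just for illustration):

import java.util.concurrent.CyclicBarrier;

public class BarrierSketch {

    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(3);

        Runnable party = () -> {
            try {
                System.out.println(Thread.currentThread().getName() + " waiting at the barrier");
                barrier.await();   // blocks until all three parties have arrived
                System.out.println(Thread.currentThread().getName() + " crossed the barrier");
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        new Thread(party).start();
        new Thread(party).start();
        new Thread(party).start(); // the third arrival releases all three threads
    }
}
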
We will create a simple program that performs addition of two 2x2 matrices. To achieve parallelism, each row of the matrix is worked on by a separate thread.
class Matrix {
    CyclicBarrier barrier;
    ExecutorService executorService;
    int[][] result;
    Matrix() {
        this.barrier = new CyclicBarrier(3);
        executorService = Executors.newFixedThreadPool(2);
    }
}
Here,
- the barrier waits for three threads to arrive
- the ExecutorService uses a fixed thread pool to schedule the threads
- int[][] result stores the execution result of a matrix operation
int[][] add(int[][] matrix_a, int[][] matrix_b) {
        class AddWorker implements Runnable {
            int row_number;
            AddWorker(int row_number) {
                this.row_number = row_number;
            }
            @Override
            public void run() {
                for (int i = 0; i < matrix_a[row_number].length; i++) {
                    result[row_number][i] = matrix_a[row_number][i] + matrix_b[row_number][i];
                }
                try {
                    System.out.println("Completed for Row  " + row_number);
                    barrier.await();
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }
        }
        result = new int[matrix_a.length][matrix_a[0].length];
        executorService.submit(new AddWorker(0));
        executorService.submit(new AddWorker(1));
        try {
            System.out.println("Waiting for Matrix Addition");
            barrier.await();
        } catch (InterruptedException | BrokenBarrierException e) {
            e.printStackTrace();
        }
        return result;
    }
The add() method creates two AddWorker instances. Each instance performs the addition for one row of the matrix, determined by the row_number passed to its constructor.
The add() method then calls barrier.await(), which causes it to block as it is the only thread currently waiting at the barrier.
The barrier is crossed only when both AddWorker instances also call barrier.await().
public class CyclicBarrierDemo {
    public static void main(String[] args) {

        int[][] matrix_a = { { 1, 1 }, { 1, 1 } };
        int[][] matrix_b = { { 2, 2 }, { 3, 2 } };

        Matrix _matrix = new Matrix();
        int[][] result = _matrix.add(matrix_a, matrix_b);
        printMatrix(result);

        int[][] matrix_c = { { 10, 10 }, { 12, 11 } };
        int[][] matrix_d = { { 22, 22 }, { 13, 12 } };

        result = _matrix.add(matrix_c, matrix_d);
        printMatrix(result);
    }

    public static void printMatrix(int[][] x) {
        for (int[] res : x) {
            for (int val : res) {
                System.out.print(val + ", ");
            }
            System.out.println(" ");
        }
    }
}
The demo application performs addition of two matrices using the add() method.
Since barriers can be reused, we can call the add() method multiple times without having to reset the synchronizer.

Output of the Matrix Operation.


Waiting for Matrix Addition
Completed for Row  0
Completed for Row  1
3, 3,  
4, 3,  
Waiting for Matrix Addition
Completed for Row  0
Completed for Row  1
32, 32,  
25, 23,  

Thursday 12 May 2016

Openstack Docker Integration with VDX

Openstack Kuryr (Docker) Integration with Brocade VDX (AMPP)

OpenStack Kuryr is integrated with Neutron. Kuryr provides a remote driver as per the Docker Container Networking Model (CNM), and the Kuryr driver translates libnetwork callbacks into the appropriate Neutron calls.

Here we showcase the integration of Kuryr with a Brocade VDX device.


There are two hosts, controller (10.37.18.158) and compute (10.37.18.157), which are part of the Docker swarm; these hosts also function as OpenStack nodes.
They are connected to the VDX fabric on interfaces Te 135/0/10 and Te 136/0/10 respectively.

Docker Swarm

The Docker swarm is set up with two nodes, controller (10.37.18.158) and compute (10.37.18.157), as seen from the docker_swarm info output.

root@controller:~# docker_swarm info
Nodes: 2
 compute: 10.37.18.157:2375
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 12.31 GiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-12T11:37:31Z
  └ ServerVersion: 1.12.0-dev
 controller: 10.37.18.158:2375
  └ Status: Healthy
  └ Containers: 4
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 16.44 GiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-12T11:37:23Z
  └ ServerVersion: 1.12.0-dev

Setup of Openstack Plugin

Pre-requisites

The Brocade plugins require a specific version of ncclient (a NETCONF client library). It can be obtained from the following GitHub location.

git clone https://github.com/brocade/ncclient
cd ncclient
sudo python setup.py install

Install Brocade Plugin

git clone https://github.com/openstack/networking-brocade.git --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: the --branch argument is optional; omit it if the latest files (master branch) from the repository are required.

Upgrade the Database

Upgrade the database so that the Brocade-specific table entries are created in the Neutron database.

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

Openstack Controller Configurations (L2 AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to start the Brocade VDX mechanism driver (brocade_vdx_ampp).

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_ampp
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:500
[ovs]
bridge_mappings = physnet1:br1

Here,

  • mechanism driver needs to be set to ‘brocade_vdx_ampp’ along with openvswitch.
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, that file must be passed as a config parameter (--config-file) when starting neutron-server.

[ml2_brocade]
username = admin 
password = password 
address  = 10.37.18.139
ostype   = NOS 
physical_networks = physnet1 
osversion=5.0.0
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

Here,
[ml2_brocade] - entries

  • 10.37.18.139 is the VCS Virtual IP (IP for the L2 Fabric).
  • osversion - NOS version on the L2 Fabric.
  • nretries - number of times a NETCONF request to the switch is retried in case of failure
  • ndelay - time delay in seconds between successive NETCONF retries in case of failure

Openstack Compute Configurations (L2 AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the openvswitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]
bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan

Here,

  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the vlan range used

VDX Configurations

Put all the interfaces connected to the compute nodes into port-profile mode. This is a one-time configuration (Te 135/0/10 and Te 136/0/10 in the topology above).

sw0(config)#  interface TenGigabitEthernet 135/0/10
sw0(conf-if-te-135/0/10)# port-profile-port
sw0(config)#  interface TenGigabitEthernet 136/0/10
sw0(conf-if-te-136/0/10)# port-profile-port

Setup of Kuryr

Install the Kuryr project on both compute and controller (each of the host nodes).

git clone https://github.com/openstack/kuryr.git
cd kuryr
sudo pip install -r requirements.txt
sudo ./scripts/run_kuryr.sh

Update ‘/etc/kuryr/kuryr.conf’ to contain the following lines: the Kuryr driver is run in the global scope, and neutron_uri points to the Neutron server. In this case the IP address is that of the controller node (10.37.18.158).

[DEFAULT]
capability_scope = global

[neutron_client]
# Neutron URL for accessing the network service. (string value)
neutron_uri = http://10.37.18.158:9696

Restart both the remote driver (stop the one started in the step above) and the Docker service.

sudo ./scripts/run_kuryr.sh
sudo service docker restart

Docker CLI command

Create Network

Create a Docker network called “black_network” on the Docker swarm with the subnet 92.16.1.0/24.

root@controller:~# docker_swarm network create --driver kuryr --subnet=92.16.1.0/24 --gateway=92.16.1.1   black_network
2e36e5ac17f2d4a3534678e58bc4920dbcd8653919a83ad52cbaa62057297a84

This creates a Neutron network with segmentation ID (VLAN) 43.

root@controller:~# neutron net-show kuryr-net-2e36e5ac
+---------------------------+----------------------------------------------------+
| Field                     | Value                                              |
+---------------------------+----------------------------------------------------+
| admin_state_up            | True                                               |
| availability_zone_hints   |                                                    |
| availability_zones        | nova                                               |
| created_at                | 2016-05-12T11:16:55                                |
| description               |                                                    |
| id                        | 23beebb7-c4ec-41be-a12a-96f897b1dace               |
| ipv4_address_scope        |                                                    |
| ipv6_address_scope        |                                                    |
| mtu                       | 1500                                               |
| name                      | kuryr-net-2e36e5ac                                 |
| port_security_enabled     | True                                               |
| provider:network_type     | vlan                                               |
| provider:physical_network | physnet1                                           |
| provider:segmentation_id  | 43                                                 |
| router:external           | False                                              |
| shared                    | False                                              |
| status                    | ACTIVE                                             |
| subnets                   | 5072db88-54be-4be0-a39b-f52b60a674ef               |
| tags                      | kuryr.net.uuid.uh:bcd8653919a83ad52cbaa62057297a84 |
|                           | kuryr.net.uuid.lh:2e36e5ac17f2d4a3534678e58bc4920d |
| tenant_id                 | 1035ac77d5904b0184af843e58c37665                   |
| updated_at                | 2016-05-12T11:16:56                                |
+---------------------------+----------------------------------------------------+

This also creates a port-profile on the Brocade switch with appropriate parameters.

sw0(config)# do show running-config port-profile openstack-profile-43
port-profile openstack-profile-43
 vlan-profile
  switchport
  switchport mode trunk
  switchport trunk allowed vlan add 43
 !
!
port-profile openstack-profile-43 activate

Create Docker Containers

Create Docker containers based on the busybox image on both nodes in the docker_swarm: ‘black_1’ on the compute node (10.37.18.157) and ‘black_2’ on the controller node (10.37.18.158).

root@controller:~# docker_swarm run -itd --name=black_1 --env="constraint:node==compute" --net=black_network busybox
8079c6f22d8985307541d8fb75b1296708638a9150e0334f2155572dba582176
root@controller:~# docker_swarm run -itd --name=black_2 --env="constraint:node==controller" --net=black_network busybox
f8b4257abcf39f3e2d45886d61663027208b6596555afd56f3e4d8e45d641759

The creation of the Docker containers results in the port-profile (openstack-profile-43) being applied on the interfaces connected to the host servers (Te 135/0/10 and Te 136/0/10 respectively).

sw0(config)# do show port-profile status
Port-Profile           PPID        Activated        Associated MAC        Interface
UpgradedVlanProfile    1           No               None                  None
openstack-profile-43   2           Yes              fa16.3e2b.38b6        Te 135/0/10
                                                    fa16.3ebf.796c        Te 136/0/10
                                                    fa16.3ed6.7f0b        Te 135/0/10

Network connectivity has now been established between the two containers (black_1 and black_2) running on two different hosts in the Docker swarm. Traffic between these two containers transits the Brocade VDX fabric.

The container trace shows the connectivity as seen from the Brocade VCS fabric, with details such as the container name, host network name, VLAN ID, and NIC details.

sw0:FID128:root> container_trace 
+---------+---------------+------+--------------+------------------+-----------------------+----------------------+
| Name    | Host Network  | Vlan | Host IP      | Switch Interface | Container IPv4Address | Container MacAddress |
+---------+---------------+------+--------------+------------------+-----------------------+----------------------+
| black_1 | black_network | 43   | 10.37.18.157 | Te 136/0/10      | 92.16.1.2/24          | fa:16:3e:bf:79:6c    |
| black_2 | black_network | 43   | 10.37.18.158 | Te 135/0/10      | 92.16.1.3/24          | fa:16:3e:d6:7f:0b    |
+---------+---------------+------+--------------+------------------+-----------------------+----------------------+

Ping between Containers

Attach to one of the containers (black_1) and ping the other container (black_2).

root@controller:~# docker_swarm attach black_1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:BF:79:6C
          inet addr:92.16.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:febf:796c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:52 errors:0 dropped:14 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5956 (5.8 KiB)  TX bytes:738 (738.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 92.16.1.3
PING 92.16.1.3 (92.16.1.3): 56 data bytes
64 bytes from 92.16.1.3: seq=0 ttl=64 time=1.825 ms
64 bytes from 92.16.1.3: seq=1 ttl=64 time=0.819 ms
64 bytes from 92.16.1.3: seq=2 ttl=64 time=0.492 ms
64 bytes from 92.16.1.3: seq=3 ttl=64 time=0.458 ms
64 bytes from 92.16.1.3: seq=4 ttl=64 time=0.489 ms
64 bytes from 92.16.1.3: seq=5 ttl=64 time=0.480 ms
64 bytes from 92.16.1.3: seq=6 ttl=64 time=0.438 ms
64 bytes from 92.16.1.3: seq=7 ttl=64 time=0.501 ms

Thursday 5 May 2016

Brocade Docker Plugin

This describes the Brocade Docker plugin, which functions as a remote libnetwork driver.
It automates the provisioning of the Brocade IP Fabric based on the life cycle of Docker containers.


Fig 1. Docker Swarm nodes connected to Brocade IP Fabric.

Here, there are two hosts, controller (10.37.18.158) and compute (10.37.18.157), which are part of the Docker swarm.
They are connected to the leaf switches 10.37.18.135 and 10.37.18.136, respectively.

Key Aspects

The Brocade plugin functions as a global libnetwork remote driver within the Docker swarm. It is based on the new Container Network Model.

Docker networks are isolated using VLANs on the host-servers and the corresponding VLANs are provisioned on the Brocade IP Fabrics.

Brocade IP Fabric provisioning is automated and integrated with the life cycle of containers. Tunnels between the leaf switches are established only when there are at least two containers on different hosts on the same network. This is an important aspect, as micro-services appear and disappear frequently in a container environment; close integration of the Brocade IP Fabric with the container life cycle helps make optimal use of network resources in such environments.

Brocade also provides container tracing functionality on its IP Fabric switches. Container tracing can be used to see networking details such as the VLAN and the interfaces between the hosts in the Docker swarm and the leaf switches in the Brocade IP Fabric.

Brocade Plugin Operations

Initial Setup

The Docker swarm (cluster of Docker hosts) output below displays the two hosts in the swarm, controller (10.37.18.158) and compute (10.37.18.157).

root@controller:~# docker -H :4000 info
Nodes: 2
 compute: 10.37.18.157:2375
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 12.31 GiB
 controller: 10.37.18.158:2375
  └ Status: Healthy
  └ Containers: 4
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 16.44 GiB

Container Tracer output as seen from one of the leaf switches in the Brocade IP Fabric. All fields are empty as there are no containers launched in the Docker swarm.

sw0:FID128:root> container_trace
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+
| Name | Host Network | Vlan | Host IP | Host Nic | Switch Interface | Container IPv4Address | Container MacAddress |
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+

No tunnel is established between leaf switches of Brocade IP Fabric as there are no containers launched in the docker swarm.

Welcome to the Brocade Network Operating System Software
admin connected from 172.22.10.83 using ssh on sw0
sw0# show tunnel brief
sw0#

Container Startup

Create a network named ‘red_network’ using the Brocade libnetwork driver, and create two busybox containers, one on each of the host servers, using the newly created network.

root@controller:~# docker -H :4000 network create --driver brcd-global --subnet=21.16.1.0/24 --gateway=21.16.1.1 red_network
4b722b1f90e64a986df8973aae6edf837193640161611805339676f1e6768f84

root@controller:~# docker -H :4000 run -itd --name=test1 --env="constraint:node==controller" --net=red_network busybox
932a039045acc05e101d1196d9152e4391b0e62a9cf91c6b83b9fc9893738c6b

root@controller:~# docker -H :4000 run -itd --name=test2 --env="constraint:node==compute" --net=red_network busybox
1a32732651bf970ce60b027644c6ff48e8e3490d5b60644f75fb5785bfba6219

Brocade Plugin provisions VLAN on the host server and does the necessary configuration on the switch interfaces connected to the host server.

Container tracer on the Brocade switch displays the newly created containers with details like Network name (red_network), VLAN(2002), Host NIC and Switch Interface, Container IP and Mac Address.

sw0:FID128:root> container_trace
+-------+--------------+------+--------------+----------+------------------+-----------------------+----------------------+
| Name  | Host Network | Vlan | Host IP      | Host Nic | Switch Interface | Container IPv4Address | Container MacAddress |
+-------+--------------+------+--------------+----------+------------------+-----------------------+----------------------+
| test2 | red_network  | 2002 | 10.37.18.157 | eth2     | Te 136/0/10      | 21.16.1.3/24          | 00:16:3e:04:95:e1    |
| test1 | red_network  | 2002 | 10.37.18.158 | eth4     | Te 135/0/10      | 21.16.1.2/24          | 00:16:3e:4f:a4:49    |
+-------+--------------+------+--------------+----------+------------------+-----------------------+----------------------+

Container tracer output would be useful for the network administrator for tracing the flow of traffic between containers as it transits through Brocade switches.

A tunnel is established between the two leaf switches in the Brocade IP Fabric once the two containers (test1 and test2) are launched on the two hosts in the Docker swarm.

The tunnel output on the leaf switches indicates that the tunnel has been established between the leaf switches connected to the two hosts in the Docker swarm.

sw0# show tunnel brief
Tunnel 61441, mode VXLAN, rbridge-ids 135
Admin state up, Oper state up
Source IP 54.54.54.0, Vrf default-vrf
Destination IP 54.54.54.1

VLAN 2002 is received on Te 135/0/10 - interface connected to eth4 on host 10.37.18.158.
This VLAN is auto-mapped to VNI 2002 on the Brocade IP Fabric.

sw0# show vlan brief

VLAN   Name      State  Ports           Classification
(F)-FCoE                                                    (u)-Untagged
(R)-RSPAN                                                   (c)-Converged
(T)-TRANSPARENT                                             (t)-Tagged
===== ========= ====== =============== ====================
2002   VLAN2002  ACTIVE Te 135/0/10(t)
                        Tu 61441(t)     vni 2002

Ping between Containers

Container test1(21.16.1.2) on host (10.37.18.158) is able to communicate with Container test2 (21.16.1.3) on host (10.37.18.157).

root@controller:~# docker -H :4000 attach test1
/ # ping 21.16.1.3
PING 21.16.1.3 (21.16.1.3): 56 data bytes
64 bytes from 21.16.1.3: seq=0 ttl=64 time=0.656 ms
64 bytes from 21.16.1.3: seq=1 ttl=64 time=0.337 ms
64 bytes from 21.16.1.3: seq=2 ttl=64 time=0.358 ms
64 bytes from 21.16.1.3: seq=3 ttl=64 time=0.313 ms
64 bytes from 21.16.1.3: seq=4 ttl=64 time=0.324 ms
^C
--- 21.16.1.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.313/0.397/0.656 ms

The tunnel statistics show an increasing packet count, indicating that the container traffic is transiting the Brocade IP Fabric.

sw0# show tunnel statistics
Tnl ID   RX packets      TX packets      RX bytes        TX bytes
======== =============== =============== =============== ================
61441    3               3               (NA)            414
sw0# show tunnel statistics
Tnl ID   RX packets      TX packets      RX bytes        TX bytes
======== =============== =============== =============== ================
61441    7               7               (NA)            1022

Container Shutdown

Exit from container test1 and explicitly shut down the other container, test2.

132 packets transmitted, 132 packets received, 0% packet loss
round-trip min/avg/max = 0.222/0.286/0.350 ms
/ # exit

root@controller:~# docker -H :4000 stop test2

Container shutdown results in the tear-down of the tunnel between the leaf switches in the Brocade IP Fabric, and the same is reflected by the now-empty container trace output.

sw0# show tunnel brief

sw0:FID128:root> container_trace
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+
| Name | Host Network | Vlan | Host IP | Host Nic | Switch Interface | Container IPv4Address | Container MacAddress |
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+
+------+--------------+------+---------+----------+------------------+-----------------------+----------------------+

The Brocade remote libnetwork driver can also work with the Brocade VDX (Ethernet) fabric, in addition to automating the Brocade IP Fabric.

Thursday 14 April 2016

L2 MTU and Native VLAN on Brocade

Brocade Openstack VDX Plugin (Non AMPP)

This describes the provisioning of MTU and native VLANs on L2 interfaces using the Brocade VDX plugin (Non AMPP).
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to the VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: to create bridge br1 on the compute nodes and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

In this setup, virtual machines are created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Look at the setup of Openstack Plugin for L2 Non AMPP

http://rmadapur.blogspot.in/2016/04/l2-non-ampp-brocade-vdx-plugin.html

Openstack Controller Configurations (L2 Non AMPP Setup)

Refer to Configuration setup for [ml2] described in L2 Non AMPP

http://rmadapur.blogspot.in/2016/04/l2-non-ampp-brocade-vdx-plugin.html

Additional configuration is needed to set up the MTU and native VLANs.

The following additional configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, that file must be passed as a config parameter (--config-file) when starting neutron-server.

[ml2]
segment_mtu = 2000
physical_network_mtus = physnet1:2000

[topology]
#connections=<host-name> : <physical network name>: <PORT-SPEED> <NOS PORT>
connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10
mtu = Te:135/0/10:2000,Te:136/0/10:2000
native_vlans = Te:135/0/10:20,Te:136/0/10:20

[topology] - entries

  • mtu is set to 2000 for both interfaces connected to the servers
  • the native VLAN on each interface is set to 20

Openstack CLI Commands

Create Networks

Create the GREEN network (10.0.0.0/24) using the Neutron CLI. Note the ID of the created network; it is used in the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~/devstack$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-15T05:41:13                  |
| description               |                                      |
| id                        | 21307c5c-b7e9-4bdc-a59c-1527e02080ff |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 2000                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 50                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | d310745c-2726-4b79-adac-39e76e8d9b29 |
| tags                      |                                      |
| tenant_id                 | 23b20c38f7f14c2a8be5073c198c5178     |
| updated_at                | 2016-04-15T05:41:13                  |
+---------------------------+--------------------------------------+

Check the availability zones; we will launch one VM on each of the servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on Server by the name “controller”

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on Server by the name “compute”

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2

VDX

The following L2 networking entries are created on the VDX switches.

sw0# show running-config interface TenGigabitEthernet 135/0/10
interface TenGigabitEthernet 135/0/10
 mtu 2000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 50
 no switchport trunk tag native-vlan
 switchport trunk native-vlan 20
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0# show running-config interface TenGigabitEthernet 136/0/10
interface TenGigabitEthernet 136/0/10
 mtu 2000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 50
 no switchport trunk tag native-vlan
 switchport trunk native-vlan 20
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0#

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.

Wednesday 13 April 2016

L2 AMPP Brocade VDX Plugin

Brocade Openstack VDX Plugin (AMPP)

This describes the setup of the OpenStack plugin for Brocade VDX devices for L2 networking with AMPP.
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx
Setup of Openstack Plugin


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to the VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: to create bridge br1 on the compute nodes and add port eth1 to it:

sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

In this setup, virtual machines are created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Pre-requisites

Brocade Plugins require a specific version of ncclient (Net conf library). It can be obtained from the following github location.

git clone https://github.com/brocade/ncclient
cd ncclient
sudo python setup.py install

Install Plugin

git clone https://github.com/openstack/networking-brocade.git --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: the --branch argument is optional; omit it if the latest files (master branch) from the repository are required.

Upgrade the Database

Upgrade the database so that the Brocade-specific table entries are created in the Neutron database.

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

Openstack Controller Configurations (L2 AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to start the Brocade VDX mechanism driver (brocade_vdx_ampp).

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_ampp
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:500
[ovs]
bridge_mappings = physnet1:br1

Here,

  • mechanism driver needs to be set to ‘brocade_vdx_ampp’ along with openvswitch.
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the VLAN range used

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, that file must be passed as a config parameter (--config-file) when starting neutron-server.

[ml2_brocade]
username = admin 
password = password 
address  = 10.37.18.139
ostype   = NOS 
physical_networks = physnet1 
osversion=5.0.0
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

Here,
[ml2_brocade] - entries

  • 10.37.18.139 is the VCS Virtual IP (IP for the L2 Fabric).
  • osversion - NOS version on the L2 Fabric.
  • nretries - number of times a NETCONF operation to the switch will be retried in case of failure
  • ndelay - time delay in seconds between successive NETCONF commands in case of failure

Openstack Compute Configurations (L2 AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the openvswitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]
bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan

Here,
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the vlan range used.
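After editing, restart the openvswitch agent on each host so the new settings are picked up; the exact service name differs between distributions and releases (the command below is a sketch for a systemd-based install, adjust as needed):

sudo systemctl restart neutron-openvswitch-agent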

VDX Configurations

Put all the interfaces connected to the compute nodes into port-profile mode; this is a one-time configuration (Te 135/0/10 and Te 136/0/10 in the topology above).

sw0(config)#  interface TenGigabitEthernet 135/0/10
sw0(conf-if-te-135/0/10)# port-profile-port
sw0(config)#  interface TenGigabitEthernet 136/0/10
sw0(conf-if-te-136/0/10)# port-profile-port

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the id of the created network; it will be used during the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:38:45                  |
| description               |                                      |
| id                        | d5c94db7-9040-481c-b33c-252618fb71f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 12                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 1217d77d-2638-4c5c-9777-f5cd4f4e5045 |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:38:45                  |
+---------------------------+--------------------------------------+

Check the availability zones; we will launch one VM on each of the two servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2
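Once both boot commands return, the instances should reach ACTIVE with an address from 10.0.0.0/24; a quick check:

user@controller:~$ nova list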

VDX

The following L2 networking entries would be created on the VDX switches.


sw0(conf-if-te-136/0/10)# do show port-profile status
Port-Profile              PPID   Activated        Associated MAC  Interface
UpgradedVlanProfile       1      No               None            None                                                                                                
openstack-profile-12      2      Yes              fa16.3ecb.2fab   Te 135/0/10
                                                  fa16.3ee4.b736   Te 136/0/10                                                               

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.
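One way to verify this from the controller itself, assuming the DHCP agent created a namespace for GREEN_NETWORK (the id below is the one shown in the net-show output above) and that a VM picked up an address in 10.0.0.0/24 (10.0.0.3 is only a placeholder), is to ping from inside that namespace. If ICMP is blocked by the default security group, allow it first or ping from one VM console to the other instead.

user@controller:~$ sudo ip netns list
user@controller:~$ sudo ip netns exec qdhcp-d5c94db7-9040-481c-b33c-252618fb71f8 ping -c 3 10.0.0.3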

Tuesday 12 April 2016

L2-NON AMPP Brocade VDX Plugin

Brocade Openstack VDX Plugin (Non AMPP)

This post describes the setup of the Openstack plugin for Brocade VDX devices for L2 networking without AMPP (Non AMPP):
https://github.com/openstack/networking-brocade/tree/master/networking_brocade/vdx


Fig 1. Setup of VDX Fabric with Compute Nodes

The figure (Fig 1) shows a typical physical deployment of servers (compute nodes) connected to a VDX L2 fabric.

  • eth1 on the controller node is connected to a VDX interface (e.g. Te 135/0/10)
  • eth1 on the compute node is connected to a VDX interface (e.g. Te 136/0/10)
  • The NIC (eth1) on each server (controller, compute) is part of the OVS bridge br1.

Note: To create bridge br1 on the host servers (controller, compute) and add port eth1 to it, run:
sudo ovs-vsctl add-br br1
sudo ovs-vsctl add-port br1 eth1

In this setup, Virtual Machines will be created on each of the host servers (controller, compute) on a network named GREEN (10.0.0.0/24).

Setup of Openstack Plugin

Pre-requisites

The Brocade plugins require a specific version of ncclient (a NETCONF client library). It can be obtained from the following GitHub location:

git clone https://github.com/brocade/ncclient
cd ncclient
sudo python setup.py install

Install Plugin

git clone https://github.com/openstack/networking-brocade.git --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: The --branch option is optional; omit it if the latest files (master branch) from the repository are required.

Upgrade the Database

Upgrade the database so that the Brocade-specific tables are created in the neutron database:

 neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

Openstack Controller Configurations (L2 Non AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to start the Brocade VDX mechanism driver (brocade_vdx_vlan).

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_vlan
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:500
[ovs]
bridge_mappings = physnet1:br1

Here,

  • The mechanism driver needs to be set to ‘brocade_vdx_vlan’ along with openvswitch.
  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the vlan range used.

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.
If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, this file should be passed as a config parameter during neutron-server startup, as sketched below.
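For example (assuming neutron-server is launched manually rather than via the distribution's service scripts):

neutron-server --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  --config-file /etc/neutron/plugins/ml2/ml2_conf_brocade.ini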

[ml2_brocade]
username = admin 
password = password 
address  = 10.37.18.139
ostype   = NOS 
physical_networks = physnet1 
osversion=5.0.0
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

[topology]
#connections=<host-name>:<physical network name>:<PORT-SPEED>:<NOS PORT>
connections = controller:physnet1:Te:135/0/10, compute:physnet1:Te:136/0/10

Here,
[ml2_brocade] - entries

  • 10.37.18.139 is the VCS Virtual IP (IP for the L2 Fabric).
  • osversion - NOS version on the L2 Fabric.
  • nretries - number of times a NETCONF operation to the switch will be retried in case of failure
  • ndelay - time delay in seconds between successive NETCONF commands in case of failure

[topology] - entries

  • Physical connectivity between the host NIC, the physical network (physnet) and the switch interfaces is specified here.

Openstack Compute Configurations (L2 Non AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the openvswitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]
bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan

Here,

  • ‘br1’ is the openvswitch bridge.
  • ‘2:500’ is the vlan range used.

Openstack CLI Commands

Create Networks

Create a GREEN network (10.0.0.0/24) using the neutron CLI. Note down the id of the created network; it will be used during the subsequent nova boot commands.

user@controller:~$ neutron net-create GREEN_NETWORK
user@controller:~$ neutron subnet-create GREEN_NETWORK 10.0.0.0/24 --name GREEN_SUBNET --gateway=10.0.0.1
user@controller:~$ neutron net-show GREEN_NETWORK
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-04-12T09:38:45                  |
| description               |                                      |
| id                        | d5c94db7-9040-481c-b33c-252618fb71f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | GREEN_NETWORK                        |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 12                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 1217d77d-2638-4c5c-9777-f5cd4f4e5045 |
| tags                      |                                      |
| tenant_id                 | ed2196b380214e6ebcecc7d70e01eba4     |
| updated_at                | 2016-04-12T09:38:45                  |
+---------------------------+--------------------------------------+

Check the availability zones; we will launch one VM on each of the two servers.

user@controller:~$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller         |                                        |
| | |- nova-conductor   | enabled :-) 2016-04-11T05:10:06.000000 |
| | |- nova-scheduler   | enabled :-) 2016-04-11T05:10:07.000000 |
| | |- nova-consoleauth | enabled :-) 2016-04-11T05:10:07.000000 |
| nova                  | available                              |
| |- compute            |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:10.000000 |
| |- controller         |                                        |
| | |- nova-compute     | enabled :-) 2016-04-11T05:10:05.000000 |
+-----------------------+----------------------------------------+

Launching Virtual Machines

Boot VM1 on the server named “controller”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:controller VM1

Boot VM2 on the server named “compute”:

user@controller:~$ nova boot --nic net-id=$(neutron net-list | awk '/GREEN_NETWORK/ {print $2}') \
 --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --availability-zone nova:compute VM2
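To confirm that each instance actually landed on the intended host (admin credentials are required to see the host attribute), the hosting hypervisor can be checked with:

user@controller:~$ nova show VM1 | grep OS-EXT-SRV-ATTR:host
user@controller:~$ nova show VM2 | grep OS-EXT-SRV-ATTR:host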

VDX

The following L2 networking entries would be created on the VDX switches.

sw0# show running-config interface TenGigabitEthernet 135/0/10
interface TenGigabitEthernet 135/0/10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 12
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!
sw0# show running-config interface TenGigabitEthernet 136/0/10
interface TenGigabitEthernet 136/0/10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 12
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fabric isl enable
 fabric trunk enable
 no shutdown
!

Ping between Virtual Machines across Hosts

We should now be able to ping between Virtual Machines on the two host servers.