Sunday, February 12, 2017

Setting up Openstack on Ubuntu

I have been using Openstack for the past couple of years, from Icehouse and Juno through Liberty and Mitaka. I have enough rights in the Horizon dashboard, but I wanted to try the installation of Openstack myself on my own VM.

Here, as a reference, are some notes on what went into the effort.


1. Download Ubuntu 14.04 from:
    http://old-releases.ubuntu.com/releases/14.04.0/ubuntu-14.04-desktop-amd64.iso
    My VM's host machine is Windows 10 on an Intel CPU; it is still OK to use Ubuntu's AMD64 ISO, since amd64 refers to the 64-bit x86 architecture rather than the CPU vendor.
2. Setting up Ubuntu is pretty straightforward except that, unlike Centos, Ubuntu uses the Debian flavor of package management tools: apt-get (while Centos uses rpm and yum). Make sure the network setting in Virtualbox uses a Bridged Adapter and has Promiscuous Mode enabled; the equivalent command-line setup is sketched below.
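    If you prefer to script this instead of clicking through the Virtualbox UI, roughly the same settings can be applied with VBoxManage. This is only a sketch: the VM name "ubuntu-devstack" is an assumption, and the bridge adapter argument must be the name of your Windows host adapter.
    >VBoxManage modifyvm "ubuntu-devstack" --nic1 bridged --bridgeadapter1 "<host adapter name>"
    >VBoxManage modifyvm "ubuntu-devstack" --nicpromisc1 allow-all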

3. As usual, always update the OS image that we just loaded.
    >sudo apt-get update
    >sudo apt-get upgrade
    (if there is any connectivity issue with the network, do the following:
    >sudo ifconfig eth0 down
    >sudo ifconfig eth0 up  )
    (if the above also fails, add the following to /etc/network/interfaces
  #The loopback network interface
   auto lo
   iface lo inet loopback
  #The primary network interface
   auto eth0
   iface eth0 inet static
   address 192.168.1.2   #<--use ifconfig to determine this
   netmask 255.255.255.0
   gateway 192.168.1.1
   dns-nameservers 192.168.1.1 8.8.8.8  #<-8.8.8.8 is google's DNS server)
4. Create a new user, stack, which is going to own the devstack installation, and set a root password so we can switch to root in the next step:
   >sudo adduser stack
   >sudo passwd root
5. Change to the root user:  >su -
6. Grant the stack user sudo privileges:
  >echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
7. Change to the stack user:
   >su - stack
8. Install git:
   >sudo apt-get install git
9. Get devstack from its git repository:
  >git clone https://git.openstack.org/openstack-dev/devstack

10. Change to the devstack directory: >cd devstack
11. Edit the local.conf file to specify the passwords used in the devstack installation, with the following contents: >vi local.conf
   [[local|localrc]]
   ADMIN_PASSWORD=openstack
   DATABASE_PASSWORD=openstack
   RABBIT_PASSWORD=openstack
   SERVICE_PASSWORD=openstack
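   (optionally, the same file can pin the host IP so devstack binds to the right interface; HOST_IP is a standard devstack variable, and the address here is just the example address from step 3:
   HOST_IP=192.168.1.2 )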
12. Start the installation: ./stack.sh
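    (if stack.sh fails partway through, the usual recovery with the standard scripts shipped in the same devstack directory is to clean up and re-run:
    >./unstack.sh
    >./clean.sh
    >./stack.sh )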
13. After a fairly long installation, below are the final results of my devstack installation:
    http://192.168.1.2/dashboard
    http://192.168.1.2/identity
    default users: admin and demo
    password: openstack


14. Open the dashboard URL, log in, and start exploring...
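    To poke around from the command line as well, devstack drops an openrc file in the devstack directory that sets up CLI credentials; a quick sanity check might look like the following (the exact service list varies by release):
    >source openrc admin admin
    >openstack service list
    >openstack endpoint list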

Setting up Centos7 with Virtualbox

As more and more companies embrace the "Cloud", where I work has been no exception. For the past 2+ years, I have been on a PaaS team setting up the platform for various application development teams. In a nutshell, the work is about automation: provisioning VMs and configuring them with platform-level components and services. Early on, I often felt it would be very helpful for junior team members to study and understand the cornerstone of the cloud paradigm - the virtual machine. The JVM is the most common virtual machine we work with almost every single day, but when asked whether they had ever provisioned a Linux machine, the answer was usually no. In this post, I am trying to capture the steps of setting up a Linux VM on Windows.

Before we start, a few terms and concepts to clarify:

  • Guest Machine: the virtual machine you get. 
  • Hypervisor: a piece of software responsible for creating and managing virtual machines. Specifically, from the infrastructure's angle, the resources to be managed are compute (CPU and memory), storage and network. In our example, Virtualbox is the Hypervisor.
  • Host Machine: the machine where virtual machines are created and run.

In this example, the goal is to provision a Linux machine running in Windows. We will select Oracle's Virtualbox as the hypervisor to create and manage our Linux VMs.

1a. Download "Centos DVD ISO" image from https://www.centos.org/download/ .
1b. Download Virtualbox for Windows Hosts from https://www.virtualbox.org/ . For this exercise I am using version 5.1.10.
2. Install Virtualbox on the Windows machine. I am using Windows 10.
3. Launch Virtualbox. You are presented with a console from which you can manage existing VMs or create a new VM. Click the New button to create a new VM:

4. Give the VM a name and select Red Hat Linux (we are installing Centos, but it is the community version of Red Hat Linux, so selecting Red Hat is perfectly OK).

5. Memory allocation: depending on your host machine, select 6 GB as the minimum RAM setting.

6. Storage allocation: for a basic setup with the Centos OS image, allocate 30 GB of the host machine's hard disk.

7. Upon hitting Create button, a new VM entry is created in the console list.

Let's continue setting up the new VM. Virtualbox provides you with many customization options. We will perform some basic ones below.

8. System: since there is no floppy drive here, disable booting from Floppy.

9. Display: give it 32 MB of video memory.

10. Storage: select the Centos7 image file we just downloaded for the Optical Drive.

11. Network: for demo and simple use cases, attach the network adapter to NAT and set the adapter type to be: Paravirtualized Network (virtio-net)

There are many options for configuring a VM's network. Refer to Virtualbox's official guidelines (https://www.virtualbox.org/manual/ch06.html). I will explore them in another post.
12. Done. Click on Start to set up the newly created VM.

Install Centos and Launch the VM
13. Complete the general setup (you may want to select the GNOME Desktop as well).

14. Create a root password and a new user that you can log on as after the VM is ready.

15. Reboot and your VM is ready.

Update operating system with latest Centos updates
16. Log in as the user you just created and then change to root: su -
17. yum -y update
      (if you see an error message like "Cannot find a valid baseurl for repo: base/7/x86_64",
try to ping a web site; if unsuccessful, issuing the command dhclient should solve it. If not, as root, edit this file (note that on Centos 7 the interface is often named enp0s3 rather than eth0):
>vi /etc/sysconfig/network-scripts/ifcfg-eth0
make sure NM_CONTROLLED=no, save it, and issue
>ifdown eth0
>ifup eth0
That should take care of the issue; a sample ifcfg file is sketched below.)
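      (for reference, a minimal ifcfg file for this DHCP-on-Virtualbox scenario might look like the sketch below; the keys are standard, but adjust DEVICE and the file name to your actual interface:
TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no )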
18. Install the kernel headers needed to build the guest additions: yum install kernel-devel
19. Install the gcc compiler: yum install gcc* (the Guest Additions installer itself is run afterwards, as sketched below)
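With those prerequisites in place, the Guest Additions themselves are installed from the virtual CD; this is the standard Virtualbox procedure rather than something from my original notes. In the VM window choose Devices > Insert Guest Additions CD image..., then roughly:
>mount /dev/cdrom /mnt
>sh /mnt/VBoxLinuxAdditions.run
and reboot afterwards.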
20. Now the new Centos 7 VM is ready to use.

Saturday, May 31, 2014

Solr and Tomcat on Windows

It has been a long while since I touched Lucene. Lately, I have had a chance to look into the latest to come out of the Apache Lucene project - Solr. It is pretty amazing. Instead of using the Lucene libraries directly, we get a nice web interface to leverage, and with the RESTful API it takes virtually no time to get a document indexed and to integrate search capability into an application.

The tutorial from the Solr site is very straightforward; it took me less than 10 minutes to go through and got me started. To be used in a production environment, Solr needs to be installed in a Tomcat server. I will write a few posts to capture what I learned about using it. First things first, here are the complete steps to set up Solr on Tomcat:

Download the needed packages: I am using Solr 4.8.1, Tomcat 7.0.53 and Java JDK 1.7.0_25 on Windows 7.

  1. Unzip the solr package into C:\Software\solr-4.8.1 
  2. Deploy Solr application to Tomcat
    Copy C:\Software\solr-4.8.1\dist\solr-4.8.1.war
    to C:\Software\apache-tomcat-7.0.53\webapps. Rename solr-4.8.1.war to solr.war
  3. Add additional libraries to satisfy logging needs
    Copy all jar files from C:\Software\solr-4.8.1\dist\solrj-lib
    to C:\Software\apache-tomcat-7.0.53\lib.
    Failure to do so will yield a logging-related 'Class Not Found' error.
  4. Setup solr home
    Copy C:\Software\solr-4.8.1\example\solr directory
    to a place you want to use as solr home, e.g. C:\Software\UserData\solrCollections
    note: 1. this directory contains two folders (bin and collection1) and a few other files
             2. this is also the directory that we set up for multi-core (see below)
  5. Make Tomcat aware of the solr home
    Modify the catalina.bat file found in C:\Software\apache-tomcat-7.0.53\bin to add the following, which refers to the solr home:
    set CATALINA_OPTS=-Dsolr.solr.home=C:/Software/UserData/solrCollections
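    note: instead of editing catalina.bat directly, the same line can go into a new file C:\Software\apache-tomcat-7.0.53\bin\setenv.bat, which Tomcat picks up automatically if it exists; I mention this as a common Tomcat convention, not something required for this setup:
    set CATALINA_OPTS=-Dsolr.solr.home=C:/Software/UserData/solrCollections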
  6. Set up logging
    a). Copy log4j.properties from C:\Software\solr-4.8.1\example\resources
    to a directory on the classpath. I am using: C:\Software\apache-tomcat-7.0.53\webapps\solr\WEB-INF\classes
    b). Set up the log folder by setting solr.log=../logs/ in the log4j.properties file. The default value (solr.log=logs/) would create a logs directory inside the tomcat\bin folder. I don't want that; in this example, solr.log can be found at: C:\Software\apache-tomcat-7.0.53\logs
    note: add set CATALINA_OPTS=%CATALINA_OPTS% -Dlog4j.debug to catalina.bat to verify that the log4j.properties file is found on the classpath.
  7. Start Tomcat by running catalina.bat in C:\Software\apache-tomcat-7.0.53\bin
  8. Launch solr at http://localhost:8080/solr and the solr admin console should come up.
  9. Since I already did the tutorial, the indexed data already exists in /collection1/data, so I can continue to use it to verify my setup, for example with the query below.
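    To double-check the deployment outside the admin console, the standard select handler can be queried directly; the URL below assumes the tutorial's collection1 core and simply returns everything indexed:
    http://localhost:8080/solr/collection1/select?q=*:*&wt=json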

    This completes the simple setup of solr with Tomcat. The following additional steps set up multi-core. From the Solr wiki: "Multiple cores let you have a single Solr instance with separate configurations and indexes, with their own config and schema for very different applications, but still have the convenience of unified administration. Individual indexes are still fairly isolated, but you can manage them as a single application, create new indexes on the fly by spinning up new SolrCores, and even make one SolrCore replace another SolrCore without ever restarting your Servlet Container." Depending on how we want to use multi-core, there are many options; and while the steps below can also be done from the solr command line, here I am using the admin console just to capture the basics.

  10. Create a new core - core2:
    Create C:\Software\UserData\solrCollections\core2 by replicating collection1.
    Empty the core2\data directory.
    Delete core.properties.
    note: Failing to do so will yield an error about being unable to find the solrconfig.xml file.
  11. In Solr admin console, create a new core with the name core2.
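     The same thing can be done without the console through Solr's CoreAdmin API, a standard Solr 4.x endpoint, with instanceDir relative to the solr home set up earlier:
     http://localhost:8080/solr/admin/cores?action=CREATE&name=core2&instanceDir=core2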
     
  12. Refer to solrconfig.xml to make any changes needed to reflect your specific needs.

DataImportHandler

I had a need to retrieve data from a database and then index it. For that, DataImportHandler is the way to go, and here I note the process down for future reference.

First and foremost, this involves three configuration files.

  • solrconfig.xml
  • data-config.xml
  • schema.xml

Step 1: Register data-config.xml in solrconfig.xml.

<requestHandler name="/dataimport"  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">data-config.xml</str>
    </lst>
</requestHandler>
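For the handler class above to resolve, the DataImportHandler jars also have to be on Solr's classpath. One common way, sketched here with a path that depends on where your dist folder actually sits, is a lib directive in the same solrconfig.xml:

<lib dir="C:/Software/solr-4.8.1/dist/" regex="solr-dataimporthandler-.*\.jar" />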

Step 2: Add queries in data-config.xml

<dataConfig>
    <dataSource name="myTestDB" type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://127.0.0.1/mydb" user="root"/>
    <document>
        <entity name="resource"
                transformer="com.mysolr.plaground.MyBlobTransfomer"
                query = "select id,name,base_path, target_url,content, author from solr_test">
            <field name="id" column="id" />
            <field name="basepath" column="base_path" />
            <field name="proxyname" column="name" />
            <field name="blobcontent" column="content" blob="true" srcColumn="content" />
                <field name="certified" column="certified" blob="true" element="lifecycle" srcColumn="content" />
                <field name="contacts" column="entry" blob="true" element="contacts" srcColumn="content" />
                <field name="platform" column="platform" blob="true" element="techstack" srcColumn="content" />               
                <field name="contributinggroup" column="contributinggroup" element="general" blob="true" srcColumn="content"/>               
            <field name="creator" column="author" />
        </entity>
    </document>
</dataConfig>

Step 3: Add indexing fields to schema.xml. 

Take all the fields defined above (the value of the "name" attribute). Make sure the field type (e.g. text_general) is already defined in schema.xml. If not, replace text_general with a pre-defined type, e.g. text or some other name.
<field name="basepath" type="text_general" index="true" stored="true" />
<field name="proxyname" type="text_general" index="true" stored="true" multiValued="true" />
<field name="creator" type="text_general" index="true" stored="true" multiValued="true" />
<field name="certified" type="text_general" index="true" stored="true" multiValued="true" />
<field name="contacts" type="text_general" index="true" stored="true" multiValued="true" />
<field name="platform" type="text_general" index="true" stored="true" multiValued="true" />
<field name="contributinggroup" type="text_general" index="true" stored="true" multiValued="true" />

Step 4: Run http://localhost:8080/solr/dataimport?command=full-import to build the index. Progress can be checked as shown below.
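While the import runs, its status can be checked, and incremental updates can be pulled later, with the other standard DataImportHandler commands (delta-import additionally requires a deltaQuery in data-config.xml):

http://localhost:8080/solr/dataimport?command=status
http://localhost:8080/solr/dataimport?command=delta-import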

In Action

The first DataImport I needed to do was to import and index a blob field from an Oracle database using a BlobTransformer. With the reference from Lucidworks, I was able to cast the returned object to the Oracle BLOB class (oracle.sql.BLOB).
import java.util.List;
import java.util.Map;

import oracle.sql.BLOB;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;
...
public class BlobTransformer extends Transformer {
    private static Log LOGGER = LogFactory.getLog(BlobTransformer.class);

    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        List<Map<String, String>> fields = context.getAllEntityFields();
        JSONObject xmlJSONObject;

        for (Map<String, String> field : fields) {
            // check if this field has blob=true specified in data-config.xml
            String blob = field.get("blob");

            if ("true".equals(blob)) {
                // srcColumn in data-config.xml names the column that holds the blob
                Object value = row.get(field.get("srcColumn"));
                BLOB blobValue = null;
                String propertyXml = "<empty />";

                if (value instanceof BLOB) {   // instanceof is false for null
                    blobValue = (BLOB) value;
                    try {
                        byte[] bdata = blobValue.getBytes(1, (int) blobValue.length());
                        propertyXml = new String(bdata);
                        ... ...
                    } catch (Exception e) {
                        ...
                    }
                }
            }
        }
        return row;   // DIH expects the (possibly modified) row back
    }
}
Things worked out pretty well until recently, when I encountered a similar use case, but the backend database was MySQL. According to the MySQL documentation, the Blob datatype can be cast to java.sql.Blob. That may be true when the query is made from within Java code, but it did not fit my use case. I have a MySQL table solr_test which has a blob field, content, in which some XML is stored. In MySQL Workbench, this field shows BLOB as its content; at the command line, I could view its content in its original native XML format.

After the necessary changes in my data-config file and schema.xml, as well as a new version of BlobTransformer, I hoped the transformer would work as in the previous version. Not the case! The first problem was that the value is not an instance of BLOB, so none of the logic in that if block got executed. After looking it up, the value has type [B (a byte[]). To move forward, I changed the condition to if (value != null && value.getClass().getName().equals("[B")) {...

I encountered the second issue after the above change: [B cannot be cast to java.sql.Blob. Although the value object is of byte array type, given its real object type I couldn't simply make a String out of it; I had to turn the byte array object into a real byte[]. The following serialization process was used (there may be a simpler way using the commons-lang package to serialize it, but I wanted to do it the hard way!).
try {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(out);
    oos.writeObject(value);
    ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(out.toByteArray())); // line needed
    byte[] bdata = (byte[]) ois.readObject();  // line needed
    propertyXml = new String(bdata);
    ... ...
} catch (IOException | ClassNotFoundException e) {
    ...
}
N.B.  If I replace the two "line needed" lines with byte[] bdata = out.toByteArray(), as quite a few posts on the internet suggested, I end up with the original xml content with some unreadable characters or symbols added to the beginning - those are the Java serialization stream headers written by writeObject.
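For what it's worth, since the value's runtime type already is byte[], a plain cast should give the same result without the serialization round trip; this is offered only as an untested sketch, not what I actually ran:

byte[] bdata = (byte[]) value;
propertyXml = new String(bdata);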

web.xml and Servlet Version

web.xml is the web application deployment descriptor. It is read by the servlet container (such as Tomcat) when a web application is deployed. The bare bones of this file look like the following (taken from my Spring MVC example):

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app id="webApp_ID" version="2.5"
      xmlns:xsi="http://wwww.w3.org/2001/XMLSchema-instance" 
      xmlns="http://java.sun.com/xml/ns/javaee"
      xsi:schemaLocation="
            http://java.sun.com/xml/ns/javaee
            http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
       
        <display-name>Book Club</display-name>
       
        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>
                /WEB-INF/xysample-servlet.xml
            </param-value>
        </context-param>

        <listener>
         <listener-class>
          org.springframework.web.context.ContextLoaderListener
         </listener-class>
        </listener>
       
        <servlet>
          <servlet-name>xysample</servlet-name>
            <servlet-class>
              org.springframework.web.servlet.DispatcherServlet
            </servlet-class>
          <load-on-startup>1</load-on-startup>
        </servlet>

        <servlet-mapping>
            <servlet-name>xysample</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>
    </web-app>


In an Eclipse Maven project, since we create web.xml manually, the item worth noting is the version number. It refers to the Servlet API version. This number should correspond to the Tomcat version and the Java environment that the app runs in. The following chart, taken from the Tomcat site, shows the compatibility:


Servlet Spec | JSP Spec | EL Spec | WebSocket Spec | Apache Tomcat version | Actual release revision | Supported Java Versions
3.1          | 2.3      | 3.0     | 1.0            | 8.0.x                 | 8.0.3 (beta)            | 7 and later
3.0          | 2.2      | 2.2     | 1.0            | 7.0.x                 | 7.0.52                  | 6 and later (WebSocket 1.0 requires 7 or later)
2.5          | 2.1      | 2.1     | N/A            | 6.0.x                 | 6.0.39                  | 5 and later
2.4          | 2.0      | N/A     | N/A            | 5.5.x (archived)      | 5.5.36 (archived)       | 1.4 and later
2.3          | 1.2      | N/A     | N/A            | 4.1.x (archived)      | 4.1.40 (archived)       | 1.3 and later
2.2          | 1.1      | N/A     | N/A            | 3.3.x (archived)      | 3.3.2 (archived)        | 1.1 and later

Also, by looking at the servlet-api.jar file under tomcat\lib, we can verify that version 2.5 is indeed what we should use:
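One quick way to check is to read the jar's manifest, which records the specification version; a sketch using the JDK's jar tool from a Windows command prompt in the tomcat\lib directory (look for the Specification-Version entry):

    jar xf servlet-api.jar META-INF/MANIFEST.MF
    type META-INF\MANIFEST.MF
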
N.B. Failing to specify this number correctly means you will not get the features provided by the underlying servlet api.

N.B. If I had elected to create a "Dynamic Web Project" from Eclipse, I would have selected the version from the 'Dynamic web module version' dropdown list.