Wednesday, 22 December 2010

OpenKinect and Qt Reference Design

UPDATE (you can now get a new version of the code here and I'm in the process of updating this to work with the new drivers)

After seeing all the cool projects here I decided to buy a Kinect and have a play. Most of the code is really simple, and the library itself includes a couple of sample apps. As Qt is my main development environment, I decided to wrap the whole library in a Qt-ready class so I could use the signals and slots mechanism as well as Qt's threading capabilities.

Getting Started

The first thing to do is download and install the libfreenect source using git from here; this installs to /usr/local so it is ready to be linked in. To test, try out the sample programs. If you are using a Mac (as I am) you will need to patch the libusb source, but full instructions are on the site.

Integrating into Qt
When using Qt objects it is possible to use the signals and slots mechanism to pass messages between objects. This is useful as we can connect GUI components to a class and use the event mechanism to, for example, change the angle of the Kinect.

To do this we need to inherit from the base QObject class and extend our class with the required signals and slots. Luckily, slots are also normal methods, so we can use our class in two ways.

As we are wrapping an existing C library there are a number of things to consider. Firstly, the libfreenect library uses callbacks to grab the buffer data; we really need to make this work in a separate thread so that it can run at its own pace. We also need to be able to grab the frame data when it is not being accessed by the Kinect library, so we add a mutex to lock our data whilst the thread is using it.

As Qt has its own built-in thread, mutex and mutex-locker classes, I decided to use these to make life easier.

Finally, we only need one instance of the class, so that we can register the callbacks once and then grab the data. The easiest way to do this is the singleton pattern: each class that needs access to the QKinect object can grab the single instance.
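The same idea can be sketched in Python (illustrative only, not the C++ class from this post): a lock guards the lazy construction so two threads cannot both create an instance, and the instance pointer is assigned before the heavy setup runs.

```python
import threading

class Kinect:
    """Minimal thread-safe singleton sketch (illustrative, not the real QKinect)."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # double-checked locking: cheap test first, take the lock only
        # on first construction
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    obj = cls.__new__(cls)   # pointer is valid before init runs
                    cls._instance = obj
                    cls._instance.init()     # heavy setup with a valid instance
        return cls._instance

    def init(self):
        # placeholder for the real device setup
        self.ready = True
```

Because the instance is assigned before init() runs, any callback that calls instance() during setup receives the same object rather than triggering a second construction.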

The basic class outline for the QKinect object is as follows
Object Construction
As mentioned above the class is based on the singleton pattern; however, I ran into a problem with my initial design, and it's worth sharing that problem here.

Usually in the singleton pattern we have an instance method which calls the constructor of the class and initialises its elements. When I did this I was getting issues, and with some debug statements I found that the ctor was being called twice but I was getting a null instance each time. Finally I realised this was because the constructor was registering two threaded callbacks which called the instance method before the instance pointer had been assigned. To overcome this problem the following instance method was created.

QKinect* QKinect::instance()
{
 // this is the main singleton code; first check to see if we exist
 if (s_instance==0 )
 {
  // we don't, so create an instance (this assigns the pointer first, so
  // other methods called in the init function have a valid pointer to use)
  s_instance = new QKinect;
  // now set up the actual class (with a valid pointer)
  /// \note this could be made nicer to make it fully thread safe
  s_instance->init();
 }
 // otherwise return the existing pointer
 return s_instance;
}
The constructor does nothing except assign an instance of the class to the s_instance pointer. After this we call the init method, which initialises the device and starts the threading.

void QKinect::init()
{
 // first see if we can init the kinect
 if (freenect_init(&m_ctx, NULL) < 0)
 {
  qDebug()<<"freenect_init() failed\n";
  exit(EXIT_FAILURE);
 }
 /// set logging level; make this programmable at some stage
 freenect_set_log_level(m_ctx, FREENECT_LOG_DEBUG);
 /// see how many devices we have
 int nr_devices = freenect_num_devices (m_ctx);
 /// now allocate the buffers so we can fill them
 m_userDeviceNumber = 0;
 m_bufferDepth.resize(FREENECT_VIDEO_RGB_SIZE);
 m_bufferVideo.resize(FREENECT_VIDEO_RGB_SIZE);
 m_bufferDepthRaw.resize(FREENECT_FRAME_PIX);
 m_bufferDepthRaw16.resize(FREENECT_FRAME_PIX);
 m_gamma.resize(2048);
 /// open the device at present hard coded to device 0 as I only
 /// have 1 kinect
 if (freenect_open_device(m_ctx, &m_dev, m_userDeviceNumber) < 0)
 {
  qDebug()<<"Could not open device\n";
  exit(EXIT_FAILURE);
 }


 /// build the gamma table used for the depth to rgb conversion
 /// taken from the demo programs
 for (int i=0; i<2048; ++i)
 {
  float v = i/2048.0;
  v = std::pow(v, 3)* 6;
  m_gamma[i] = v*6*256;
 }
 /// init our flags
 m_newRgbFrame=false;
 m_newDepthFrame=false;
 m_deviceActive=true;
 // set our video formats to RGB by default
 /// \todo make this more flexible at some stage
 freenect_set_video_format(m_dev, FREENECT_VIDEO_RGB);
 freenect_set_depth_format(m_dev, FREENECT_DEPTH_11BIT);
 /// hook in the callbacks
 freenect_set_depth_callback(m_dev, depthCallback);
 freenect_set_video_callback(m_dev, videoCallback);
 // start the video and depth sub systems
 startVideo();
 startDepth();
 // set the thread to be active and start
 m_process = new QKinectProcessEvents(m_ctx);
 m_process->setActive();
 m_process->start();
}

Most of this code follows some of the examples from the freenect library, but uses the class attributes to store the data. We also create our main processing thread to loop and process the Kinect events.
QThread
QThread is a class we inherit from to allow cross-platform threading. For this class we must implement a run() method, which is called when QThread's start method is invoked. The class is as follows
class QKinectProcessEvents : public QThread
{
public :
 /// @brief ctor where we pass in the context of the kinect
 /// @param [in] _ctx the context of the current kinect device
 inline QKinectProcessEvents(
               freenect_context *_ctx
               )
               {m_ctx=_ctx;}
 /// @brief sets the thread active; the run loop is a while(m_active),
 /// so this flag keeps the thread loop going
 inline void setActive(){m_active=true;}
 /// @brief sets the thread inactive, ending the run loop; QThread::start
 /// must be called again if the thread has been de-activated
 inline void setInActive(){m_active=false;}
protected :
 /// @brief the actual thread main loop, this is not callable and the
 /// QThread::start method of QThread must be called to activate the loop
 void run();

private :
 /// @brief a flag to indicate if the loop is to be active
 /// set true in the ctor
 bool m_active;
 /// @brief the context of the kinect device, this must
 /// be set before the thread is run with QThread::start
 freenect_context *m_ctx;
};
The run method itself is quite simple: it loops whilst the m_active flag is true (set in the init method above)
void QKinectProcessEvents::run()
{
 // loop while we are active and process the kinect event queue
 while(m_active)
 {
  //qDebug()<<"process thread\n";
  if(freenect_process_events(m_ctx) < 0)
  {
   throw std::runtime_error("Cannot process freenect events");
  }
 }
}

How libfreenect works
The freenect library uses callbacks to deliver data from the device; each callback is passed the active device as well as a void* pointer to the data. As it is a C library, the callbacks must be static functions whose signatures conform to the callback types of the library. The following code shows how these are implemented.
static inline void depthCallback(
                                 freenect_device *_dev, 
                                 void *_depth,
                                 uint32_t _timestamp=0
                                )
 {
  /// get an instance of our device
  QKinect *kinect=QKinect::instance();
  /// then call the grab method to fill the depth buffer and return it
  kinect->grabDepth(_depth,_timestamp);
 }
 
 static inline void videoCallback(
                                  freenect_device *_dev,
                                  void *_video,
                                  uint32_t _timestamp=0
                                 )
 {
  /// get an instance of our device
  QKinect *kinect=QKinect::instance();
  /// then fill the video buffer
  kinect->grabVideo(_video, _timestamp);
 }
These methods grab the instance of the class so we can access its methods, then call the grab methods to actually fill in the buffers. The simplest of these is the grabVideo method, shown below
void QKinect::grabVideo(
                         void *_video,
                         uint32_t _timestamp
                       )
{
 // lock our mutex and copy the data from the video buffer
 QMutexLocker locker( &m_mutex );
 uint8_t* rgb = static_cast<uint8_t *>(_video);
 std::copy(rgb, rgb+FREENECT_VIDEO_RGB_SIZE, m_bufferVideo.begin());
 m_newRgbFrame = true;
}
This method is made thread safe by using QMutexLocker. This object is passed a mutex which it locks; when the locker falls out of scope it automatically unlocks the mutex as it is destroyed. This makes life a lot easier, as we don't have to write an unlock for each exit point of the method. libfreenect passes the frame data to the _video pointer, and we just copy it to our buffer ready for the other subsystems to use.
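Python's context managers give the same scope-based locking; this sketch (not from the post, class and field names are made up) shows the equivalent of QMutexLocker around a frame copy:

```python
import threading

class VideoBuffer:
    """Sketch of scope-based locking around a frame copy (illustrative)."""
    def __init__(self, size):
        self._mutex = threading.Lock()
        self._data = bytearray(size)
        self.new_frame = False

    def grab(self, frame):
        # the with-block acquires the mutex and releases it on every
        # exit path, just as QMutexLocker does when it leaves scope
        with self._mutex:
            self._data[:] = frame
            self.new_frame = True
```

Even if the copy raised an exception, the lock would still be released, which is exactly the property that saves us writing an unlock at every exit point.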

The depth callback is more complex, as I grab the data in several formats. The easiest two just fill buffers with the raw 16-bit data, and with the raw data compressed to 8 bits (this is not quite correct and is likely to go soon). There is also code from the sample apps to convert the depth into a gamma-corrected RGB buffer: this gives red for objects closest to the camera, then green, then blue with distance. This is useful for visualising the depth data.
void QKinect::grabDepth(
                         void *_depth,
                         uint32_t _timestamp
                       )
{
// this method fills all the different depth buffers at once
// modified from the sample code glview and cppview.cpp
/// lock our mutex
QMutexLocker locker( &m_mutex );
// cast the void pointer to the uint16_t the data is actually in
uint16_t* depth = static_cast<uint16_t *>(_depth);

// now loop and fill data buffers
for( unsigned int i = 0 ; i < FREENECT_FRAME_PIX ; ++i)
{
 // first our two raw buffers the first will lose precision and may well
 // be removed in the next iterations
 m_bufferDepthRaw[i]=depth[i];
 m_bufferDepthRaw16[i]=depth[i];
 // now look up the gamma corrected value for this depth
 int pval = m_gamma[depth[i]];
 // get the lower byte
 int lb = pval & 0xff;
 // shift right by 8 and determine which colour band to fill the
 // array with based on the value
 switch (pval>>8)
 {
 case 0:
  m_bufferDepth[3*i+0] = 255;
  m_bufferDepth[3*i+1] = 255-lb;
  m_bufferDepth[3*i+2] = 255-lb;
  break;
 case 1:
  m_bufferDepth[3*i+0] = 255;
  m_bufferDepth[3*i+1] = lb;
  m_bufferDepth[3*i+2] = 0;
  break;
 case 2:
  m_bufferDepth[3*i+0] = 255-lb;
  m_bufferDepth[3*i+1] = 255;
  m_bufferDepth[3*i+2] = 0;
  break;
 case 3:
  m_bufferDepth[3*i+0] = 0;
  m_bufferDepth[3*i+1] = 255;
  m_bufferDepth[3*i+2] = lb;
  break;
 case 4:
  m_bufferDepth[3*i+0] = 0;
  m_bufferDepth[3*i+1] = 255-lb;
  m_bufferDepth[3*i+2] = 255;
  break;
 case 5:
  m_bufferDepth[3*i+0] = 0;
  m_bufferDepth[3*i+1] = 0;
  m_bufferDepth[3*i+2] = 255-lb;
  break;
 default:
  m_bufferDepth[3*i+0] = 0;
  m_bufferDepth[3*i+1] = 0;
  m_bufferDepth[3*i+2] = 0;
  break;
 }
}
// flag we have a new frame
m_newDepthFrame = true;
}

Using the class
Now the class has been created it is quite simple to use. First we grab the instance of the class as follows
QKinect *m_kinect;

m_kinect=QKinect::instance();

// now connect a QSpinBox to the angle

QDoubleSpinBox *angle = new QDoubleSpinBox(this);
angle->setMaximum(30.0);
angle->setMinimum(-30.0);
angle->setSingleStep(1.0);
QObject::connect(angle,SIGNAL(valueChanged(double)),m_kinect,SLOT(setAngle(double)));

Drawing the image buffer
The easiest way to draw the data from one of the image buffers is to use an OpenGL texture and attach it to a quad. There are many ways to do this; usually I would use retained-mode OpenGL and a series of shaders to make things work faster on the GPU, but I wanted this system not to require too many external libraries (such as my ngl:: lib which I use for other examples), so I've used a simple immediate-mode GL version for this example.

// first create the gl texture (called once)
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, & m_rgbTexture);
glBindTexture(GL_TEXTURE_2D, m_rgbTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);


// bind data and draw (per frame)
QKinect *kinect=QKinect::instance();
if(m_mode ==0)
{
 kinect->getRGB(m_rgb);
}
else if(m_mode == 1)
{
 kinect->getDepth(m_rgb);
}
glBindTexture(GL_TEXTURE_2D, m_rgbTexture);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, &m_rgb[0]);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);

glBegin(GL_TRIANGLE_FAN);
 glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
 glTexCoord2f(0, 0); glVertex3f(0,0,0);
 glTexCoord2f(1, 0); glVertex3f(640,0,0);
 glTexCoord2f(1, 1); glVertex3f(640,480,0);
 glTexCoord2f(0, 1); glVertex3f(0,480,0);
glEnd();
The following movie shows the reference implementation in action
video

You can grab the source code from my website here.

Thursday, 9 December 2010

Render Farm Design

I'm in the process of re-designing the main render farm infrastructure for the NCCA, so thought I would post the initial design considerations as part of the ongoing post of design examples for the students.

Outline
At its simplest level a render farm is a method of distributing rendering tasks amongst a series of processors, where each processor works on a job (usually the rendering of a single frame). There are many commercial solutions to this, each with different advantages and disadvantages; a discussion of these is really outside the scope of this post, but the decision was made to write our own flexible solution rather than use an "out of the box" one.

The original farm is described here; the new version will extend this basic idea and add new features, as well as being more extensible to meet the needs of different types of rendering and simulation.

Basic System Outline
The basic system is a homogeneous collection of networked machines, each of which has the relevant rendering software installed. To this, a series of transparent network-attached storage volumes is available. As this is a Unix-based system we don't need to worry about drive letters etc., just that a fully qualified path is visible to the render software and can be accessed.

The basic system looks like this

The basic process involves exporting a single renderable file per frame. This is easy for RenderMan and Houdini, as we can generate rib and ifd files for prman and mantra respectively. However, for Maya there are problems, as the Maya batch render works on a single Maya file and causes problems with file access latency. This could be solved by exporting a single mental ray file per frame, but we don't at present have standalone mental ray, so for now the solution covers prman and mantra.

Once these files are generated, they may be submitted to the farm for rendering. To allow multiple machines to render these files we need a centralised repository for the user information as well as the location of the data. For this we use a MySQL database, as it provides a good open source solution for querying and collating the data and is easy to interface with C++, Python and Qt, which are our main development environments.

Submission Process
Renders need to be submitted in batches, where a batch is a collection of frames to be rendered. The user may prioritise these batches and pause them. There are also options to send a mail when finished and to create a small movie of the frames.

Other options will be added at a later date, for example to examine frames and stop a batch if more than 3 frames are all black or all white (usually because lights are not enabled etc).
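As a sketch of the kind of check intended (hypothetical, not the real farm code; it works on flat sequences of 8-bit pixel values rather than image files):

```python
def frame_is_flat(pixels, tolerance=2):
    """True when every 8-bit value is near 0 (all black) or near 255
    (all white); `pixels` is any flat sequence of byte values."""
    return all(p <= tolerance for p in pixels) or \
           all(p >= 255 - tolerance for p in pixels)

def should_stop_batch(frames, limit=3):
    # stop once more than `limit` frames come back flat, which usually
    # means lights are not enabled in the scene
    flat = sum(1 for f in frames if frame_is_flat(f))
    return flat > limit
```

In practice the check would run on each frame as it lands on the storage, so a broken batch is killed early rather than burning farm time.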

Output from the render and any errors will also be logged so a user may investigate any errors from the render etc. 

There will be a PyQt application to do the submission and management of the users renders as well as a web front end for the diagnostics.

Each submitted file will be checked to ensure the same frame is not submitted multiple times; this is done by calculating the MD5 sum of the file and using it as a unique key in the database.
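The checksum side of this can be sketched with Python's hashlib (the chunked read is my choice here, so large rib files need not fit in memory):

```python
import hashlib

def md5sum(path, chunk=65536):
    """MD5 hex digest of a file, read in fixed-size chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

Declaring the digest column UNIQUE in the table then lets the database itself reject a resubmitted frame at insert time.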

Standard Unix username and group identifiers are used for user identification, so a user must be logged in to submit and manage frames, and thus can only manage their own jobs. Other Unix tools will also be used to send mail (with the email address extracted from the yp database).

Load Balancing and Scheduling
The system schedules jobs based on a number of criteria: initially the user with the least render time and least number of jobs is selected. After this the priority of the batches is considered, with the highest priority batch being selected first (0 being highest and 99 lowest). Within each batch, jobs are also ordered by output frame number (Frame.0001.rib, Frame.0002.rib etc.).
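That ordering can be sketched as a compound selection (field names here are hypothetical, not the real schema):

```python
def pick_next_job(users, batches):
    """Select the next frame to render. `users` holds render_time,
    job_count and name; `batches` holds user, priority (0 is best)
    and a list of jobs, each with a frame number."""
    # fairest user first: least accumulated render time, then fewest jobs
    user = min(users, key=lambda u: (u["render_time"], u["job_count"]))
    # that user's highest-priority batch (0 is highest, 99 lowest)
    mine = [b for b in batches if b["user"] == user["name"]]
    batch = min(mine, key=lambda b: b["priority"])
    # lowest output frame number within the chosen batch
    return min(batch["jobs"], key=lambda j: j["frame"])
```

In the real system this selection would be a database query rather than in-memory sorting, but the tie-break order is the same.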

Further refinement of the selection can be based on groups such that year groups and individual course work deadlines may be prioritised.

The main aim of this process is to remove the need for an overall render wrangler role: jobs will be selected in a fair manner, with the overall load averaging out. These values will be reset at regular intervals so as not to penalise early use of the farm for test renders etc.

Render Client
The render client on each of the worker machines will have a selection of roles, determined via a table in the database. For example, the old render farm blades are only 32-bit but can still be used for compositing, so only compositing jobs will be passed to these machines.

Each desktop machine will monitor its load and whether a user is logged in, and start rendering if the load is below certain criteria. If a user logs into a machine whilst it is rendering, the job will be lowered in priority once the user's tasks reach a certain CPU / memory load.
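A minimal sketch of that client-side gate, assuming a Unix host with os.getloadavg available (the threshold is a made-up value, not from the real client):

```python
import os

def ok_to_render(max_load=1.0, user_logged_in=False):
    """Only start a render when nobody is at the console and the
    1-minute load average is under the threshold."""
    if user_logged_in:
        return False
    one_minute = os.getloadavg()[0]
    return one_minute < max_load
```

The real client would poll this periodically and also re-nice a running render when a user logs in, rather than only gating at start-up.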

It will also be possible to remove batches of machines from the farm by disabling the node in the client list.

At present, for most software, we have enough command line render licenses to cope with all the machines in the groups, so license allocation will not be an issue for now, but it needs to be considered in the larger design picture at some later stage.

Initial Table Designs
The following scans show my initial design sketches
The scans show the main outline of each of the tables and some of the data types. More importantly, we can see the relationships between the tables, as well as some areas which I have already normalised. Whilst not fully normalised, this is enough for the speed and access to data we need; further normalisation will be investigated once the initial system is developed.

Database Development
To develop the database the excellent MySQL Workbench has been used. This allows the visual development of database tables and the forward / reverse engineering of databases. The initial tables from the design above are shown in the following diagram




The workbench tool will generate SQL scripts for the creation of the tables, for example the following script produces the userInfo table
CREATE  TABLE IF NOT EXISTS `render`.`userInfo` (
  `uid` INT NOT NULL ,
  `numRenders` INT NULL ,
  `renderTimes` TIME NULL ,
  `lastRender` TIMESTAMP NULL ,
  `userName` VARCHAR(45) NULL ,
  `loginName` VARCHAR(45) NULL ,
  `course` VARCHAR(45) NULL ,
  `gid` INT NULL ,
  PRIMARY KEY (`uid`) )
ENGINE = InnoDB;

It is important to note in the above SQL that we are using the InnoDB engine as this is the only one that supports foreign keys in MySQL.

The data for this table is generated from the Unix yp system using a simple Python script. First we use ypcat passwd to grab the file, which is in the following format
jmacey:x:12307:600:Jonathan Macey:/home/jmacey:/bin/bash
The first entry is the login user name, the 3rd the user ID (UID), the 4th the group ID (GID) and the 5th the long username.

The group values can be extracted from the group file which has the following format

cgstaff:x:600:jmacey,xxx,xxx
From this we use the 3rd entry as the key and the 1st as the value, giving a lookup from numeric GID to the text group name. This is shown in the following Python script.

import MySQLdb
# import os for dir list etc
import os, commands, getopt, sys


def usage():
 print "AddUsers : add users to the renderfarm db"
 print "(C) Jon Macey jmacey@bmth.ac.uk"
 print "_______________________________________"
 print "-h --help display this message"
 print "Usage AddUsers [group file] [userfile]"
 print "Where userfile is the output of ypcat passwd"
 print "searches for username and UID from  this file and adds it to the db"

def readGroups(_filename) :
 print "reading Group file creating dictionary"
 groupsFile=open(_filename,'r')
 # read in all the data
 data=groupsFile.readlines()
 #now read through the file and try and  find the UID and username
 groups={}
 for line in data :
  line=line.split(":")
  groups[line[2]]=line[0]
 return groups

def addUsers(filename,groups) :
 # here we create a connection to the DB
 DBADDR=os.environ.get("RENDERDB")
 if DBADDR == None :
  print "RenderDB is not set please set to master server"
  sys.exit()
 print DBADDR
 DBConnection =  MySQLdb.connect(host=DBADDR, user="RenderAdmin", passwd="xxxxxxx",db="Render")
 # now we create a cursor to the table so we can insert an entry
 cursor = DBConnection.cursor()

 # so we open the file for reading
 ypfile=open(filename,'r')
 # read in all the data
 data=ypfile.readlines()
 #now read through the file and try and  find the UID and username
 for line in data :
  line=line.split(":")
  #                      0  1  2     3   4
  # data in the form jmacey:x:12307:600:Jonathan Macey:/home/jmacey:/bin/bash
  loginName=line[0];
  uid=int(line[2])
  gid=int(line[3])
  userName=line[4]

  query="insert into userInfo (uid,numRenders,renderTimes,lastRender,userName,loginName,course,gid) values (%d,0,\"00:00:00\",\"00:00:00\",\"%s\",\"%s\",\"%s\",%d);" %(uid,userName,loginName,groups.get(line[3]),gid)
  cursor.execute(query)
  DBConnection.commit()
  # close the DB connection
 cursor.close ()
 DBConnection.close ()
# end of QueryJobs function


class Usage(Exception):
    def __init__(self, msg):
        self.msg = msg


def main(argv=None):
 if argv is None:
  argv = sys.argv
 try:
  try:
   opts, args = getopt.getopt(argv[1:], "h", ["help"])
  except getopt.error, msg:
   raise Usage(msg)
 except Usage, err:
  print >>sys.stderr, err.msg
  print >>sys.stderr, "for help use --help"
  return 2
 for opt, arg in opts:
  if opt in ("-h", "--help"):
   usage()
   sys.exit()

 if len(sys.argv) != 3 :
   print "no group and password file passed"
   sys.exit()
 groups=readGroups(argv[1])
 print groups
 addUsers(argv[2],groups)
if __name__ == "__main__":
    sys.exit(main())


The first pass loads the group file and generates a Python dictionary: the key is the numeric group ID and the value the text group name. The next pass reads the users file and inserts the data into the userInfo table ready for use.

The next stage of the process is to develop a submission script. This will be the subject of the next post in this area, where I will also go into more detail of the Python SQL interface and PyQt database interaction.

Tuesday, 7 December 2010

Some (very) rough designs

I've been asked to give some more examples for the assignment design hand-in. The following two scans from my sketch book show some basic class designs for a scene element of NGL; I never implemented it, but it gives a bit of an idea.

Excuse the handwriting but it was only for my consumption really!

The next 5 scans are from an initial design for a Free Form Deformation program based on the linked paper (including the deliberate mathematical error in the paper, which took ages to debug and correct). This was done as part of the MSc Computer Animation course for the CGI Techniques unit; I will update and release the code at some stage.


The next block of stuff is rough designs for my programmable flocking system, which eventually became my MSc project. These are the main initial design sketches as well as the design of the programmable brain; I've omitted some pages of crap but most of it is here.




There you go, a quick view inside the chaotic mind of me designing. I'm in the process of doing the main re-design of the render farm interfaces, so will start posting some of that later this week, along with a new set of coding examples based on using MySQL.







Wednesday, 1 December 2010

Maya Batch Renderer GUI Using PyQT

The code for this demo is now on github.

The Maya batch renderer is a command line tool for rendering frames from a Maya file. It has many command line options, which can be listed by running the command Render -h. From this output the following elements have been identified as most useful for the basic batch render dialog.


In addition to this we can query the different renderer options and get the following list
We are going to design a user interface using Qt and Python to generate the command line arguments shown above, and give the user the ability to choose the files, project directory and output directory for the program.
The program will also report the output of the batch renderer in a window and give the user the ability to stop the batch render at any stage. The main UI is shown next.










Batch Render Dialog
We are going to use Qt designer to develop the application user interface, first open up designer (/opt/qtsdk/qt/bin/designer in the Linux studios) and choose a Dialog without buttons as shown
Select the dialog that’s created and set the object properties objectName to mainDialog and windowTitle to Batch Render as shown
We are now going to add a button to the window and then set the layout manager before we create the rest of the UI.
First drag a button anywhere on the screen, then change the name of the button to m_chooseFile and the button text to Choose File as shown below.
At present you are free to move any of the UI components within the form; however, once the form is re-sized none of the components will re-size correctly. To enable this we need to add a layout manager to the form. This is done by right clicking on the dialog, and in this case we are going to select the "Layout on Grid" option, which should result in the following
Now as we add components to the UI blue areas will appear as slots to add to the grid, for the next stage we are going to add a “QLineEdit” component next to the button, and name it m_fileName we will also tick the read-only tickbox.
We are now going to replicate this process and add 2 more QLineEdit and Button Combinations as shown below
Note the Names of each of the components and set them to the correct names, and set the read only flag for each of the text components.

Next we are going to add a group box and set it to the following size and values
Next we add another button which will need to be spaced to fit into the correct size
First add the button and name it m_batchRender as shown 
Then add a horizontal spacer to make the button fit in the correct area (you may have to add the spacer above then move the button into place)
We are now going to add the rest of the controls into the group box. We need to first add a layout to the group box; this is done by choosing the Grid Layout as shown here and scaling it to fit the group box
Now add the following labels and spin boxes
The spin boxes from left to right are called m_startFrame, m_endFrame, m_byFrame and m_pad.

We need to set some default values and ranges for each as shown
We are now going to add a second row to the group box first a label and a combo box which we will call m_renderer as shown
By double clicking on the combo box we can get the edit dialog and using the + button add the following text values for the different renderers.

Next we will add a text edit called m_outputFileName and a combo box called m_extension, and complete the row as shown.



For the final element we are going to add a text edit so we can capture the output of the batch render. This will be called m_outputWindow, and we need to set the read only flag in the property editor.
The final window should look like the following 
Using PyQt
The UI file generated by Qt Designer is a simple XML file containing the layouts of the different elements. We can convert this into source code using one of the UI compilers; in this case we are developing a Python application, so we will use the pyuic4 compiler with the following command line.

pyuic4 BatchRenderUI.ui -o BatchRender.py

This produces a Python file for the UI elements, which we will use within our own class to create the program.
Basic Program Operation
The program will first check that MAYA_LOCATION is set in the environment; if it is not, we need to tell the user to set it. This is so we can determine the correct location of the Render command in MAYA_LOCATION/bin. The basic Python code to do this is as follows

#!/usr/bin/python
from PyQt4 import QtCore, QtGui
from BatchRenderUI import Ui_mainDialog

import os,shutil
import fileinput



if __name__ == "__main__":
 import sys
 app = QtGui.QApplication(sys.argv)

 ResourcePath=os.environ.get("MAYA_LOCATION")

 # see if the ResourcePath is set and quit if not
 if ResourcePath == None :
  msgBox=QtGui.QMessageBox()
  msgBox.setText("The environment variable MAYA_LOCATION not set ")
  msgBox.show()
  sys.exit(app.exec_())
 else :
  print "ready"
  sys.exit(app.exec_())



If the environment variable is not set we will get the following dialog box
To set the location we need to add export MAYA_LOCATION=/usr/autodesk/maya2011-x64/ to our .bashrc file.

UI Class
We are now going to develop a UI class to contain the UI developed using designer and then extend it to have our own functionality and methods for the program.
The basic outline of the class init method is as follows
class BatchRender(Ui_mainDialog):
 def __init__(self, _mayaPath=None):

  # @brief the name of the maya file to render
  self.m_mayaFile=""
  # @brief the name of the maya project directory
  self.m_mayaProject=""
  # @brief the optional name of the output directory
  self.m_outputDir=""
  # @brief the main ui object which contains our controls
  self.m_ui=Ui_mainDialog()
  # @brief we will use this to thread our render output
  self.m_process=QtCore.QProcess()
  # @brief a flag to indicate if we are rendering or not
  self.m_rendering=False
  # @brief the batch render command constructed from the maya path
  self.m_batchRender="%sbin/Render " %(_mayaPath)
  # now we call the setup UI to populate our gui
  self.m_ui.setupUi(MainDialog)

  print self.m_batchRender

This constructs the UI by calling the Ui_mainDialog constructor created by the pyuic4 command, and then calls the setupUi method, which is automatically generated by the pyuic compiler.

We can now update our main function to construct this object and build our dialog
if __name__ == "__main__":
 import sys
 app = QtGui.QApplication(sys.argv)

 ResourcePath=os.environ.get("MAYA_LOCATION")

 MainDialog = QtGui.QDialog()
 ui = BatchRender(ResourcePath)

#see if the ResourcePath is set and quit if not
 if ResourcePath == None :
  msgBox=QtGui.QMessageBox()
  msgBox.setText("The environment variable MAYA_LOCATION not set ")
  msgBox.show()
  sys.exit(app.exec_())

 else :

  MainDialog.show()
  sys.exit(app.exec_())

Connecting Buttons to Methods

Qt uses the signals and slots mechanism to connect UI component actions to methods within our classes. We must explicitly connect these elements for them to work. The following code section is from the __init__ method of the BatchRender class and shows this in action.
# here we connect the controls on the UI to the methods in the class

QtCore.QObject.connect(self.m_ui.m_chooseFile, QtCore.SIGNAL("clicked()"), self.chooseFile)
QtCore.QObject.connect(self.m_ui.m_chooseProject, QtCore.SIGNAL("clicked()"), self.chooseProject)
QtCore.QObject.connect(self.m_ui.m_chooseOutputDir, QtCore.SIGNAL("clicked()"), self.chooseOutput)
QtCore.QObject.connect(self.m_ui.m_batchRender, QtCore.SIGNAL("clicked()"), self.doRender)
QtCore.QObject.connect(self.m_process, QtCore.SIGNAL("readyReadStandardOutput()"), self.updateDebugOutput)
QtCore.QObject.connect(self.m_process, QtCore.SIGNAL("readyReadStandardError()"), self.updateDebugOutput)
QtCore.QObject.connect(self.m_process, QtCore.SIGNAL("started()"), self.updateDebugOutput)
QtCore.QObject.connect(self.m_process, QtCore.SIGNAL("error(QProcess::ProcessError)"), self.error)
QtCore.QObject.connect(self.m_process, QtCore.SIGNAL("finished(int,QProcess::ExitStatus)"), self.finished)

The m_process attribute has a number of signals to indicate the state of the process being run; these will be outlined later.

The Render process

For the batch render to run we must have a minimum of a filename and project directory set. We can check these values by seeing if the textEdit fields for each of them are empty or not.

As part of this process we will also check that the start frame value is less than the end frame value by querying the two spin boxes. The basic code for this is shown below
def doRender(self) :
  if self.m_rendering == True :
    self.m_ui.m_batchRender.setText("Batch Render");
    # stop the batch render process
    self.m_process.kill()
    # clear the output window
    self.m_ui.m_outputWindow.clear()
    self.m_rendering = False
  else :
    """ first we are going to check that we have the correct settings """
    if self.m_mayaFile =="" :
      self.errorDialog("no maya file set")
      return
    if self.m_mayaProject=="" :
      self.errorDialog("no Project directory set")
      return
    if self.m_ui.m_startFrame.value() >= self.m_ui.m_endFrame.value() :
      self.errorDialog("start frame must be less than end frame")
      return
  

If any of these checks fail we pop up a generic error dialog using the following function
def errorDialog(self,_text) :
  QtGui.QMessageBox.about(None,"Warning", _text)

If the criteria above are met we can construct the batch render command string. This is done by building up different elements for each of the argument flags as separate strings as follows.
print "Doing render"
self.m_ui.m_batchRender.setText("stop Batch Render");
# first we need to build up the render string
renderString=self.m_batchRender
frameRange="-fnc name.#.ext -s %d -e %d -b %d -pad %d " %(self.m_ui.m_startFrame.value(),
                                           self.m_ui.m_endFrame.value(),
                                           self.m_ui.m_byFrame.value(),
                                           self.m_ui.m_pad.value())
outputDir=""
if self.m_ui.m_outputDir.text() != "" :
  outputDir="-rd %s/ " %(self.m_ui.m_outputDir.text())
outputName=""
if self.m_ui.m_outputFileName.text() !="" :
  outputName="-im %s "%(self.m_ui.m_outputFileName.text())

extension=""
if self.m_ui.m_extension.currentIndex()!=0 :
  extension=" -of %s " %(self.m_ui.m_extension.currentText())

sceneData="-proj %s %s" %(self.m_mayaProject,self.m_mayaFile)

Renderers={0:"default",1:"mr",2:"file",3:"hw",4:"rman",5:"sw"}
rendererString="-renderer %s " %(Renderers.get(self.m_ui.m_renderer.currentIndex()))

arguments=frameRange+outputName+extension+rendererString+outputDir+sceneData;
commandString=renderString+arguments
self.m_ui.m_outputWindow.setText(commandString)

The combo box for the file extensions contains the correct values for the command argument, so these values may be used directly via the .currentText() method of the combo box.

However the renderer combo box text is not in the correct form, so we make a dictionary using an integer index as the key and the correct renderer name as the value; we then use the currentIndex() value as the key and the dictionary's get() method to retrieve the correct string.
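As a minimal standalone sketch of this lookup pattern (the index order mirrors the combo box above; the fallback value for unknown indices is my own addition, not part of the original tool):

```python
# Map combo box indices to the renderer names the Render command expects.
renderers = {0: "default", 1: "mr", 2: "file", 3: "hw", 4: "rman", 5: "sw"}

def renderer_flag(index):
    # dict.get lets us supply a default instead of raising a KeyError
    # for an out-of-range index (a safety assumption on my part)
    return "-renderer %s " % renderers.get(index, "default")

print(renderer_flag(1))   # -renderer mr
print(renderer_flag(99))  # unknown index falls back to the default
```

Using get() with a default keeps the command string valid even if the combo box and the dictionary ever get out of sync.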

QProcess

We wish to start the batch rendering as a separate process from the rest of the system. This is so that the UI will still respond to commands whilst the batch rendering process is running, and we can also update the debug window with the text from the batch render process.

When the class is constructed we create a QProcess object called m_process, this can then be started with the command line we created above using the following code.
self.m_process.start(commandString)
self.m_rendering = True
Once the process is started it will emit different signals which we can capture and respond to. We connected these signals in the earlier code; the main one for the output of the batch render data is as follows
def updateDebugOutput(self) :

  data=self.m_process.readAllStandardOutput()
  s=QtCore.QString(data);
  self.m_ui.m_outputWindow.append(s)

  data=self.m_process.readAllStandardError()
  s=QtCore.QString(data);
  self.m_ui.m_outputWindow.append(s)

The Maya batch renderer outputs most of the debug information on the stderr stream, but some is also sent to the stdout stream, so both streams are read and the data returned is converted to a string and appended to the output window.
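Outside Qt, the same merge-both-streams idea can be sketched with Python's standard subprocess module (this is an illustration of the principle, not the code used in the tool):

```python
import subprocess
import sys

# Spawn a child process that writes to both streams, as the Maya batch
# renderer does, then capture both outputs.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stderr.write('err')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
# append both streams, mirroring what updateDebugOutput does
log = out.decode() + err.decode()
print(log)
```

Reading only stdout here would silently drop most of the renderer's diagnostics, which is exactly why updateDebugOutput handles both signals.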

The full code of this program can be downloaded from https://github.com/NCCA/MayaBatchRender

GLSL Shader Manager design Part 3

(for some more refinements see this post)

In the previous post I discussed the design of the ShaderProgram class; in this final instalment I will discuss the overall ShaderManager class and its basic usage.

The basic design consideration for the manager is that it will allow the creation of Shaders and ShaderPrograms independent of each other; these will be stored by name as a std::string, and basic methods may be called to configure and load them.

It will also have the ability to access the ShaderPrograms via a lookup using the [] operator, passing in the name of the program.

The class diagram is as follows


The std::map containers for the shader programs and Shaders contain pointers to the classes so each of them can live as separate entities and Shader Programs may share multiple shaders.

Construction of the class will also create a nullProgram object, as discussed in the previous post. The motivation for this is to allow the return of an object which behaves as glUseProgram(0) but whose methods can still be called without crashing.
ShaderManager::ShaderManager()
{
 m_debugState=true;
 m_nullProgram = new ShaderProgram("NULL");
}
To create a shader program we use the following method to construct a new ShaderProgram class

void ShaderManager::createShaderProgram(std::string _name)
{
 std::cerr<<"creating empty ShaderProgram "<<_name.c_str()<<"\n";
 m_shaderPrograms[_name]= new ShaderProgram(_name);
}
Next we can add shaders using the following methods
void ShaderManager::attachShader(
                                  std::string _name,
                                  SHADERTYPE _type
                                )
{
  m_shaders[_name]= new Shader(_name,_type);
}
At this stage the shader may not have any source attached to it, so we have a method to allow the loading of the shader source.

void ShaderManager::loadShaderSource(std::string _shaderName, std::string _sourceFile)
{
  std::map <std::string, Shader * >::const_iterator shader=m_shaders.find(_shaderName);
  // make sure we have a valid shader
 if(shader!=m_shaders.end() )
  {
    shader->second->load(_sourceFile);
  }
  else {std::cerr<<"Warning shader not known in loadShaderSource "<<_shaderName.c_str()<<"\n";}
}

Once the source is loaded to compile the Shader we do the following
void ShaderManager::compileShader(std::string _name)
{
  // get an iterator to the shaders
  std::map <std::string, Shader * >::const_iterator shader=m_shaders.find(_name);
  // make sure we have a valid shader
 if(shader!=m_shaders.end())
  {
    // grab the pointer to the shader and call compile
    shader->second->compile();
  }
  else {std::cerr<<"Warning shader not known in compile "<<_name.c_str()<<"\n";}
}

Next we need to attach the shader to a program. As both classes are contained in maps we have to search for both and then call the relevant methods, as shown in the following code

void ShaderManager::attachShaderToProgram(std::string _program,std::string _shader)

{

  // get an iterator to the shader and program
  std::map <std::string, Shader * >::const_iterator shader=m_shaders.find(_shader);
  std::map <std::string, ShaderProgram * >::const_iterator program=m_shaderPrograms.find(_program);

  // make sure we have a valid shader and program
 if(shader!=m_shaders.end() && program !=m_shaderPrograms.end())
  {
    // now attach the shader to the program
    program->second->attatchShader(shader->second);
    // now increment the shader ref count so we know how many references exist
    shader->second->incrementRefCount();

    if (m_debugState == true)
    {
      std::cerr<<_shader.c_str()<<" attached to program "<<_program.c_str()<<"\n";
    }
  }
  else {std::cerr<<"Warning can't attach "<<_shader.c_str() <<" to "<<_program.c_str()<<"\n";}
}

To overload the [] operator to work with a std::string we use the following
ShaderProgram * ShaderManager::operator[](const std::string &_programName)
{
  std::map <std::string, ShaderProgram * >::const_iterator program=m_shaderPrograms.find(_programName);
  // make sure we have a valid  program
 if(program!=m_shaderPrograms.end() )
  {
    return  program->second;
  }
  else
  {
    std::cerr<<"Warning Program not known in [] "<<_programName.c_str();
    std::cerr<<"returning a null program and hoping for the best\n";
    return m_nullProgram;
  }
}
For brevity the rest of the methods can be seen in the source download here (lecture 8)

Using the ShaderManager
The file GLWindow.cpp demonstrates the use of the ShaderManager. We must first initialise GLEW if we are using Linux or Windows (Mac OS X is fine without it).

The following example program is based on the code here and is broadly OpenGL 3.x compatible; however, I use GLSL version 120 as the basis, as Mac OS X doesn't support anything higher at present (come on Apple!).

We use very basic shaders that allow the basic vertex and colour attributes to be named in the client program and the shader.
#version 120
attribute vec3 inPosition;
attribute vec3 inColour;
varying vec3 vertColour;

void main()
{
 gl_Position = vec4(inPosition, 1.0);
 vertColour = inColour;
}


#version 120

varying vec3 vertColour;

void main()
{
  gl_FragColor = vec4(vertColour,1.0);
}
Generating Vertex data
The vertex data is created using a vertex buffer object and a vertex array object; these will then be bound to attributes in the shaders.

void GLWindow::createTriangle()
{
 // First simple object
 float* vert = new float[9]; // vertex array
 float* col  = new float[9]; // color array

 vert[0] =-0.3; vert[1] = 0.5; vert[2] =-1.0;
 vert[3] =-0.8; vert[4] =-0.5; vert[5] =-1.0;
 vert[6] = 0.2; vert[7] =-0.5; vert[8]= -1.0;

 col[0] = 1.0; col[1] = 0.0; col[2] = 0.0;
 col[3] = 0.0; col[4] = 1.0; col[5] = 0.0;
 col[6] = 0.0; col[7] = 0.0; col[8] = 1.0;

 // Second simple object
 float* vert2 = new float[9]; // vertex array

 vert2[0] =-0.2; vert2[1] = 0.5; vert2[2] =-1.0;
 vert2[3] = 0.3; vert2[4] =-0.5; vert2[5] =-1.0;
 vert2[6] = 0.8; vert2[7] = 0.5; vert2[8]= -1.0;

 // Two VAOs allocation
  glGenVertexArrays(2, &m_vaoID[0]);

 // First VAO setup
  glBindVertexArray(m_vaoID[0]);

 glGenBuffers(2, m_vboID);

 glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
 glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
  m_shaderManager["Simple"]->vertexAttribPointer("inPosition",3,GL_FLOAT,0,0);
  m_shaderManager["Simple"]->enableAttribArray("inPosition");
 glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);

 glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
  m_shaderManager["Simple"]->vertexAttribPointer("inColour",3,GL_FLOAT,0,0);
  m_shaderManager["Simple"]->enableAttribArray("inColour");

 // Second VAO setup
  glBindVertexArray(m_vaoID[1]);

 glGenBuffers(1, &m_vboID[2]);

 glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
  checkGLError(__FILE__,__LINE__);
 glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert2, GL_STATIC_DRAW);
 glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
 glEnableVertexAttribArray(0);
  glBindVertexArray(0);

 delete [] vert;
 delete [] vert2;
 delete [] col;
}
Next we load and build the Shader program using the ShaderManager class
void GLWindow::initializeGL()
{
  ngl::NGLInit *init = ngl::NGLInit::instance();
  init->initGlew();
  glClearColor(0.4f, 0.4f, 0.4f, 1.0f);

  m_shaderManager.createShaderProgram("Simple");

  m_shaderManager.attachShader("SimpleVertex",VERTEX);
  m_shaderManager.attachShader("SimpleFragment",FRAGMENT);
  m_shaderManager.loadShaderSource("SimpleVertex","shaders/Vertex.vs");
  m_shaderManager.loadShaderSource("SimpleFragment","shaders/Fragment.fs");

  m_shaderManager.compileShader("SimpleVertex");
  m_shaderManager.compileShader("SimpleFragment");
  m_shaderManager.attachShaderToProgram("Simple","SimpleVertex");
  m_shaderManager.attachShaderToProgram("Simple","SimpleFragment");


  m_shaderManager.bindAttribute("Simple",0,"inPosition");
  m_shaderManager.bindAttribute("Simple",1,"inColour");

  m_shaderManager.linkProgramObject("Simple");
  m_shaderManager["Simple"]->use();

  createTriangle();
}
You will notice before we link the shader program we assign the inPosition and inColour attributes to the attribute locations 0 and 1 respectively. This will be used in the drawing routine later.

void GLWindow::paintGL()
{

  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  m_shaderManager["Simple"]->use();

  glBindVertexArray(m_vaoID[0]);  // select first VAO
  glDrawArrays(GL_TRIANGLES, 0, 3); // draw first object

  glBindVertexArray(m_vaoID[1]);  // select second VAO
  m_shaderManager["Simple"]->vertexAttrib3f("inColour",1.0,0.0,0.0);
  //glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0); // set constant color attribute
  glDrawArrays(GL_TRIANGLES, 0, 3); // draw second object
}

Next Step
One of the initial goals of this design was a standalone version of a shader manager that people could use in their own projects, with very few external library dependencies (at present we only need OpenGL, GLEW and Qt, plus some string IO).

The next step is to integrate this into my existing ngl:: library and replace the existing ShaderManager class. This is going to happen very soon, and I will post an update on the integration and a fuller critique of the design once it has happened. And finally, a teapot!