
Chris is interested in further developing the type of artificial neural network known as CMAC and applying it to a wide variety of control problems. 

History

In 1972 James Albus invented the Cerebellar Model Articulation Controller, a computer algorithm designed to mimic the neural pathways he observed when dissecting a human cerebellum.  The cerebellum is the part of our brain that knows the right nerve signals to send to our muscles to accomplish the body movements we desire, i.e. it is the control system of the brain.  Later, Albus envisioned even more applications for his algorithm and renamed it the Cerebellar Model Arithmetic Computer, retaining the acronym CMAC.  He was slightly ahead of his time, as computer technology was not quite advanced enough to run the algorithm in a real-time controller.  In the 1980s Thomas Miller III, at the University of New Hampshire, brought the CMAC to the wider attention of the control systems and robotics communities by demonstrating successful experimental results, at first with an industrial robotic manipulator and later with a biped walking robot.  Since CMAC can both learn quickly and handle many inputs, it would appear to be superior to other types of neural networks for control systems applications.  However, CMAC was not widely embraced due to its tendency for overlearning, a problem that Chris has endeavoured to fix.

[Fig. 1: the offset arrays (layers) of a CMAC]

Explanation

The basic building block of an n-input CMAC is an n-dimensional array (or look-up table), where each array is divided into q quantizations per input.  The CMAC uses m offset arrays (or layers) to achieve the approximation and generalization properties of a neural network (Fig. 1 above).  Each hypercube cell in an array defines the domain of a basis function; the original CMAC used binary basis functions, but contemporary CMAC designs typically use basis functions that go to zero at the cell boundaries, i.e. triangular or spline (Fig. 2 below).  The CMAC is an associative memory and works in a similar fashion to radial-basis-function (RBF) networks: each input defines a dimension of a basis function, and a weighted sum of basis functions provides the output.  However, the CMAC avoids the "curse of dimensionality" found with RBF networks in on-line calculations by considering only the m activated cells, one per array, indexed by the input (rather than all mq^n cells) to calculate the output.  In addition, the CMAC uses a random-hash coding scheme to avoid allocating memory for all the cells, the vast majority of which will never be accessed as input trajectories move through high-dimensional space.

[Fig. 2: CMAC basis functions]
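To make the indexing concrete, here is a minimal Python sketch of a single CMAC lookup (all parameter values and names are illustrative choices for this sketch, not taken from the MATLAB code below): each layer quantizes the shifted input, evaluates a spline basis function within the activated cell, and hash-codes the cell into a small physical memory.

```python
import random

# Illustrative parameters (my choices, not from the MATLAB code below)
NUM_INPUTS = 2    # n: number of inputs (dimensions)
NUM_LAYERS = 8    # m: number of offset arrays (layers)
NUM_Q = 10        # q: quantizations per input
MEMSIZE = 512     # size of the physical memory targeted by hash coding

random.seed(0)
# one random offset per (input, layer), and a random hash table
OFFSET = [[random.random() for _ in range(NUM_LAYERS)] for _ in range(NUM_INPUTS)]
HASHTABLE = [1e6 * random.random() for _ in range(NUM_INPUTS * NUM_LAYERS * NUM_Q)]

def spline(h):
    """Quartic spline basis: zero at h = 0 and h = 1, peaking at h = 0.5."""
    return 16.0 * (h**2 - 2.0 * h**3 + h**4)

def cmac_lookup(x):
    """For x in [0,1]^n, return normalized basis values and memory locations.
    Only m cells are activated (one per layer) rather than all m*q^n."""
    basis, locations = [], []
    for j in range(NUM_LAYERS):
        b = 1.0
        hash_sum = 0.0
        for i in range(NUM_INPUTS):
            place = x[i] * (NUM_Q - 1) + OFFSET[i][j]
            cell = int(place)            # activated cell number on this dimension
            h = place - cell             # position within the cell, in [0, 1)
            b *= spline(h)               # build the multi-dimensional basis function
            hash_sum += HASHTABLE[cell + NUM_Q * j + NUM_Q * NUM_LAYERS * i]
        basis.append(b)
        locations.append(int(hash_sum % (MEMSIZE - 1)))  # hash into physical memory
    total = sum(basis)
    return [b / total for b in basis], locations

basis, locations = cmac_lookup([0.37, 0.81])
# the CMAC output would be: sum(b * memory[loc] for b, loc in zip(basis, locations))
```

Note that only NUM_LAYERS weights are read per lookup, which is what keeps the on-line cost independent of q^n.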

Code (Matlab)

global BETA NU
global NUM_LAYERS NUM_Q MEMSIZE NUM_INPUTS
global MAX_STATE MIN_STATE HASHTABLE OFFSET memory

initialize_cmac();   %see function below

time=0; %initialize the clock; choose time_step and length_of_simulation for your application

states=define_initial_state(); %function not provided here

for i=1:length_of_simulation

     time=time+time_step;

     desired_states=trajectory(time); %function not provided here

     inputs=[states; desired_states];

     training_error=calculate_error(states, desired_states); %function not provided here

     [basis_functions locations]=cmac(inputs); %see function below

     cmac_output=0;

     for j=1:NUM_LAYERS %calculate the output
         cmac_output=cmac_output+basis_functions(j)*memory(locations(j));
     end;

     states=simulation_or_hardware_one_time_step(states, cmac_output); %function not provided

     for j=1:NUM_LAYERS %update the activated weights
         weight=memory(locations(j));
         memory(locations(j))=weight+BETA*(basis_functions(j)*training_error-NU*weight);
     end;

end; %for i=1:length_of_simulation

function initialize_cmac()

    global BETA NU
    global NUM_LAYERS NUM_Q MEMSIZE NUM_INPUTS
    global MAX_STATE MIN_STATE HASHTABLE OFFSET memory

    BETA=1;  %adaptation/learning gain
    NU=0.01;  %if NU is too small, overlearning (weight drift and bursting) will occur

    NUM_LAYERS = 100;   %number of arrays or layers - usually somewhere between 50 and 500
    NUM_Q = 10;         %number of quantizations per input; keep 10 if using the 'rule' below
    MEMSIZE = 50000;    %size of physical memory - make as big as possible

    %rule of thumb: make the size of each CMAC dimension about 10 times the size of the...
    %...actual input range you foresee, if you are using NUM_Q=10

    MAX_STATE=[0.3 0.45 0.2 4.2];   %fill in your own appropriate values
    MIN_STATE=[-0.3 -0.45 -0.67 1.2]; %fill in your own appropriate values
    NUM_INPUTS = length(MAX_STATE);  %or try using fewer and see if performance improves

    % initialize static CMAC hash table array
    HASHTABLE = 1000000.0*rand(NUM_INPUTS*NUM_LAYERS*NUM_Q, 1);

    % choose CMAC array offsets - random is fine
    for i = 1:NUM_INPUTS
        for j = 1:NUM_LAYERS
            if j == 1
                OFFSET(i,j) = 0.5;
            else
                OFFSET(i,j) = rand();
            end
        end
    end

    memory=zeros(MEMSIZE,1);

end; %end of initialize_cmac

function [basis_functions locations]=cmac(original_inputs)

     global NUM_LAYERS NUM_Q MEMSIZE NUM_INPUTS
     global MAX_STATE MIN_STATE HASHTABLE OFFSET

     % normalize instantaneous input to [0,1] within the expected input range
     input = original_inputs(1:NUM_INPUTS) - MIN_STATE(1:NUM_INPUTS)'; %column vectors
     input = input./(MAX_STATE(1:NUM_INPUTS)' - MIN_STATE(1:NUM_INPUTS)');
     input = min(input, ones(NUM_INPUTS,1));
     input = max(input, zeros(NUM_INPUTS,1));

    for j = 1:NUM_LAYERS %number of arrays or layers
    
        basis_functions(j) = 1;
        add_locations = 0;
    
        for i = 1:NUM_INPUTS %number of inputs or dimensions
            place = input(i)*(NUM_Q - 1) + OFFSET(i, j); %locate position within quantized range
            cell_num = floor(place);   %locate activated cell number (avoid shadowing built-in cell)
            h = place - cell_num;      %position within activated cell on this dimension (between 0 and 1)
            func = 16*(h*h - 2.0*h*h*h + h*h*h*h); %spline that goes to zero at cell boundaries
            basis_functions(j) = basis_functions(j)*func; %creating multi-dimensional basis function
            index = cell_num + 1 + NUM_Q*(j-1) + NUM_Q*NUM_LAYERS*(i-1);
            add_locations = add_locations + HASHTABLE(index); %create summation for hash coding
        end; %for i = 1:NUM_INPUTS
    
        locations(j) = int32( floor( mod(add_locations,MEMSIZE-1)) ) + 1; %hash coding

    end %for each layer
    basis_functions= basis_functions./sum(basis_functions); %normalize basis functions

end; %cmac
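For readers who prefer Python, the MATLAB loops above can be condensed into a one-input sketch that trains a CMAC to approximate a sine wave, using the same spline basis, hash coding, and leaky weight update.  BETA and NU play the same roles as above; all other names and parameter values are my own choices for this sketch.

```python
import math
import random

# Sketch parameters (my own choices)
NUM_LAYERS, NUM_Q, MEMSIZE = 30, 10, 1000
BETA, NU = 0.5, 0.0   # learning gain; NU > 0 adds the leakage that fights weight drift

random.seed(1)
OFFSET = [random.random() for _ in range(NUM_LAYERS)]
HASHTABLE = [1e6 * random.random() for _ in range(NUM_LAYERS * NUM_Q)]
memory = [0.0] * MEMSIZE

def lookup(x):
    """One-input CMAC lookup; x is normalized to [0, 1]."""
    basis, locs = [], []
    for j in range(NUM_LAYERS):
        place = x * (NUM_Q - 1) + OFFSET[j]
        cell = int(place)
        h = place - cell
        basis.append(16.0 * (h**2 - 2.0 * h**3 + h**4))   # spline basis
        locs.append(int(HASHTABLE[cell + NUM_Q * j] % (MEMSIZE - 1)))
    total = sum(basis)
    return [b / total for b in basis], locs

def output(x):
    basis, locs = lookup(x)
    return sum(b * memory[l] for b, l in zip(basis, locs))

target = lambda x: math.sin(2.0 * math.pi * x)

# train with the same leaky update as the MATLAB code, sweeping over the input range
for sweep in range(50):
    for k in range(200):
        x = k / 199.0
        err = target(x) - output(x)
        basis, locs = lookup(x)
        for b, l in zip(basis, locs):
            memory[l] += BETA * (b * err - NU * memory[l])

max_err = max(abs(target(k / 199.0) - output(k / 199.0)) for k in range(200))
```

Starting from zero weights, the worst-case error is 1; after a few sweeps the CMAC tracks the sine closely, which illustrates the fast learning mentioned in the history section.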

 

Research

Chris has proposed several methods for preventing overlearning in CMAC (see publications below), which should allow CMAC to reach its full potential in a wide variety of control applications.  Researchers and engineers can use these methods to design adaptive controls for many types of nonlinear systems where the nonlinearities and parameters are unknown, and disturbances are significant.  Current research efforts are geared towards quantifying stability bounds for these methods and testing them in a wide variety of applications.

Publications

Here are some links to publications where CMAC was utilized and/or improved.

Macnab, C.J.B. and S. Razmi, Control of a flexible-joint robot using a stable adaptive introspective CMAC.
    IEEE Conference on Systems, Man, and Cybernetics
    (Banff), 2017.

Macnab, C.J.B., Giving CMAC basis functions a tail in order to prevent bursting in neural-adaptive control.
    IEEE Conference on Systems, Man, and Cybernetics
    (Banff), 2017.

Macnab, C.J.B. Modifying CMAC adaptive control with weight smoothing in order to avoid overlearning and bursting.
    Neural Computing and Applications
   
DOI: 10.1007/s00521-017-3182-6, 2017.

Macnab, C.J.B. CMAC control of a quadrotor helicopter using a stable robust weight-smoothing algorithm.
    IEEE Conference on Control Technology and Applications   
    (Hawaii), 2017.  

Macnab, C.J.B. Comments on “An intelligent CMAC-PD torque controller with anti-over-learning scheme for electric load simulator”.
    Transactions of the Institute of Measurement and Control
    DOI: 10.1177/0142331217692029, 2017.

Macnab, C.J.B. Preventing overlearning in CMAC by using a short-term memory.
    International Journal of Fuzzy Systems
    DOI: 10.1007/s40815-016-0275-9, 2017.

Macnab, C.J.B. Creating a CMAC with overlapping basis functions in order to prevent weight drift.
    Soft Computing
    Vol 21, No. 16, pp. 4593-4600, 2017.


Clark, T. and C.J.B. Macnab.  Cerebellar Model Articulation Controller with introspective voting weight updates for quadrotor application.
     IEEE Information Technology, Electronics and Mobile Communication Conference
    (Vancouver) pp. 1-7, 2016.

Macnab, C.J.B. Using RBFs in a CMAC to prevent parameter drift in adaptive control.
    Neurocomputing
   Vol. 205, pp. 45-52, 2016.

Masaud, K. and C.J.B. Macnab. Preventing bursting in adaptive control using an introspective neural network algorithm,
    Neurocomputing
    Vol. 136, pp. 300–314, 2014.

Mirghasemi, S., C.J.B.  Macnab, and A. Chu. Dissolved oxygen control of activated sludge bioreactors using neural-adaptive control.
    IEEE Symposium on Computational Intelligence in Control and Automation
    (Orlando, Florida) pp. 1 - 6, 2014.

Macnab, C.J.B. Stable neural-adaptive control of activated sludge bioreactors.
    American Control Conference
    (Portland, Oregon) pp. 2869 - 2874, 2014.

Macnab, C.J.B. An introspective algorithm for achieving low-gain high-performance robust neural-adaptive control,
    American Control Conference
    (Portland, Oregon) pp. 2893 - 2899, 2014.

Richert, D., K. Masaud, and C. J. B. Macnab. Discrete-time weight updates in neural-adaptive control.
    Soft Computing
    Vol. 17, No. 3, pp. 431-444, 2013.

Mahmoodi Takaghaj, S., C.J.B. Macnab, D. Westwick, I. Boiko, Neural-adaptive control of waste-to-energy boilers,
   Proc. IEEE Conf. on Decision and Control,
   (Maui), pp. 5367 - 5373, Dec. 2012.

Masaud, K., and C.J.B. Macnab, An introspective learning algorithm that achieves robust adaptive control of a quadrotor helicopter,
    AIAA infotech@aerospace,
    (Anaheim)  DOI: 10.2514/6.2012-2518  June 2012.

Masaud, K., and C.J.B. Macnab,  Stable fuzzy-adaptive control using an introspective algorithm,
    Proc. American Control Conference,
    (Montreal) pp. 5622 -5627, June 2012.

Nicol, C.,  C.J.B. Macnab, and A. Ramirez-Serrano. Robust adaptive control of a quadrotor helicopter.
     Mechatronics
     Vol. 21, No. 6, pp. 927-938, 2011.

Macnab, C.J.B., Neural-adaptive control using alternate weights.
     Neural Computing and Applications,
    Vol. 20, No. 2, pp. 211-231, 2011.

Coza, C., C. Nicol, C.J.B Macnab, and A. Ramirez-Serrano. Adaptive fuzzy control for a quadrotor helicopter robust to wind buffeting.
    Journal of Intelligent and Fuzzy Systems,
     Vol. 22, No. 5-6, pp. 267-283, 2011.


Macnab, C.J.B.  Improved output tracking of a flexible-joint arm using neural networks.
    Neural Processing Letters,
    Vol. 32 (2), pp. 201-218, 2010.

Richert, D., A. Beirami, and C.J.B. Macnab, Neural-adaptive control of robotic manipulators using a supervisory inertia matrix
    Proc. Int. Conf. Autonomous Robots and Agents
    (Wellington, New Zealand) pp. 634 - 663, 2009.  

Nicol, C., C.J.B. Macnab, and A. Ramirez-Serrano,  Robust neural-network control of a quadrotor helicopter.
    Proc. IEEE Canadian Conference on Electrical and Computer Engineering
   (Niagara Falls), pp. 1233 - 1238, 2008.


Macnab, C.J.B., Robust associative-memory adaptive control in the presence of persistent oscillations.
    Neural Information Processing Letters and Reviews,
    10(12), pp. 277-287, 2006.

Macnab, C.J.B., Direct neural-adaptive control with quantifiable bounds and improved performance.
    Int. Joint Conference on Neural Networks,
    (Vancouver),  pp. 4456- 4462, 2006.

Coza, C. and C.J.B. Macnab, A new robust adaptive-fuzzy control method applied to quadrotor helicopter stabilization.
     Proc. North American Fuzzy Information Processing Society Annual Meeting
    (Montreal) pp. 454-458, 2006.

Beirami, A. and C.J.B. Macnab, Direct neural-adaptive control of robotic manipulators using a forward dynamics approach.
     Proc. IEEE Canadian Conference on Electrical and Computer Engineering,
     (Ottawa) pp. 363-367, 2006.


Macnab, C.J.B., A new robust weight update for neural-network control.
     Proc. IASTED International Conference on Intelligent Systems and Control
     (Cambridge, Mass.) pp. 360-367, 2005.

Macnab, C.J.B., Local basis functions in adaptive control of elastic systems.
     Proc. IEEE International Conference on Mechatronics and Automation
     (Niagara Falls) pp. 19 - 25, 2005.

Macnab, C.J.B., Getting weights to behave themselves: Achieving stability and performance in neural-adaptive control when inputs oscillate.
    Proc. American Control Conference
    (Portland, Oregon) pp. 3192 - 3197, 2005.

Macnab, C.J.B., D'Eleuterio, G.M.T., and Meng, M., CMAC neurocontrol of elastic-joint robots using backstepping with tuning functions.
    Proc. IEEE International  Conference on Robotics and Automation
    (New Orleans), pp. 2679 - 2686, 2004.

Macnab, C.J.B., D'Eleuterio, G.M.T., Neuroadaptive control of elastic-joint robots using robust performance enhancement.
    Robotica
    vol. 19, no. 6,  pp. 619-629, Sept. 2001.

Macnab, C.J.B., D'Eleuterio, G.M.T., Discrete-time Lyapunov design for neuroadaptive control of elastic-joint robots.
    International Journal of Robotics Research
    vol. 19, no. 5,   pp. 511-525, May 2000.

Macnab, C.J.B., D'Eleuterio, G.M.T., Meng, M., Using backstepping for control of elastic-joint robots with smaller gear ratios.
    Proc. IEEE Canadian Conference on Electrical and Computer Engineering
    (Edmonton, Alberta), pp. 873 - 878, 1999.

Macnab, C.J.B., D'Eleuterio, G.M.T., Stable, on-line learning using CMACs for neuroadaptive tracking control of flexible joint manipulators.
    Proc. IEEE International Conference on Robotics and Automation
    (Leuven, Belgium) 1, pp. 511-517, 1998.