Symposium on

Image Processing, Image Analysis and

Real-Time Imaging (IPIARTI) 2013

Symposium on Acoustic, Speech and

Signal Processing (SASSP) 2013


DATE: Thursday 9 May 2013

VENUE: Universiti Tenaga Nasional, Putrajaya Campus, Selangor


Keynote #1:

The Status of Digital Watermarking,

Dr. Ton Kalker, VP, Security and DRM, DTS Inc., USA.

Keynote #2:

Technologies in Cardiac Imaging,

Prof. Dr. Ir. Eko Suprianto, Director, IJN-UTM Cardiovascular Engineering Centre, UTM

Keynote #3:

From theory to practice – Experiences with the DSP-Microcontroller for Mechatronic systems,

Dr. Farrukh Hafiz Nagi, Associate Professor, Universiti Tenaga Nasional, Putrajaya Campus














Keynote #1:

Photo Forensics: There is more to a picture than meets the eye

Prof. Nasir D Memon

Polytechnic Institute of New York, USA


Keynote #2:

Spectral Approach to Color and Lighting

Prof. Jussi Parkkinen

Monash University, Malaysia


Keynote #3:

Biometric Rich Gestures: A touching farewell to passwords?

Prof. Nasir D Memon

Polytechnic Institute of New York, USA


Keynote #4:

Assessing The Extent of Uniqueness of A Fingerprint Match

Assoc. Prof. Dr. Sarat C. Dass

Michigan State University, USA

Regular Presentations:

1) An application of the new Signal Processing Technique Hilbert Huang Transform (HHT) to Machine Tool Condition Monitoring - Joseph Emerson Raja (MMU)

2) GUI System for Enhancing Blood Vessels Segmentation in Digital Fundus Images - Ahmad Zikri Rozlan (UiTM)

3) Noise Removal for Weather Degraded Image - Mohd Helmy Abd Wahab (UTHM)












Keynote #1:

Image Analysis by Orthogonal Moments and Implementation of them by Digital Filters

Prof. P. Raveendran

Universiti Malaya (UM)

Keynote #2:

Semantic Technology for Image Understanding

Dr. Dickson Lukose



Keynote #3:

Advanced Image Correlation Filters

Prof. Salina Abdul Samad

Universiti Kebangsaan Malaysia (UKM)


Regular Presentations:

1) A robust texture feature extraction using the localized angular phase - Khairul Muzzammil (UTEM)

2) Tree Nutrients Prediction by Image Analysis - Lee Aik Leng (MMU)

3) An Improved Medical Image Compression Algorithm using PCA Neural Network - Yeo Weng Kwong (UTEM)

4) Hilbert Huang Transform, a new and promising technique for non-linear and non-stationary signal analysis - Emerson Raja Joseph (MMU)

5) A Contrast Enhancement Technique for Infrared Thermography - Lo Tzer Yuan (MMU)

6) Analyzing Graphic Design Hidden Rules and Popular Beliefs in Contemporary Packaging Design for Various Local Detergent Product - Hafizul Idham & Saiful Akram Bin Che Cob (UiTM)

7) Computation of Uncertainty of Physiographic Features Extracted from Multiscale Digital Elevation Models Using Fuzzy Classification - Dinesh Sathyamoorthy (STRIDE)

8) Classification Algorithm for Papaya Ripeness Determination Using Digital Colour Analysis - Low Cheng Seng (MMU)

9) Development of Control System of Continuous Sterilizer of Palm Oil Mill Using Image Processing Technique - Dr. Saad A. Abbas (UTM)













Biomedical Engineering – Medical Image Analysis in Clinical Practice

Prof. Dr. Ahmad Fadzil Hani

Universiti Teknologi Petronas

Face Recognition Under Variant Head Poses

Assoc. Prof. Dr. Syed Abdul Rahman Syed Abu Bakar

Universiti Teknologi Malaysia

Statistical Multivariate Technique for Some Image Analysis Problems

Assoc. Prof. Dr. Omar Mohd Rijal

Universiti Malaya

Development of Shape Extraction Algorithms for Trademark Image Search System Application

Assoc. Prof. Dr. Mohammad Faizal Ahmad Fauzi

Multimedia University

FPGA-based Architectures for 3-D Medical Image Compression

Dr. Afandi Ahmad

Universiti Tun Hussein Onn Malaysia (UTHM)

Performance Evaluation of PCA and Histogram of Oriented Gradient based Pedestrian Classification

Mohd Haris Lye Abdullah

Multimedia University

Exploring Nearest-Neighbor Distance for Histogram-based Fruit Ripeness Identification

Fatma Susilawati Mohamad

Universiti Teknologi Malaysia

A Novel Algorithm for Finding Critical Points of Online Jawi/Persian/Arabic Handwritten Character using in Feature Extraction

Majid Harouni

Universiti Teknologi Malaysia

Color Image Indexing And Retrieval Of Documents Captured From Consumer Handheld Devices

Danial Md Nor

Universiti Tun Hussein Onn Malaysia (UTHM)








Disease Detection Using Artificial Neural Network

Assoc. Prof. Dr. R.Logeswaran N.Rajasvaran

Multimedia University


Statistical Multivariate Technique for Some Image Analysis Problems

Assoc. Prof. Dr. Omar Mohd Rijal

Universiti Malaya



Blind System Identification (Deconvolution) For Thermocouple Sensors - A Research Area Involving Signal Processing For Sensor Applications

Dr. Seán McLoone (National University of Ireland Maynooth)

Date: 28 June 2010 (Monday)


Venue: Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Skudai, Johor.


Abstract: In conventional automotive vehicle applications, only the measurement of low-frequency temperature variations is usually required, and standard robust sensors such as thermocouples, resistance temperature detectors (RTDs) and thermistors suffice. However, recent advances in engine design have resulted in the need for robust temperature sensors with fast response characteristics. An important example where such sensors are now required for control and diagnostics is the on-board diagnosis (OBD) of catalyst malfunction.


In many sensors, the smaller the sensing element, the faster the response, but at the expense of durability and ease of manufacture. Most sensors therefore involve a compromise between performance and the conflicting requirements of ruggedness and low cost. Experimental work on exhaust systems by the Internal Combustion Engines Research Group at QUB showed that during transient operation, conventional thermocouple sensors gave errors of up to 200°C. A reduction in wire diameter dramatically improved the accuracy, but in the harsh environment of an exhaust system a lower limit to the diameter is quickly reached, below which sensor failure occurs.


We are researching a completely novel discrete-time linear identification framework, which allows in-situ dual-sensor characterisation. This eliminates the major shortcoming of other approaches, which require that the dual sensor characteristics be known a priori. Extensive simulation studies have shown that the new methods reduce sensitivity to noise on the inputs. Much of our recent theoretical work concerns the evaluation of alternative identification schemes. Regular Least Squares (LS) has proved unsatisfactory as it produces biased parameter estimates, while more powerful techniques such as Generalised Total Least Squares, which accommodate coloured input and output noise, have been shown to provide bias-free estimates.
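The bias described above is easy to reproduce. Below is a toy errors-in-variables sketch (illustrative only, not the dual-thermocouple framework itself): ordinary least squares underestimates the gain of a linear system when its input measurement is noisy, while a plain total-least-squares fit — the equal-white-noise special case of the generalised scheme mentioned above — recovers it.

```python
import numpy as np

# Toy errors-in-variables demonstration (assumed signals, not real sensor data).
rng = np.random.default_rng(0)

a_true = 2.0                                     # true gain of the linear system
x = rng.normal(0.0, 1.0, 20000)                  # true (unobserved) input
y = a_true * x + rng.normal(0.0, 0.5, x.size)    # noisy output
x_meas = x + rng.normal(0.0, 0.5, x.size)        # noisy input measurement

# Ordinary least squares: biased toward zero when the regressor itself is noisy.
a_ls = (x_meas @ y) / (x_meas @ x_meas)

# Total least squares: the smallest right singular vector of [x | y] gives a
# consistent estimate when input and output noise variances are comparable.
_, _, vt = np.linalg.svd(np.column_stack([x_meas, y]), full_matrices=False)
v = vt[-1]
a_tls = -v[0] / v[1]

print(f"LS estimate:  {a_ls:.3f}")   # attenuated, roughly a_true/(1 + 0.25)
print(f"TLS estimate: {a_tls:.3f}")  # close to a_true
```

The LS attenuation factor here is the ratio of true-input variance to measured-input variance, which is exactly the bias that motivates the move to (generalised) total least squares.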



Advances in Biomedical Engineering


Date: 22nd April 2010 (Thursday)


Venue: AIMST University, Kedah


Topics (Speakers):


ANN Detection for Liver Disease

Assoc. Prof. Dr. R.Logeswaran N.Rajasvaran

Multimedia University


Technology & Application of Micro CT scan

Ms. Caterine Ong

Hi-Tech Instruments Sdn. Bhd.


Time-Frequency Signal Processing in Bio-Medical Applications

Mr. Mahendra V. Chilukuri

Multimedia University


Recent Advances in Biomaterials

Professor Hj. Zainal Arifin b. Ahmad

Universiti Sains Malaysia


Abstract: Biomedical engineering involves the application of the principles and scientific techniques of engineering to the enhancement of medical science as applied to humans or animals. It involves an interdisciplinary approach which combines the engineering sciences, mechanics, design, modelling and problem-solving skills employed in engineering with the medical and biological sciences so as to improve the health, lifestyle and quality of life of individuals. Biomedical engineering is a relatively new field, and involves a whole spectrum of disciplines covering medical imaging, image processing, artificial intelligence, neural networks, physiological signal processing, biomechanics, biomaterials, bioinformatics and bioengineering, systems analysis, 3-D modelling, etc. Combining these disciplines systematically and synergistically yields total benefits that are much greater than the sum of the individual components. Prime examples of the successful application of biomedical engineering include the development and manufacture of biocompatible prostheses, medical devices, diagnostic devices and imaging equipment, and pharmaceutical drugs.


Deploying Textual Mathematics on Real-Time Embedded Hardware with LabVIEW MathScriptRT Module


Date: 24th FEBRUARY 2010


Abstract: Using textual mathematical software for signal processing and analysis has become increasingly important in research and development for many engineers and scientists. However, the challenge most engineers and scientists face is implementing their textual mathematical algorithms in the real world on embedded hardware. In this technical talk, the deployment of textual mathematics on real-time embedded hardware using the new LabVIEW MathScriptRT Module will be explored and real-world engineering applications will be discussed.



Music Tracking in Audio Streams


Date: 11th DECEMBER 2009


Abstract: The problem of music tracking in audio streams has recently attracted a lot of attention, mainly in the context of audio content characterization applications. Intelligent browsing of audio streams, automatic audio content annotation/indexing, querying audio streams by audio example and copyright management are some of the tasks that can benefit from efficient music tracking algorithms.

In this talk we will present some recent advances in music tracking systems in (a) the context of music/speech discrimination in radio recordings and (b) the context of music detection in the audio soundtracks of films and video recordings. The latter is a harder task since, besides speech, a diversity of sound sources is involved.


Adaptive Learning in a World of Projections


Date: 11th DECEMBER 2009


Abstract: The task of parameter/function estimation has been at the center of scientific attention for a long time, and it comes under different names such as filtering, prediction, beamforming, curve fitting, classification and regression.

In this talk, the estimation task is treated in the context of set theoretic estimation arguments. Instead of a single optimal point, we are searching for a set of solutions that are in agreement with the available information, which is provided to us in the form of a set of training points and a set of constraints. Each point in the training data set, as well as each one of the constraints, is associated with a convex set, constructed according to a (convex) loss function (differentiable or not).

The goal of this talk is to present a general tool for parameter/function estimation, under a set of convex constraints, both for classification as well as regression tasks, in a time adaptive setting in (infinite dimensional) Reproducing Kernel Hilbert spaces (RKHS).

The algorithmic scheme consists of a sequence of projections, of linear complexity with respect to the number of unknown parameters. Our theory proves that such a scheme converges to the intersection of all (with the possible exception of a finite number of) the convex sets, where the required solution lies. The performance of the methodology is demonstrated in the context of nonlinear classification and robust beamforming in communication systems.

The work has been carried out in cooperation with Kostas Slavakis and Isao Yamada.
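The projection machinery above can be illustrated with a toy finite-dimensional sketch (assumed halfspace constraints, far simpler than the RKHS setting of the talk): cyclically projecting onto a family of convex sets drives the iterate into their intersection.

```python
import numpy as np

# Cyclic projections onto convex sets (toy example with halfspaces a_i.x <= b_i).
# Constraints chosen for illustration: x <= 1, y <= 1, x + y >= 0.5.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -0.5])

def project_halfspace(x, a, bi):
    """Euclidean projection of x onto the halfspace {z : a.z <= bi}."""
    viol = a @ x - bi
    return x if viol <= 0 else x - viol * a / (a @ a)

x = np.array([5.0, 5.0])            # start far outside the feasible region
for _ in range(100):                # one pass over all sets per outer iteration
    for a, bi in zip(A, b):
        x = project_halfspace(x, a, bi)

print("feasible point:", x)
print("constraint slacks (<= 0 means satisfied):", A @ x - b)
```

Each projection is cheap (linear in the number of unknowns), which is the property the talk exploits to make the scheme time-adaptive.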


Smart Space with Signal Processing for Human Behavior Monitoring

Assoc. Prof. Wee Ser, Nanyang Technological University, Singapore

Date: 18th November 2009


Abstract: Imagine a physical space filled with “experts” whose role is to provide personalized services to a targeted group of human subjects in that space. With the “experts”, such a space will be smart enough to take care of the needs of the targeted human subjects in that space. The question is: “can we do it without involving real human experts (e.g. nurses)?”, as the latter are valuable and may be put to better use. This talk will present a vision for such a futuristic smart space. Basically, the idea is to equip the space with sensors and a signal processing system that perform the functions of “eyes”, “ears”, other “sensors”, and a “brain”. Findings of some ongoing projects that attempt to enable such a smart space will be used to illustrate some of the concepts. The example used for the space in this talk is a home, and the personalized service enabled is the monitoring of the daily living activities of the elderly for healthcare purposes. The talk will discuss the possible system requirements as well as research challenges. Video clips will also be shown of some preliminary results obtained on the detection and recognition of the behaviors of human subjects in a closed-room setting.



Recent Advances in Neural and Cognitive Engineering

Prof. Daniel J. Strauss, Saarland University Hospital, Germany

Date: 4th November 2008

Venue: Universiti Teknologi Malaysia, Kuala Lumpur

Abstract: In this talk we will present recent modeling and analysis techniques used in neural and cognitive engineering as emerging fields of biomedical engineering. The neurodynamics of brain function covers spatio-temporal scales from the level of synaptic activity to the level of surface electroencephalographic correlates. A variety of multiscale computational methods have been developed in different scientific disciplines with a large impact on the modeling and analysis of brain dynamics, e.g., to disclose multiscale phenomena underlying electroencephalographic generation or to improve noninvasive medical neurodiagnostics and therapy using electroencephalographic methods. This talk will focus on recent developments in neurophysiological and neuropsychological multiscale electroencephalographic modeling and analysis using neural fields, corticothalamic feedback dynamics, and multiscale waveform decomposition techniques. In particular, we present the application of these concepts to current problems related to auditory processing and perception.


The Particle Filtering Methodology in Signal Processing

Prof. Petar M. Djuric, Stony Brook University, USA

Date: 1st August 2008

Venue: Multimedia University, Cyberjaya, Selangor.

Abstract: Particle filtering is a Monte Carlo-based methodology for sequential signal processing. It is designed for the estimation of hidden processes that are dynamic and can exhibit severe nonlinearities. It can also be applied with equal ease to problems that involve any type of probability distribution. It is therefore not surprising that particle filtering has gained immense popularity. In this talk, first, the basics of particle filtering will be provided with a description of its essential steps. Then some important topics of the theory will be addressed, including Rao-Blackwellization, smoothing, and estimation of constant parameters. Finally, a presentation of the most recent advances in the theory will be given. The talk will contain signal processing examples which will aid in gaining valuable insights into the methodology.
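The essential steps — propagate, weight, resample — fit in a minimal bootstrap particle filter. The state-space model below is an assumed toy example (a scalar linear-Gaussian process, chosen so the result is easy to check), not one from the talk:

```python
import numpy as np

# Bootstrap particle filter for the assumed model:
#   x[k] = 0.9*x[k-1] + w,  w ~ N(0, 0.5^2)   (hidden dynamics)
#   y[k] = x[k] + v,        v ~ N(0, 1)        (noisy observation)
rng = np.random.default_rng(1)

T, N = 200, 1000                     # time steps, number of particles
x = np.zeros(T)
for k in range(1, T):                # simulate the hidden process
    x[k] = 0.9 * x[k-1] + rng.normal(0, 0.5)
y = x + rng.normal(0, 1.0, T)        # observations

particles = rng.normal(0, 1.0, N)    # samples from an assumed prior
estimates = np.zeros(T)
for k in range(T):
    # 1) propagate each particle through the dynamics
    particles = 0.9 * particles + rng.normal(0, 0.5, N)
    # 2) weight by the observation likelihood
    w = np.exp(-0.5 * (y[k] - particles) ** 2)
    w /= w.sum()
    # 3) estimate, then resample to fight weight degeneracy
    estimates[k] = w @ particles
    particles = rng.choice(particles, size=N, p=w)

rmse_pf = np.sqrt(np.mean((estimates - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
print(f"raw observation RMSE: {rmse_obs:.2f}, particle filter RMSE: {rmse_pf:.2f}")
```

The filter's RMSE should come in well below that of the raw observations; Rao-Blackwellization and smoothing, mentioned in the abstract, refine exactly this basic loop.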




Date: 31st July 2008


Abstract: Statistical signal processing is a field of signal processing that applies probability theory and statistics to extract information from observed data in as accurate a way as possible. In this talk, the basics of the field will be reviewed and many examples of its use in practice will be provided.


Challenges and Trends in Biometrics

Prof. Dr. Salina Samad, Universiti Kebangsaan Malaysia

Date: 30th June 2008

Venue: Multimedia University, Cyberjaya, Selangor.

Abstract: Biometrics in the security industry refers to a measurable physical characteristic or personal behavioral trait used to recognize an identity or verify a claimed identity. Biometrics is an alternative to more traditional methods of identifying a person, since it is based on something that a person is, not on what he owns or has to remember, such as keys, passwords or PINs. Signal processing and pattern recognition techniques, along with sensor design, make up the core technologies for biometrics. An overview of biometrics is covered, from a historical perspective to current usage. The trends associated with biometric applications highlight the challenges that researchers have to overcome in order for biometrics to be a viable technology of the future. Accuracy, reliability and security issues that arise from using biometrics pose challenges that can be addressed using new algorithms, which include vitality detection, multi-biometrics and encryption. As the technology matures and standards are put in place, applications using biometrics may become ubiquitous worldwide.


Acoustic Signal Processing for Next-Generation Multichannel Human/Machine Interfaces

Prof. Dr. Ing Walter Kellermann,

University Erlangen-Nuremberg, Germany.

Date: 3rd January 2008

Venue: Universiti Teknologi Malaysia, KL

Abstract: The acoustic interface for future multimedia and communication terminals should be hands-free and as natural as possible, which implies that the user should be free to move and should not need to wear any devices. For digital signal processing this poses major challenges for both signal acquisition and reproduction, which reach far beyond the current state of the technology. For ideal acquisition of an acoustic source signal in noisy and reverberant environments, we need to compensate acoustic echoes, suppress noise and interference, and we would like to dereverberate the desired source signal. On the other hand, for perfect reproduction of real or virtual acoustic scenes we need to create the desired sound signals at the listener's ears, while at the same time removing undesired reverberation and suppressing local noise. In this talk we will briefly analyze the fundamental problems for signal processing in the framework of MIMO (multiple-input multiple-output) systems and discuss current solutions. In accordance with ongoing research, we emphasize nonlinear and multichannel acoustic echo cancellation, as well as microphone array signal processing for beamforming, interference suppression, blind source separation, and source localization.
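The echo cancellation problem mentioned above can be illustrated with a toy normalised LMS (NLMS) adaptive filter run against a simulated echo path; the path, signals and step size below are all assumed for illustration, not from the talk:

```python
import numpy as np

# Toy single-channel NLMS echo canceller: adaptively identify the
# loudspeaker-to-microphone echo path and subtract the estimated echo.
rng = np.random.default_rng(4)

L = 64
h = rng.normal(0, 1, L) * np.exp(-np.arange(L) / 10)  # assumed decaying echo path
far = rng.normal(0, 1, 5000)                          # far-end (loudspeaker) signal
mic = np.convolve(far, h)[:far.size] + rng.normal(0, 0.01, far.size)

w = np.zeros(L)                                       # adaptive filter taps
err = np.zeros(far.size)
for n in range(L, far.size):
    xbuf = far[n - L + 1:n + 1][::-1]                 # most recent L far-end samples
    e = mic[n] - w @ xbuf                             # residual after echo removal
    w += 0.5 * e * xbuf / (xbuf @ xbuf + 1e-8)        # NLMS tap update
    err[n] = e

# Echo reduction over the last 1000 samples, in dB.
erle = 10 * np.log10(np.mean(mic[-1000:] ** 2) / np.mean(err[-1000:] ** 2))
print(f"echo reduction: {erle:.1f} dB")
```

Real terminals need the nonlinear and multichannel extensions the abstract names, plus double-talk handling, but the adaptation loop is the same.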


Tackling the Acoustic Front-end for Distant-Talking Automatic Speech Recognition

Prof. Dr. Ing Walter Kellermann,

University Erlangen-Nuremberg, Germany.

Date: 3rd January 2008

Venue: Universiti Teknologi Malaysia, KL

Abstract: With the ever-growing interest in 'natural' hands-free acoustic human/machine interfaces, the need for corresponding distant-talking automatic speech recognition (ASR) systems increases. Considering interactive TV as a challenging exemplary application scenario, we investigate the structural problems presented by noisy and reverberant multi-source environments with unpredictable interference and acoustic echoes of loudspeaker signals, and discuss current acoustic signal processing techniques to enhance the input to the actual ASR system. Special attention is paid to reverberation, which affects speech recognizers much more than human listeners, and a recently published method incorporating a reverberation model at the feature level of ASR is discussed.







Abstract: Some fundamentals, current techniques, and perspectives for the future will be presented for the following topics:

  • Human speech production and hearing
  • Representation of speech and audio signals
  • Source coding techniques
  • Speech recognition strategies
  • Speech synthesis methods
  • Signal enhancement techniques



Towards a Definition of a Vascular-Health Index Using Photoplethysmography

Assoc. Prof. Dr. Edmond Zahedi, Universiti Malaya.

Date: 31st May 2007

Venue: Universiti Teknologi Malaysia, KL

Abstract: Non-invasive, direct vascular characterization of patients - where the ultimate objective is "to provide a totally non-invasive instrument to assist the physician in diagnosis with reliable estimates of the mechanical properties of the vascular bed" - seems to have always remained an elusive target. Although attempts have been made to find non-invasive, clinically meaningful parameters since the seventies, it is only in the previous decade that digital signal processing tools have become so readily available. Owing to this progress, complex software processing functions are put at the disposal of researchers without them necessarily being professional programmers. On the other hand, advances in non-invasive instrumentation have paved the way to indirect yet accurate measurements of essential parameters such as blood flow and pressure. The Windkessel (Wk) model is widely used for modeling the vasculature: it elegantly accounts for the relevant parameters in vascular characterization, namely arterial compliance, resistance and blood inertance.
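The simplest two-element Windkessel (compliance C and peripheral resistance R, without the inertance term the talk also considers) can be simulated in a few lines. All numerical values below are illustrative assumptions, not from the talk:

```python
import numpy as np

# Two-element Windkessel: C * dP/dt = Q(t) - P/R, integrated by forward Euler.
R, C = 1.0, 1.5          # mmHg*s/mL and mL/mmHg (assumed illustrative values)
dt, T = 1e-3, 5.0
t = np.arange(0, T, dt)

# Assumed periodic ejection: a half-sine flow burst in the first 0.3 s of each beat.
phase = t % 1.0
Q = np.where(phase < 0.3, 400 * np.sin(np.pi * phase / 0.3), 0.0)

P = np.zeros(t.size)
P[0] = 80.0              # assumed initial pressure, mmHg
for k in range(t.size - 1):
    P[k + 1] = P[k] + dt * (Q[k] - P[k] / R) / C

# In diastole (Q = 0) the model predicts exponential decay with tau = R*C.
print(f"pressure range: {P.min():.0f}-{P.max():.0f} mmHg")
```

During each diastolic interval the simulated pressure decays with the time constant tau = R*C, which is exactly the kind of mechanical parameter a vascular-health index would try to estimate from the photoplethysmogram.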


Statistical Methods in PTB Detection Using Digital Chest Radiograph

Assoc. Prof. Dr. Omar Mohd Rijal (Universiti Malaya)

Date: 22nd May 2007

Venue: Multimedia University

Abstract: Two million deaths are due to tuberculosis (TB) every year, and about one million new cases of lung cancer (LC) are detected annually. Despite rapid advances in medical imaging technology, the conventional chest radiograph is still an important ingredient in the diagnosis of lung ailments. Further, it is well known that mainly experienced medical officers are capable of accurately detecting Mycobacterium tuberculosis (MTB), and similarly early-stage LC, from chest radiographs. The immediate problem with the use of X-rays is that they require considerable visual interpretation. Studies have shown that the accuracy of the interpretation is subject to varying degrees of observer error. This error includes the observer's inability to detect abnormal opacities and interpret them correctly, inter-observer variation (due to varying reading ability between observers) and intra-observer variation. This talk is about applying statistical ideas as an alternative method for the detection of PTB, useful for less experienced medical staff. In particular, a graphical method involving wavelet coefficients of the feature vector (WFV) has been proposed for the detection and discrimination of MTB and LC. Popular discrimination procedures use the Linear Discriminant Function (LDF) and the Quadratic Discriminant Function (QDF). These discrimination procedures do not reconsider the membership status of misclassified cases. This paper proposes a novel sequential discrimination procedure involving the multiresolution analysis (MRA) of the WFV. The results indicate that the proposed procedure, after reconsidering misclassified cases, can significantly increase the rates of correct classification.
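As a sketch of the baseline LDF step mentioned above (on synthetic two-dimensional features, not the talk's wavelet feature vectors, and without the sequential reclassification of misclassified cases):

```python
import numpy as np

# Minimal linear discriminant function (LDF) on assumed synthetic features.
rng = np.random.default_rng(5)

# Two classes with a shared covariance, as the LDF assumes.
X0 = rng.normal([0, 0], 1.0, (200, 2))   # e.g. "normal" feature vectors
X1 = rng.normal([2, 2], 1.0, (200, 2))   # e.g. "diseased" feature vectors

m0, m1 = X0.mean(0), X1.mean(0)
Xc = np.vstack([X0 - m0, X1 - m1])
S = Xc.T @ Xc / (Xc.shape[0] - 2)        # pooled covariance estimate
w = np.linalg.solve(S, m1 - m0)          # LDF direction: S^{-1} (m1 - m0)
c = w @ (m0 + m1) / 2                    # midpoint threshold (equal priors)

err0 = (X0 @ w > c).mean()               # class-0 samples classified as class 1
acc1 = (X1 @ w > c).mean()               # class-1 samples classified correctly
print(f"class-0 error: {err0:.2f}, class-1 accuracy: {acc1:.2f}")
```

The talk's contribution sits on top of this: after an initial LDF/QDF pass, misclassified cases are revisited using the multiresolution structure of the wavelet features.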


Monitoring PTB Disease by Comparing Digital Chest Radiograph

Norliza Mohd Noor, Universiti Teknologi Malaysia.

Date: 22nd May 2007

Venue: Multimedia University

Abstract: Two million deaths are due to MTB annually, and global TB incidence is still growing at 1% a year. To eliminate the problem of TB, the WHO makes several suggestions, in particular "Giving access to quality TB diagnosis and treatment for all". An important ingredient in the diagnosis of TB is the comparison of a series of chest X-rays. If treatment is successful, the presence of "snowflakes" will decrease or diminish with each subsequent (new) image; in other words, it is important to have a reliable method of comparing X-ray images. Several problems have to be faced before any comparison may be made. Firstly, the diseased areas or snowflakes do not subscribe to any fixed dimensions (shape, size or orientation). As such, two images may only be compared by their direct difference, since no obvious feature may be considered. In particular, if treatment is successful, the incidence of snowflakes shows a reduction in the second image, and any measure of this reduction may be used to indicate success of treatment. Digital images of chest radiographs taken at different time points may thus be compared to investigate the effect of treatment on Mycobacterium tuberculosis (MTB) patients. One method of comparison is visually locating "snowflakes", which should decrease in area or size with each subsequent image. This paper proposes a more objective method: the comparison of image histograms, whereby a leftward shift of the histogram indicates a positive effect of treatment. Comparing two histograms is equivalent to comparing either the corresponding box-plots or the corresponding sets of percentiles. However, before the comparison is made, the images need to be registered and resized. The results of this study show that the proportion of percentiles (from the histogram) can be used as an indicator of treatment effect (the patient's progress). Further, the correlations are shown to be the best similarity measure to indicate the quality of image registration. Finally, this study also shows that a combination of registration and resizing can improve the pair-wise comparison.
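The percentile comparison can be sketched on synthetic data (the pixel distributions and the leftward shift below are simulated assumptions, not the study's radiographs):

```python
import numpy as np

# Compare percentiles of two image histograms: a leftward shift of the
# follow-up histogram is read as a positive treatment effect.
rng = np.random.default_rng(2)

# Assumed pixel intensities of a "before" and an "after" chest image region;
# the follow-up is darker overall, mimicking reduced bright opacities.
before = np.clip(rng.normal(150, 30, 100_000), 0, 255)
after = np.clip(rng.normal(130, 30, 100_000), 0, 255)

qs = np.arange(5, 100, 5)                  # 5th..95th percentiles
p_before = np.percentile(before, qs)
p_after = np.percentile(after, qs)

# Proportion of percentiles that moved left: the study's treatment indicator.
shifted_left = np.mean(p_after < p_before)
print(f"{shifted_left:.0%} of percentiles shifted left")
```

In practice the two images would first be registered and resized, as the abstract stresses, before the percentile sets are compared.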


Image Retrieval: Content- and Semantics-based Approach

Dr. Mohammad Faizal Ahmad Fauzi, Multimedia University.

Date: 3rd April 2007

Venue: Universiti Teknologi Malaysia, KL

Abstract: In conventional image retrieval systems, images are indexed by text, known as the metadata of the image, such as the file name, the date it was produced, the type of the image and a manually annotated description of the content of the image itself. This kind of system, known as text-based image retrieval (TBIR), suffers from some weaknesses, namely the amount of labour required to manually annotate every single image, as well as differences in human perception when describing the images, which might lead to inaccuracies during the retrieval process later. Hence there is a need for a better system, and content-based image retrieval (CBIR), where images are described automatically based on the characteristics of their visual content, is a popular choice. In a CBIR system, the image description is done automatically and consistently, which in theory solves the two drawbacks of a TBIR system. However, CBIR does have its disadvantages, one of which is its inability to provide the semantics or meaning of the images, popularly known as the semantic gap. This brings semantics-based image retrieval (SBIR) into the picture. In SBIR, the main goal is to obtain the semantics of the images, by means of automatic image annotation, before they are used as keywords for retrieval purposes. SBIR and TBIR hence use the same approach to image retrieval, with the difference that TBIR needs human assistance while SBIR is fully computer-generated, which, like CBIR, should solve the two drawbacks of the TBIR system.
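A minimal CBIR sketch (an assumed toy example, not the speaker's system: grey-level histograms ranked by histogram intersection) shows how images can be indexed by visual content alone, with no manual annotation:

```python
import numpy as np

# Toy content-based retrieval: index images by normalised grey-level
# histograms and rank database entries by histogram intersection.
rng = np.random.default_rng(3)

def histogram(img, bins=32):
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()                      # normalise so image size is irrelevant

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()         # 1.0 means identical distributions

# Assumed tiny "database": dark, mid-tone and bright synthetic images.
db = {name: rng.normal(mu, 20, (64, 64)).clip(0, 255)
      for name, mu in [("dark", 60), ("mid", 128), ("bright", 200)]}
index = {name: histogram(img) for name, img in db.items()}

query = rng.normal(125, 20, (64, 64)).clip(0, 255)   # resembles the "mid" image
qh = histogram(query)
ranked = sorted(index, key=lambda n: intersection(qh, index[n]), reverse=True)
print("best match:", ranked[0])
```

A descriptor like this captures appearance but not meaning, which is precisely the semantic gap that motivates SBIR.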

Last Updated (Monday, 13 May 2013 07:51)