ISSN: 1998-0140


Year 2007

All papers of the journal were peer reviewed by two independent reviewers. Acceptance was granted when both reviewers' recommendations were positive.

Main Page

    Paper Title, Authors, Abstract (Issue 1, Volume 1, 2007)


Analysis of Dynamic Characteristics of a Minimal-Time Circuit Optimization Process
A. M. Zemliak

Abstract: The design process for analog networks is formulated as a dynamically controllable system. A special control vector is defined to redistribute the computational effort between network analysis and parametric optimization. This redistribution permits the minimization of computer time. The problem of minimal-time network design can then be formulated as a classical optimal control problem for the minimization of some functional. This approach generalizes the design problem and generates an infinite number of different design strategies within the same optimization procedure. Under this methodology, system design with minimal computer time is presented as the transition process of a dynamic system with minimal transition time. The concept of the Lyapunov function of a controllable dynamic system is used to analyze the principal characteristics of the design process. Different forms of the Lyapunov function are proposed to analyze the behavior of a design process. A special function combining the Lyapunov function and its time derivative is proposed to compare different design strategies and to predict the strategy with the minimal computer design time.
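
As an illustration of the idea, the decay of a Lyapunov candidate can be tracked numerically along a design trajectory. The sketch below is hypothetical and not the paper's procedure: it uses a toy quadratic objective, gradient descent as the "design strategy", and V(x) = ||x - x*||^2 as the Lyapunov function; the relative rate V'/V plays the role of the combined indicator mentioned in the abstract.

```python
def lyapunov_track(grad, x_star, x0, step=0.05, iters=200):
    """Evaluate the Lyapunov candidate V(x) = ||x - x*||^2 along a
    gradient-descent 'design trajectory' toward the optimum x*."""
    x = list(x0)
    vs = []
    for _ in range(iters):
        vs.append(sum((a - b) ** 2 for a, b in zip(x, x_star)))
        g = grad(x)
        x = [a - step * gi for a, gi in zip(x, g)]
    return vs

# toy design objective f(x) = (x1 - 1)^2 + 4*(x2 + 2)^2, optimum at (1, -2)
grad = lambda x: [2 * (x[0] - 1), 8 * (x[1] + 2)]
vs = lyapunov_track(grad, x_star=[1.0, -2.0], x0=[5.0, 3.0])
# V'/V, a relative 'transition speed' indicator comparable across strategies
rel_rate = [(vs[k + 1] - vs[k]) / vs[k] for k in range(len(vs) - 1)]
print(vs[-1] < vs[0], all(r < 0 for r in rel_rate))
```

A monotone negative V'/V along the whole trajectory is the stability signature the abstract alludes to; comparing its magnitude across strategies suggests which one converges fastest.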


Limit Temperatures, Spinodal Decomposition and Isospin in Heavy Ion Collisions
Armando Barranon, Jorge Alberto Lopez Gallardo

Abstract: Spinodal decomposition signatures have been used to obtain limit temperatures for several heavy ion collisions. As isospin changes, these transient temperatures remain approximately constant, fluctuating by about 1 MeV around roughly 10 MeV. Also, using higher-order charge correlations, a primitive breakup into equal-sized fragments with a privileged fragment size of 6 and an excitation of 4.75 MeV was found. These transient temperatures are in the range of theoretical and experimental studies reported elsewhere and confirm the role of spinodal decomposition in the critical behavior of nuclear matter.


Latin language and new educational technologies
P. Camastra, P. Fedeli, M.R. Grattagliano

Abstract: This paper presents a project on the revival of Latin language study in Europe. The complete methodological and didactical renewal of a basic Latin course is addressed through new e-learning technologies and methods promoted by the Interfaculty Centre "Rete Puglia" at the University of Bari. The results obtained from the experimentation are presented.


Computationally Efficient Analytical Crosstalk Noise Model in RC Interconnects
P. Chandrasekhar and Rameshwar Rao

Abstract: This paper presents an accurate, fast and simple closed-form solution to estimate crosstalk noise between two adjacent wires, using an RC interconnect model in two situations: a simple resistance as driver and a short-channel CMOS inverter as driver. The salient features of our proposed models include minimal computational overhead and elimination of the adjustment step needed to predict the peak amplitude and pulse width of the noise waveform. Numerical calculations are compared with SPICE simulation and other metrics by plotting the noise voltage versus time. Based on our proposed models, we derive analytical delay models due to RC interconnect in each case. Finally, we formulate the energy dissipation of the RC-coupled interconnects in both cases using our proposed metrics. Experimental results indicate that our models agree closely with simulation.


A Deflected Grid-based Algorithm for Clustering Analysis
Nancy P. Lin, Chung-I Chang, Nien-Yi Jan, Hung-Jen Chen, Wei-Hua Hao

Abstract: The grid-based clustering algorithm, which partitions the data space into a finite number of cells to form a grid structure and then performs all clustering operations on this grid structure, is efficient, but its effectiveness is seriously influenced by the size of the cells. To cluster efficiently and, at the same time, reduce the influence of cell size, a new grid-based clustering algorithm, called DGD, is proposed in this paper. The main idea of the DGD algorithm is to deflect the original grid structure in each dimension of the data space after the clusters generated from the original structure have been obtained. The deflected grid structure can be considered a dynamic adjustment of the size of the original cells, and thus the clusters generated from the deflected grid structure can be used to revise the originally obtained clusters. The experimental results verify that the effectiveness of the DGD algorithm is indeed less influenced by cell size than that of other grid-based algorithms.
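
A minimal sketch of the grid-based idea, with an optional half-cell deflection of the grid, might look as follows. This is an illustrative toy, not the authors' DGD implementation; the cell size, density threshold, and offset are assumed parameters.

```python
import random

def grid_clusters(points, cell, threshold, offset=0.0):
    """Assign 2-D points to square cells (optionally shifted by `offset`)
    and merge 8-adjacent dense cells into clusters."""
    cells = {}
    for x, y in points:
        key = (int((x + offset) // cell), int((y + offset) // cell))
        cells.setdefault(key, []).append((x, y))
    dense = {k for k, pts in cells.items() if len(pts) >= threshold}
    clusters, seen = [], set()
    for start in dense:                  # flood-fill groups of dense cells
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            i, j = c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            group.add(c)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (i + di, j + dj) in dense:
                        stack.append((i + di, j + dj))
        clusters.append(group)
    return clusters

random.seed(1)
blob1 = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(100)]
blob2 = [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(100)]
clusters = grid_clusters(blob1 + blob2, cell=1.0, threshold=5)
deflected = grid_clusters(blob1 + blob2, cell=1.0, threshold=5, offset=0.5)
print(len(clusters), len(deflected))   # two well-separated blobs -> 2 2
```

Comparing the clusters from the original grid with those from the deflected (half-cell shifted) grid is the kind of cross-check the abstract describes for revising cluster boundaries.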


    Paper Title, Authors, Abstract (Issue 2, Volume 1, 2007)


About an adapted image compression algorithm for surveillance flying systems
Ciprian Racuciu, Nicolae Jula, Florin-Marius Pop

Abstract: Flying surveillance systems have become a priority in every modern army, as proven by investments that grow every year. A special category is the small systems known as UAVs (Unmanned Air Vehicles), which embed only the latest technology because of the dimensional limits imposed. The goal of this paper is to present an image compression module and its algorithm, based on the Discrete Cosine Transform, that can be used on a UAV for real-time transmission of the captured images to the ground. The paper focuses on low-complexity image compression techniques that are combined into a full image compression algorithm using few resources but offering a high compression ratio, flexibility for developing further options or characteristics, and medium quality. All the parameters can be changed for different requirements.
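
For illustration, a naive 2-D DCT with coarse quantization shows why DCT-based schemes compress smooth image tiles well. This is a toy sketch, not the paper's module; the 8x8 block size and the quantization step of 16 are assumptions.

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of a square block (O(N^4), for clarity)."""
    n = len(block)
    def c(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

# a smooth 8x8 tile: after the DCT and coarse quantization, almost all
# coefficients vanish, which is what the entropy-coding stage then exploits
tile = [[x + y for y in range(8)] for x in range(8)]
coeffs = dct2(tile)
quantized = [[round(coeffs[u][v] / 16) for v in range(8)] for u in range(8)]
zeros = sum(q == 0 for row in quantized for q in row)
print(zeros, quantized[0][0])   # most of the 64 coefficients quantize to zero
```

In a real codec the naive transform would be replaced by a fast factorized DCT, but the energy-compaction effect demonstrated here is the same.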


Application of Hotelling's and F statistics for the determination of defects in dental enamel
Cortez Jose Italo, Gonzalez Flores Marcos, Perea Gonzalez Gloria Patricia, Vega Galina Victor Javier, Cortez Liliana, Cortez Ernest Italovich

Abstract: The present work describes an experiment carried out to verify and corroborate the physical changes in dental enamel. The data were obtained in voltage terms in four stages: without treatment, dental paste, acid etchant and adhesive. The covariance matrices of the samples were then compared with each other to obtain an estimation for every pair of matrices. The inverse estimated matrix was obtained, and finally Hotelling's statistic was calculated for the multivariate case. We show that the selected treatments differ in their means, which allows us to conclude that there are physical changes in the dental enamel.
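
A two-sample Hotelling's T^2 statistic for bivariate measurements can be sketched as follows. This is an illustrative stand-in, not the authors' exact procedure; the sample sizes and Gaussian toy data are assumptions.

```python
import random

def hotelling_t2(a, b):
    """Two-sample Hotelling's T^2 for bivariate samples a, b given as
    lists of (x, y) pairs, using the pooled 2x2 covariance matrix."""
    def mean(s):
        return [sum(p[0] for p in s) / len(s), sum(p[1] for p in s) / len(s)]
    def scatter(s, m):
        c = [[0.0, 0.0], [0.0, 0.0]]
        for p in s:
            d = (p[0] - m[0], p[1] - m[1])
            for i in (0, 1):
                for j in (0, 1):
                    c[i][j] += d[i] * d[j]
        return c
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    pooled = [[(sa[i][j] + sb[i][j]) / (na + nb - 2) for j in (0, 1)]
              for i in (0, 1)]
    det = pooled[0][0] * pooled[1][1] - pooled[0][1] * pooled[1][0]
    inv = [[pooled[1][1] / det, -pooled[0][1] / det],
           [-pooled[1][0] / det, pooled[0][0] / det]]
    d = (ma[0] - mb[0], ma[1] - mb[1])
    quad = sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))
    return na * nb / (na + nb) * quad

random.seed(0)
before = [(random.gauss(1.0, 0.1), random.gauss(2.0, 0.1)) for _ in range(30)]
after = [(random.gauss(1.5, 0.1), random.gauss(2.4, 0.1)) for _ in range(30)]
t2 = hotelling_t2(before, after)
print(t2 > 13.0)   # far above typical critical values: the means differ
```

In practice T^2 would be converted to an F statistic against the appropriate degrees of freedom; the shifted toy means here produce a value far beyond any common threshold.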


Influence of Water Scale on Thermal Flow Losses of Domestic Appliances
D. Dobersek, D. Goricanec

Abstract: Research results on how water scale precipitated on the heaters of small domestic appliances influences the consumption of electricity are presented. The majority of water scale samples are composed of aragonite, calcite and dolomite, and those components have an extraordinarily low thermal conductivity. The results also show that, at a 2 mm thick deposit, depending on the chemical composition of the water scale, the thermal flow is reduced by 10% to 40%; consequently, the consumption of electricity significantly increases.


Simulation of Multi-product Pipelines
Drago Matko, Sašo Blažič, and Gerhard Geiger

Abstract: The problem of modeling and simulating pipelines used for transporting different fluids is addressed in the paper. The problem is solved by including fluid density in the model besides the pressure and velocity of the medium. First, the system of nonlinear partial differential equations is derived. Then, the obtained model is linearised and transformed into transfer function form with three inputs and three outputs. The admittance form of the model description is presented. Since the transfer function is transcendental, it cannot be simulated using classical tools. A rational transfer function approximation of the model was used and validated on a real industrial pipeline. It was also compared to a model that does not take the changes in fluid density into account. The latter model cannot cope with batch changes, whereas the proposed one can.


Comparison of regression models based on nonparametric estimation techniques: Prediction of GDP in Turkey
Dursun Aydın

Abstract: This study discusses the comparison of nonparametric models for the prediction of GDP (Gross Domestic Product) per capita in Turkey. Two alternative situations are considered owing to seasonal effects. In the first case, a semi-parametric model is discussed in which the parametric component is a dummy variable for the seasonality. Smoothing spline and regression spline methods are used for prediction with the semi-parametric models. In the second case, the seasonal component is considered to be a smooth function of time, and therefore the model falls within the class of additive models. The results obtained by the semiparametric regression models are compared to those obtained by the additive nonparametric models.


Semiautomatic generation of database in finite element programs
Daniela P. Cârstea

Abstract: A pre-processor for 2D mesh generation in a CAD product based on the finite element method (FEM) is presented. Our software product is based on the multiblock method and is implemented in the C language. A user-friendly program interface is included and some communication languages are available in the communication protocol. In our software product the database consists of a set of files (text or binary). These files contain both the geometrical data of the elements and the physical properties (field sources, material properties, boundary conditions etc.). The database can be used by well-known software products for the graphics processing and post-processing stages of a finite element program. We present some aspects of the parallel implementation of the pre-processor. In our approach a coarse mesh is generated as the starting point of the parallel mesh generation. The domain to be meshed is decomposed into a number of sub-domains, a decomposition guided by physical considerations.


Intelligent processes for defect identification
Edson Pacheco Paladini

Abstract: This paper describes a knowledge-based system and other classical artificial intelligence techniques developed to identify imperfections or defects in industrial products. The defects we study typically appear on the external area of the piece (such as spots, fractures, scratches, and dark or white lines). The system has been deployed in wall and floor tile factories and has proved adequate for its purpose, as its application results show. The system works, basically, with codified information from the wall or floor tile faces. This information is acquired by special devices that pick up the image and transform it into an array of numbers and codes; the system behavior can therefore be defined by these pieces of information. Initially the system detects the existence of imperfections using a first group of computational programs; after that, a second group of programs defines the severity level of each detected defect (for instance, whether it implies rejecting the piece). Finally, a third group of programs (the identification system) informs users of the most probable kind of imperfection detected (defect identification). We show here the general ideas, structure and some results of the identification system, which can be seen as a useful and interesting application of knowledge-based systems to the quality control area.


Multi-market bidding strategy of power suppliers in China
Xingping Zhang, Runlian Wu, Ling Chen

Abstract: The five power generation groups of China have a monopolistic character, so we apply non-basic auction theory to design bidding rules that reduce the market power of power suppliers. Using non-basic auction theory, order statistics, and Monte Carlo simulation, we calculate the bidding probabilities in different cases. Taking into account the bidding probabilities and the revenue, we put forward bidding strategy models for the spot market and the spinning reserve market, which can be applied to calculate the power volume and the bid in the multi-market so as to maximize profit and minimize bidding risk. Sufficient spinning reserve volume is important to the security and reliability of the electrical system; the bidding strategy of the spinning reserve market prompts power suppliers to bid, because they can obtain a rational profit and reduce bidding risk through the spinning reserve market. An example supports the validity of the multi-market bidding model.
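
The Monte Carlo side of such a calculation can be sketched with a toy lowest-bid auction, where the probability of winning against the minimum of the rivals' bids follows from the order statistics of the minimum. The uniform price distribution and the number of rivals below are assumptions, not the paper's market data.

```python
import random

def win_probability(my_bid, n_rivals, trials=100_000, seed=42):
    """Monte Carlo estimate of the probability that `my_bid` undercuts the
    lowest of n_rivals rival bids drawn uniformly on [0.8, 1.2] (a toy
    price distribution; in a lowest-price auction the minimum bid wins)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        lowest_rival = min(rng.uniform(0.8, 1.2) for _ in range(n_rivals))
        wins += my_bid < lowest_rival
    return wins / trials

# by the order statistics of the minimum, P(win) = ((1.2 - b) / 0.4)**n
b, n = 1.0, 4
est = win_probability(b, n_rivals=n)
exact = ((1.2 - b) / 0.4) ** n
print(abs(est - exact) < 0.01)   # simulation agrees with the closed form
```

In the paper's setting the uniform distribution would be replaced by the estimated distribution of rival bids, and the win probability would be traded off against revenue in each market.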


Dichotomy method in testing-based fault localization
Sun Ji-Rong, Ni Jian-Cheng, and Li Bao-Lin

Abstract: Testing-based fault localization (TBFL), which uses test information to locate faults, has become a research focus in recent years. A dichotomy method is presented to perform TBFL. First, we optimize the test information itself from three aspects: localizing the searching scope using a slice technique, removing redundant test cases, and reducing the test suite with nearest series. Secondly, the diagnosis matrix is set up according to the optimized test information, and each statement in the failed slice is prioritized accordingly. Thirdly, the dichotomy method is iteratively applied in an interactive process to seek the bug: the searching scope is cut in two at the checking point cp, the point of highest priority in the searching scope; if cp is wrong, the bug is found; otherwise we ignore the code before or after it according to the result at cp. Finally, we conduct three studies with the Siemens suite of 132 program mutants. Our method scores 0.85 on average, which means we only need to check less than 15% of the program before finding the bug.
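
The bisection step of such a dichotomy search can be sketched as follows, with an oracle standing in for the interactive check at the point cp. This is a simplified illustration over a linear trace, not the authors' prioritized implementation.

```python
def locate_fault(statements, state_ok_after):
    """Bisect a failed execution trace: state_ok_after(i) reports whether
    the program state is still correct after executing statements[0..i]
    (a stand-in for the developer checking a breakpoint)."""
    lo, hi = 0, len(statements) - 1
    checks = 0
    while lo < hi:
        mid = (lo + hi) // 2
        checks += 1
        if state_ok_after(mid):
            lo = mid + 1      # state still good: the bug is later
        else:
            hi = mid          # state already bad: the bug is at mid or earlier
    return statements[lo], checks

trace = [f"s{i}" for i in range(100)]
buggy = 63                                   # hypothetical faulty statement
stmt, checks = locate_fault(trace, lambda i: i < buggy)
print(stmt, checks)   # finds s63 in 7 checks (~log2 of 100)
```

Each check halves the remaining scope, which is why the average inspection cost stays logarithmic rather than linear in the size of the failed slice.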


Knowledge Uncertainty and Composed Classifier
Dana Klimešová, Eva Ocelíková

Abstract: The paper deals with the relations between knowledge management, uncertainty and context evaluation against the background of computer science, artificial intelligence and the new possibilities of information technologies that can help us carry out knowledge management strategies. The paper discusses the problem of wide context (temporal, spatial, local, objective, attribute-oriented, relation-oriented) as a tool to compensate for and decrease the uncertainty of the data, classification and analytical process, and thereby to increase the information value of decision support. The contribution also deals with the problem of creating a composed classifier with a boosting architecture, whose components are classifiers working with the k-NN (k-nearest neighbour) algorithm.


Study of algorithms for decomposition of a numerical semigroup
Branco, M. B. and Franco, Nuno

Abstract: We study two algorithms to decompose a numerical semigroup S as an intersection of irreducible numerical semigroups. We also present a comparative study of two algorithms to compute the intersection of two numerical semigroups with embedding dimension two and the same multiplicity.
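
For illustration, the elements and gaps of a numerical semigroup with embedding dimension two can be computed by a simple reachability sketch. The generators and bound below are arbitrary choices, not taken from the paper.

```python
def semigroup(generators, bound):
    """Elements up to `bound` of the numerical semigroup generated by
    `generators` (all sums of the generators, including the empty sum 0)."""
    reach = [False] * (bound + 1)
    reach[0] = True
    for n in range(1, bound + 1):
        reach[n] = any(n >= g and reach[n - g] for g in generators)
    return {n for n in range(bound + 1) if reach[n]}

# <3,5> has embedding dimension two and multiplicity 3
s1 = semigroup([3, 5], 20)
gaps = sorted(set(range(20)) - s1)
print(gaps)                      # [1, 2, 4, 7]; the Frobenius number is 7
# the intersection of two numerical semigroups is again a numerical semigroup
s2 = semigroup([2, 7], 20)
print(sorted(s1 & s2)[:6])
```

Intersecting the element sets, as in the last line, is the naive version of the intersection computation the paper studies with dedicated algorithms.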


Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles
M. Fernanda P. Costa and Edite M. G. P. Fernandes

Abstract: This paper presents a performance evaluation of three sets of modifications that can be incorporated into the primal-dual interior point filter line search method for nonlinear programming illustrated herein. In this framework, each entry in the filter relies on three components (feasibility, centrality and optimality) that are present in the first-order optimality conditions. The modifications concern an acceptance condition, a barrier parameter update formula and a set of initial approximations to the dual variables. Performance profiles are plotted to compare the numerical results obtained, using the number of iterations and the number of optimality measure evaluations.


    Paper Title, Authors, Abstract (Issue 3, Volume 1, 2007)


Mathematical Model for the transmission of Plasmodium Vivax Malaria
Puntani Pongsumpun and I-Ming Tang

Abstract: Plasmodium vivax malaria differs from P. falciparum malaria in that a person suffering from P. vivax malaria can experience relapses of the disease. Between relapses, the malaria parasite remains dormant in the liver of the patient, leading to the patient being classified as being in the dormant class. A mathematical model for the transmission of P. vivax is developed in which the human population is divided into four classes: the susceptible, the infected, the dormant and the recovered. Two stable equilibrium states, a disease-free state E0 and an endemic state E1, are found to be possible. The E0 state is stable when a newly defined basic reproduction number R0 is less than one; if R0 is greater than one, the endemic state E1 is stable. The conditions for the second equilibrium state E1 to be a stable spiral node are established, and solutions in phase space are found to be trajectories spiraling into the endemic state. The different behaviors of the numerical results are shown for different parameter values.
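
The threshold role of R0 can be illustrated with a minimal SIR-type integration. This is a generic sketch, not the paper's four-class P. vivax model; the rates and step size are assumptions.

```python
def epidemic_peak(beta, gamma, days=400, dt=0.1):
    """Euler-integrate a minimal SIR model and return the peak infected
    fraction; the basic reproduction number is R0 = beta / gamma."""
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
    return peak

low = epidemic_peak(beta=0.05, gamma=0.1)   # R0 = 0.5 < 1: infection dies out
high = epidemic_peak(beta=0.3, gamma=0.1)   # R0 = 3 > 1: outbreak takes off
print(low < 0.002, high > 0.1)   # True True
```

The same threshold behavior carries over to the four-class model, with R0 defined from its transmission and recovery parameters rather than a single beta and gamma.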


Influence of incubation of virus for the transmission of dengue disease
Puntani Pongsumpun

Abstract: The transmission of dengue disease is studied through a mathematical model. The disease is transmitted between people by the bites of infectious Aedes aegypti mosquitoes. After infection with dengue virus, both the human and vector populations pass through an infected class before entering an infectious class, and only the infectious class can transmit dengue virus to the susceptible class. The original SIR (Susceptible-Infectious-Recovered) model cannot describe the difference between the infected and infectious classes, so a modified model is considered in this study. The model is formulated by separating the human population into susceptible, infected, infectious and recovered classes; the vector population is divided into susceptible, infected and infectious classes. The dynamical analysis method is used to analyze this modified model, and the results are confirmed numerically. We find that the infected class decreases the periods of oscillation in the population.


Model for the transmission of dengue disease in pregnant and non-pregnant patients
Puntani Pongsumpun and Rujira Kongnuy

Abstract: Recently, there has been a notable increase in dengue fever and dengue hemorrhagic fever cases in both the very young and in aged adults. Dengue in pregnant women has been increasingly reported. Many infants develop severe disease and may suffer complications and even death because of difficulties in early diagnosis and improper management. In this study, we present a mathematical model describing the transmission of dengue disease in pregnant and non-pregnant humans. The different probabilities of dengue transmission to pregnant and non-pregnant patients are considered. We analyze our model by the dynamical analysis method, and numerical simulations are shown to confirm our results. The basic reproductive rate of the disease is discussed.


Hypersonic Flow Interaction of Pitched Plates on Blunted Cone at Incidence
Salimuddin Zahir and Zhengyin Ye

Abstract: High-speed flow interactions for short protuberances installed on a standard blunt cone configuration were studied; the aerodynamic effects were found to be analogous to lateral jet interactions for Mach 3.5 to 9.7 on a conic geometry at incidence. Static aerodynamic coefficients and axial and lateral pressure distributions were determined using CFD tools for the flow interaction effects of pitched short protuberance geometries of cylindrical cross-section. It was further established that a pitched short protuberance fixed on a blunted cone causes an increase in normal force through an altered pressure distribution, with a consequent development of an aerodynamic pitching moment. Forward deflection of the protuberance was found to be more effective than an aft inclination, while the pressure distribution predicted by the CFD analysis agreed with the experimental results in the hypersonic range to an overall accuracy of ±8%.


Decomposition filters for multi-exponential and related signals
Vairis Shtrauss

Abstract: Decomposition of multi-exponential and related signals is generalized as a filtering problem on a logarithmic time or frequency scale, and finite impulse response (FIR) filters operating on logarithmically sampled data are proposed for its implementation. The filter types and algorithms are found for various time-domain and frequency-domain mono-components. It is established that the ill-posedness of the multi-component decomposition manifests itself as high sampling-rate-dependent noise amplification coefficients. A regularization method is proposed based on controlling the noise transformation by choosing an optimum sampling rate. An algorithm design is suggested that integrates the signal acquisition, the regularization and the discrete-time filter implementation. As an example, the decomposition of a frequency-domain multi-component signal by a designed discrete-time filter is considered.


Sampling-Reconstruction Procedure of Markov Chains with Continuous Time
V. Kazakov and Y. Goritskiy

Abstract: For the first time, the statistical description of the sampling-reconstruction procedure of Markov chains with continuous time and an arbitrary number of states is given. The analytic expression for the conditional probability density of the jump time moment is obtained. On the basis of this probability density function, the expression for the jump moment estimation is found. A methodology for the choice of the sampling interval is suggested, and one illustrative example is considered.


On some interpolation problems in polynomial spaces with generalized degree
Dana Simian, Corina Simian

Abstract: The aim of this paper is to study several interpolation problems in the space of polynomials of w-degree n. To this end, some new results concerning the polynomial spaces of w-degree are given. In this article, we consider only the case of functions in two variables. More details are obtained for the weight w = (1, w2). We find a set of conditions under which the space of polynomials of w-degree n is an interpolation space.


Algorithmic skeletons for numerical simulation of coupled problems
Ion T. Cârstea

Abstract: This paper presents some theoretical and numerical problems that arise in the analysis of coupled electromagnetic-thermal problems in electromagnetic devices. The principal objective of the paper is to describe some computational aspects of coupled electromagnetic and thermal fields in the context of the finite element method, with emphasis on the reduction of the computing resources. We present coupled models for the magnetic and thermal fields. The mathematical model for the magnetic field is based on the time-harmonic Maxwell equations in a vector magnetic potential formulation for axisymmetric fields; the model for the heat transfer is the heat conduction equation. We propose simplified numerical models for coupled fields in electromagnetic devices, with target examples on induction heating devices and high-voltage, large-power cables. Domain decomposition is presented in the context of the coupled fields: the analysis domain is divided into two overlapping subdomains for the two coupled fields, considering the physical significance of the pseudo-boundary between the two subdomains.


Distributed and Parallel Computing in MADM Domain Using the OPTCHOICE Software
Cornel Resteanu and Marin Andreica

Abstract: The paper presents a method for solving general Multi-Attribute Decision Making (MADM) problems by distributed and parallel computing with the OPTCHOICE software. We present the scheduling and load balancing algorithm for the concurrent solving of problem sets on a given number of parallel computers. An analysis of the construction of such a problem is made; in this way, a decomposition tree is emphasized, having the decision-makers on the first level, the states of nature on the second level, and the attributes of the problem on the third level. Corroborated with the analysis of the problem's data, these results lead to the conviction that a parallel algorithm for solving the general problem, starting from a particular problem, is possible. At each level of the tree one can state independent particular sub-problems that are solved in parallel, the sub-problems at a superior level waiting for the solutions of the sub-problems at the current level. Finally, the classical TOPSIS method is presented running in the parallel and multi-level context.
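
The classical TOPSIS step mentioned at the end can be sketched as follows. This is a textbook implementation on toy data; the decision matrix, weights and attribute types are assumptions, not OPTCHOICE internals.

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: score alternatives by relative closeness to the
    ideal solution. matrix[i][j] is alternative i on attribute j;
    benefit[j] is True when larger values of attribute j are better."""
    n_attr = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_attr)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_attr)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(cols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    scores = []
    for row in v:
        dp = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        dm = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(dm / (dp + dm))     # closeness coefficient in [0, 1]
    return scores

# three alternatives; attributes 1-2 are benefits, attribute 3 is a cost
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
best = max(range(len(scores)), key=scores.__getitem__)
print(best)
```

In the parallel setting described above, each sub-problem at a tree level would run such a ranking independently, with the higher level aggregating the resulting scores.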


Classification of the Insurance sector with logistic regression
Bahadtin Ruzgar and Nursel Selver Ruzgar

Abstract: In statistical case studies where categorical results such as "successful-unsuccessful", "ill-not ill" and "good-fair-bad" are obtained from the evaluation of data, logistic regression is a rather suitable statistical method. In this study, data for the years 2004, 2005 and 2006 from 53 companies active in the insurance sector in Turkey were evaluated using the logistic regression method. However, since the data were not sufficient for all the insurance companies, twelve insurance companies were eliminated from the evaluation. The forty-one companies used for the analysis were divided into two groups depending on their activity area. Seventeen companies were evaluated using data on the individual accident, health and life branches; twenty-four companies were evaluated using data on the fire, transportation, engineering, agriculture, all-risks, obligatory traffic, obligatory highway transportation, individual accident and other accident and health branches. Companies were ranked by success as those among the first 10 and those ranked between 11 and 20. Whether this classification of the 41 companies coincides with the classification into "successful" and "unsuccessful" companies according to the geometric mean and median was determined by comparison. The first six-month data of 2006 were used for control, and the classification obtained from the models was compared to the real classification of the companies.
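
A minimal logistic regression fitted by gradient descent illustrates the kind of binary "successful/unsuccessful" classification used. This is a toy sketch with invented data, not the paper's insurance dataset or software.

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression with one feature plus an
    intercept, a minimal stand-in for the paper's classification setting."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))   # predicted P(success)
            gw += (p - y) * x / n
            gb += (p - y) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy 'financial ratio' labeled successful (1) / unsuccessful (0); invented
xs = [0.1, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9, 1.1, 1.3, 1.5]
ys = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
w, b = fit_logistic(xs, ys)
predict = lambda x: 1 / (1 + math.exp(-(w * x + b))) > 0.5
print(predict(0.2), predict(1.2))   # low ratio -> False, high ratio -> True
```

With many financial ratios per company the single feature becomes a vector, but the fitted log-odds and the 0.5 cutoff work the same way.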


Mathematical Modeling of Multiple Intelligence Theory with Bayesian Theorem
Bahadtin Ruzgar and Nursel Selver Ruzgar

Abstract: In this work, the multiple intelligence theory proposed by Gardner, a professor of education at Harvard University, is modeled by Bayes' theorem under two hypotheses. Howard Gardner initially formulated a list of seven intelligences, and then added two more. As a different approach, if set theory is used for multiple intelligences, the structure of multiple intelligences can be generalized to set theory under four properties of an intelligence algebra. Assuming that the number of intelligences increases, Boolean algebra in set theory becomes applicable. Bayes' theorem, through the application of conditional probability, generates a good structure for multiple intelligences. Bayes' theorem was applied to two hypotheses, in which the mutual intersections of the n intelligences are empty and non-empty sets respectively; using conditional probability, it can be shown that multiple intelligences and Bayes' theorem are in good harmony, i.e. multiple intelligences can be explained by Bayes' theorem.
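
The conditional-probability update underlying this modeling can be sketched with Bayes' theorem over mutually exclusive hypotheses. The priors and likelihoods below are invented for illustration, not values from the paper.

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over n mutually exclusive hypotheses H_i:
    P(H_i | E) = P(E | H_i) P(H_i) / sum_j P(E | H_j) P(H_j)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# three disjoint 'intelligence profile' hypotheses, updated by the evidence
# of a high score on a spatial task (all numbers invented for illustration)
priors = [0.5, 0.3, 0.2]
likelihoods = [0.2, 0.5, 0.9]        # P(high spatial score | profile)
post = posterior(priors, likelihoods)
print([round(p, 3) for p in post])   # posteriors renormalize to sum to 1
```

The empty-intersection hypothesis of the paper corresponds exactly to this mutually exclusive case; overlapping intelligences would require inclusion-exclusion terms in the denominator.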


Some algorithms for generating receipts in the cutting-covering problem
Paul Iacob, Daniela Marinescu, and Kinga Kiss-Iakab

Abstract: We consider a cutting-covering problem, defined in our previous papers: the problem of covering a rectangular support with rectangular pieces cut from a roll. We first prove that our earlier algorithm for the rectangular cutting-covering problem without losses is not optimal. Starting from a decomposition of a natural number into sums of naturals, we developed an algorithm giving a better solution for the rectangular cutting-covering problem. We then continue with the modeling of the cutting-covering problem as an integer linear optimization problem. To solve this problem a branch-and-bound algorithm can be used but, because this algorithm has a high time complexity, we construct another algorithm, which generates receipts for the cutting-covering problem with losses, but much faster.


Some Topological Properties of Semi-Dynamical Systems
M. H. Anvari and M. Atapour

Abstract: Recently there has been an extensive study on Relative Semi-Dynamical Systems (RSD-Systems). In this paper, we explore some topological properties of RSD-systems. Here, in particular, minimal RSD-systems are characterized and transitive homeomorphisms are investigated. Moreover, -level relative topological entropy is extended to RSD-systems. Finally, as a computational example, we develop an RSD-system over the polynomial function space R[x] based on the derivative operator; we also calculate -level relative topological entropy for this system.


Application of Resistivity Data in Optimizing Fracture Network Model: A Mathematical Approach
Nam H. Tran, Amna Ali and Abdul Ravoof

Abstract: Seismic data are widely considered the most important data source in energy engineering, providing critical information for the mapping and characterization of oil, gas, condensate, geothermal and coal bed methane reservoirs. In fractured media, however, their application is limited. This paper presents a mathematical model in which the relationships between the seismic P-wave/S-wave velocity and the resistivity of reservoir rock and fluid are studied. To account for variances between fractured and conventional pore-matrix media, primary and secondary rock and fluid properties are integrally examined, including formation factor, primary porosity, secondary porosity, tortuosity, cementation exponent, partitioning coefficient, crystallisation and mineralisation. The relationship, novel in fracture heterogeneity, is validated by data from a producing naturally fractured gas reservoir. It opens up several options for utilising new technologies in petroleum exploration: electromagnetic surveys, magneto-telluric data, and artificial neural network techniques.


A First Approach to Self-learning Statistics Activities at the UPC
M.I. Ortego and J. Gibergans-Báguena

Abstract: The European institutions of higher education have undertaken one of the most important educational reform movements in history. It represents an opportunity for renovation and improvement which will require profound reflection, perseverance and a common effort on the part of all those involved in higher education. The concepts and strategies defined in the Bologna Process to develop a European Higher Education Area (EHEA) involve a change in educational programs. This change has to be adapted to innovative teaching and learning processes based on achieving specific knowledge according to the degree, and on developing abilities and skills to adapt that knowledge to the professional field. Thus, the method has to be focused on the learning process (based on the student and his or her capability to learn) and not on the teaching process (based on the teacher's work). In this paper, we describe several experiences applied at the Universitat Politècnica de Catalunya (UPC). These experiences are based on the adaptation of the educational plans of Statistics subjects. Professors take on a new role as guides in the students' learning process. Attention is focused on the increase in autonomous work that students will have to do using the "Self-Learning Activities" (SLA).


Comparison of Computation Algorithm for Three-Phase Voltage Flicker Equivalent Value
Shu-Chen Wang, Yu-Jen Chen, and Chi-Jui Wu

Abstract: Four simple but effective computation algorithms are compared for calculating three-phase voltage flicker equivalent values. Owing to violent and stochastic fluctuations in the different phases of three-phase circuits, different voltage flicker components may exist in each phase. Traditionally, the flicker components in each phase are calculated separately, and the averages of the three single-phase values are taken as the three-phase equivalent values. In this paper, however, fast computation algorithms are investigated that calculate the three-phase equivalent values directly. After the three-phase voltage waveforms are recorded, the voltage flicker equivalent components are obtained from voltage envelopes constructed from RMS values or instantaneous voltage vectors. The effects of jump sampling, harmonics, and power-frequency shifting are examined. Given waveforms and field-measured waveforms are used to reveal the advantages of the methods. The study results show that the method using instantaneous voltage vectors is the simplest and most effective way to obtain the three-phase voltage flicker equivalent values.
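The paper's four algorithms are not reproduced in this abstract; as a minimal sketch of one building block it mentions, constructing a voltage envelope from cycle-by-cycle RMS values can be illustrated as follows. All signal parameters here (60 Hz carrier, 10 Hz flicker modulation, 5% depth, 6 kHz sampling) are hypothetical test values, not taken from the paper.

```python
import numpy as np

def rms_envelope(v, fs, f0=60.0):
    """Cycle-by-cycle RMS envelope of a sampled voltage waveform.

    v  : sampled voltage (1-D array)
    fs : sampling rate in Hz
    f0 : power frequency in Hz
    """
    n = int(round(fs / f0))              # samples per power cycle
    n_cycles = len(v) // n
    v = v[:n_cycles * n].reshape(n_cycles, n)
    return np.sqrt((v ** 2).mean(axis=1))

# Hypothetical test signal: 60 Hz carrier amplitude-modulated
# by a 10 Hz flicker component with 5% modulation depth.
fs = 6000.0
t = np.arange(0, 1.0, 1.0 / fs)
v = (1.0 + 0.05 * np.sin(2 * np.pi * 10 * t)) * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)

env = rms_envelope(v, fs)
depth = (env.max() - env.min()) / env.mean()   # peak-to-peak relative fluctuation
```

The recovered fluctuation depth is close to twice the 5% modulation amplitude, as expected for a sinusoidally modulated envelope.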


Mathematical Modelling and Numerical Simulation of Fluid-Magnetic Particle Flow in a Small Vessel
Benchawan Wiwatanapataphee, Kittisak Chayantrakom, and Yong-Hong Wu

Abstract: Fluid-solid flow is an interdisciplinary research area of great technological, commercial and medical importance. One particular application is the drug delivery system in which magnetic targeting offers the ability to target a specific site, such as a tumor. This paper presents a mathematical model and a finite element method, based on the Arbitrary Lagrangian-Eulerian approach, for studying blood-magnetic particle flow in small vessels. Four models with one, three, five, and nine particles are used to analyze the flow pattern and the pressure distribution along the flow direction. Effects of the magnetic force on the blood-particle flow are investigated.


    Paper Title, Authors, Abstract (Issue 4, Volume 1, 2007)


Adaptive track-keeping control of underwater robotic vehicle
Jerzy Garus

Abstract: The paper describes a control method for an underwater robotic vehicle applied to the problem of tracking a reference trajectory. A multidimensional non-linear model expresses the robot's dynamics. Command signals are generated by an autopilot consisting of four independent controllers with a parameter adaptation law implemented. Control quality is considered both with and without environmental disturbances. Selected results of computer simulations illustrating the effectiveness and robustness of the proposed control system are included.


A novel hardware-software co-design for automatic white balance
Chin-Hsing Chen, Sun-Yen Tan, and Wen-Tzeng Huang

Abstract: As electronic techniques continue to improve rapidly, the cameras and video camcorders used for image acquisition have become digital. The colors of photographs can look very different due to differences in the illumination of the light source when a picture is taken. Human eyes automatically adjust the perceived color when the illumination of the light source varies. However, the most frequently used image sensor, the charge-coupled device (CCD), cannot correct color as human eyes do. This paper presents a hardware-software co-design method based on Lam's automatic white balance algorithm, which combines the Gray World Assumption and Perfect Reflector Assumption algorithms. The execution of Lam's algorithm was divided into three stages, and a hardware-software co-design and analysis was carried out for each stage. Three factors, namely processing time and the Slice and DSP48 hardware resources, were used to formulate an objective function employed to evaluate system performance and hardware resource cost. Experimental results show that suitable hardware-software partitions were achieved. An embedded processor, the MicroBlaze developed by Xilinx, together with a floating-point unit, was used for the software part of the algorithm, while the hardware part was implemented using an IP-based method. Such a system-on-programmable-chip architecture reduces the memory and CPU load on the PC and offers easy modification and function expansion.
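Lam's combined algorithm is not given in the abstract; as a minimal sketch of just its first ingredient, the classical Gray World Assumption scales each color channel so its mean matches the global mean, on the assumption that the scene averages to gray under a neutral illuminant. The synthetic bluish test image below is a hypothetical example, not data from the paper.

```python
import numpy as np

def gray_world(img):
    """Gray World Assumption white balance.

    img : H x W x 3 float array (RGB) with values in [0, 1].
    Each channel is scaled so its mean equals the average of the
    three channel means.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gain = means.mean() / means               # per-channel gains
    return np.clip(img * gain, 0.0, 1.0)

# Hypothetical test image with a bluish color cast
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3)) * np.array([0.6, 0.8, 1.0])
out = gray_world(img)
```

After correction the three channel means coincide, which is exactly the Gray World criterion; Lam's method additionally constrains the brightest pixel via the Perfect Reflector Assumption.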


An Asphalt Emulsion Modified by Compound of Epoxy Resin and Styrene-Butadiene Rubber Emulsion
Zhang Ronghui , He Yuanhang

Abstract: A modified asphalt emulsion with superior performance is produced when a compound of waterborne epoxy resin and styrene-butadiene rubber is mixed into emulsified asphalt. This paper describes the method and technique for preparing the material, as well as tests and research on aspects such as adhesion, the performance of evaporation residues, and durability. The results reveal that this modified asphalt emulsion shows road performance and indexes better than those of ordinary asphalt emulsion and of asphalt emulsion modified by styrene-butadiene rubber latex alone, and that it will find application in engineering.


Image analysis of electrorheological flow patterns
Petr Ponížil, Vladimír Pavlínek, Takeshi Kitano and Tomáš Dřímal

Abstract: A method of image analysis of the flow patterns that develop in electrorheological fluids is presented. Due to the preparation process, electrorheological samples show a radial symmetry. Numerical transformations are necessary to remove sample deformation and to obtain the correct radial dependence of image intensity as the function characterizing the sample image.


Uniformly Ultimate Boundedness Control for Switched Linear Systems with Parameter Uncertainties
Liguo Zhang, Yangzhou Chen and Nikos E. Mastorakis

Abstract: This paper presents uniformly ultimate boundedness (UUB) control design for switched linear systems with parametric uncertainties. Only the possible bound of the uncertainty is needed. Under arbitrary switching laws, a continuous state feedback control scheme is proposed in order to guarantee uniformly ultimate boundedness of every system response within an arbitrary small neighborhood of the zero state. The design techniques are based on common Lyapunov functions and Lyapunov minimax approach.


Probability of Failure on Demand for Systems with Partial Stroke Test
J. Borcsok, P. Holub and D. Machmur

Abstract: The average Probability of Failure on Demand (PFD) over the Proof Test interval is one way to compare different safety-related systems. In this paper we derive the average PFD for a 1oo1 system taking into account the Proof Test as well as the Partial Stroke Test (PST). We thereby specify a unique mathematical function without the help of a probability band. In doing so we obtain additional correlations, on the one hand between the reduction of the PFD and the diagnostic coverage factor, and on the other hand between the PFD value of a system without PST and that of a system with PST. Finally we present an approximation for calculating the PFD value when the ratio between the PST interval and the Proof Test interval is very small.


System Dynamics Simulation: an Application to Regional Logistics Policy Making
Alberto De Marco, and Carlo Rafele

Abstract: The fast-pace development of trades with the Far East is giving the Mediterranean Sea the chance of becoming a major logistics hub. In the Mediterranean-front E.U. regions, public and private investments are aimed at this opportunity by integrating transportation networks, sea ports, and inland logistics platforms. With specific regard to the North-West of Italy, a model based on System Dynamics has been simulated to help decision and policy makers in the task of planning and directing the investment effort. The model provides impact analysis of freight traffic flow trends in the region on the medium and long-term, as a result of the interaction between exogenous variables and different case-scenarios for road and railroad infrastructure investments.


Galerkin finite volume solution of heat generation and transfer in RCC dams
S.R. Sabbagh-Yazdi, N.E. Mastorakis, and A.R. Bagheri

Abstract: A Galerkin finite volume solution of the temperature field on unstructured finite volumes is introduced. In this software, the transient PDE for heat transfer in solid media is coupled to a suitable algebraic relation for concrete heat generation. The discrete form of the heat transfer equations is derived by multiplying the governing equation by a piecewise linear test function and integrating over a sub-domain around each computational node. The solution domain is divided into hybrid structured/unstructured triangular elements. The triangular elements in the structured part of the mesh can be activated to simulate the gradual movement of the top boundary of the domain as the concrete lifts advance. The accuracy of the developed model is assessed by comparing the results with available analytical solutions and experimental measurements of two-dimensional heat generation and transfer in a square domain. The computer model is then used to simulate the transient temperature field in a typical RCC dam section.


Implementation of a Numerical Method for the Stability Analysis of Asynchronous Motors Operating at Variable Frequency
Sorin Enache, Aurel Campeanu, Ion Vlad, Monica Adela Enache

Abstract: This paper presents the implementation of a numerical method for the stability analysis of a system driven by an asynchronous motor. The simulations, the experimental results and the conclusions obtained are detailed.


A Modal Logic Approach to Decision Process Petri Nets
Julio B. Clempner and Jesus Medel

Abstract: In this paper we introduce a new modeling paradigm for developing decision process representations by associating to any Decision Process Petri net (DPPN) a Kripke structure (KS). The principal characteristic of this model is its ability to represent and analyze the shortest-path properties of a decision process. In this sense, we use a Lyapunov-like function as a state-value function for path planning, obtaining as a result new characterizations of final decision points. We show that the dynamics of the DPPN can be captured by a KS and that some dynamic properties of a DPPN can be stated in temporal logics. The temporal logic is constructed according to the Lyapunov-like function syntax and semantics. Moreover, we present some results and discuss possible directions for further research.


Numerical Modeling and Experimental Validation of a Turbulent Separated Reattached Flow
Florin Popescu, Tănase Panait

Abstract: An experimental study was conducted to analyse the velocity field of a fully developed turbulent incompressible flow behind a backward-facing step with a curved nose shape. Laser Doppler anemometry was used as the measurement technique. The Reynolds number, Re, based on the step height, h, and the maximum velocity U0max of the velocity distribution at the inlet, was 84000. A Fluent simulation of the flow for the same geometrical and flow conditions as the experimental ones was performed. The velocity fields from the numerical simulation and from the experimental study were compared and analysed. Both the numerical and the experimental results show the existence of four interacting zones: the separated free shear layer, the recirculating region under the shear layer, the reattachment region, and the attached/recovery region.


Accuracy Assessment and Application of 3D Galerkin Finite Volume Explicit Solver for Seepage and Uplift in Dam Foundation
S.R. Sabbagh-Yazdi, N.E. Mastorakis, and B. Bayat

Abstract: In this paper, the development of a Galerkin finite volume three-dimensional seepage solver on a mesh of tetrahedra is described. The numerical analyzer is used to solve the seepage in porous media and the uplift under gravity dams with an upstream cut-off wall. The results of the numerical solver, in terms of uplift pressure in the natural foundation of a gravity dam with an upstream cut-off wall, are compared with analytical solutions obtained by applying the conformal mapping technique for a constant unit ratio of foundation depth to half of the dam base (T/b = 1). The computed uplift pressures for homogeneous and isotropic conditions show acceptable agreement with the analytical solutions for various ratios of cut-off wall depth to half of the dam base (s/b). Having assessed the accuracy of the model, it is applied to evaluate the quality of the results of the common empirical relations for uplift pressure estimation. To demonstrate the ability of the verified model to cope with real-world problems, it is applied to solve the seepage through the natural porous foundation of a gravity dam with three inclined layers of different coefficients of permeability.


Modeling lane-changing behavior based on queue length at an urban signalized intersection
Amiruddin Ismail, Shahrum Abdullah, Azami Zaharim, and Ibrahim Ahmad

Abstract: This research aims to study and develop models of drivers' lane-changing behavior in urban areas using the logistic regression method. Initially, a pilot study was conducted using a videotape recording technique to film an approach road leading to a signalized intersection on an urban road during the morning off-peak period. Interrelated coding methods were designed to describe and verify the drivers' lane-changing maneuvers. Later, more video-taping studies were done to develop part of the questionnaires. A questionnaire study analyzing the drivers' background, experience, attitudes, lane-changing practices and driving behavior on the road was carried out in order to develop lane-changing behavior models using the logistic regression method. Fourteen models of lane changing and non-lane changing were developed and validated statistically. The statistical validations were based on parameters such as the Omnibus test of model coefficients, -2 log-likelihood, Cox and Snell R square, Nagelkerke R square, the Hosmer and Lemeshow test, chi-square, the classification table, standard error, the Wald statistic, degrees of freedom, tests for significance, odds ratios and histograms of estimated behavioral probabilities.
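The paper's fourteen models and predictors are not reproduced here; as a minimal sketch of the general technique it uses, a binary logistic regression relating a single hypothetical predictor (queue length) to a lane-change outcome can be fitted by gradient descent as follows. The data, the choice of predictor, and the learning-rate settings are all illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit a binary logistic regression model by gradient descent.

    X : n x d feature matrix (e.g. queue length)
    y : n-vector of 0/1 outcomes (1 = lane change)
    Returns (weights, bias).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / n           # gradient of log-loss in w
        b -= lr * (p - y).mean()                # gradient of log-loss in b
    return w, b

def odds_ratio(w):
    """Odds ratio per unit increase of each predictor."""
    return np.exp(w)

# Hypothetical data: longer queues make a lane change more likely
rng = np.random.default_rng(1)
queue = rng.uniform(0, 10, 200)
prob = 1.0 / (1.0 + np.exp(-(queue - 5.0)))
y = (rng.random(200) < prob).astype(float)

w, b = fit_logistic(queue[:, None], y)
```

A positive fitted weight (odds ratio above 1) indicates that the predictor raises the odds of a lane change, which is the kind of interpretation the Wald statistic and odds ratios in the abstract support.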


Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions
Sung-Hyuk Cha

Abstract: Distance or similarity measures are essential to solve many pattern recognition problems such as classification, clustering, and retrieval problems. Various distance/similarity measures that are applicable to compare two probability density functions, pdf in short, are reviewed and categorized in both syntactic and semantic relationships. A correlation coefficient and a hierarchical clustering technique are adopted to reveal similarities among numerous distance/similarity measures.
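The survey's full taxonomy is not reproduced here; as a small self-contained illustration, three widely used measures from this family (Euclidean, Kullback-Leibler, Bhattacharyya) can be computed for a pair of discrete pdfs as follows. The two example pdfs are arbitrary.

```python
import numpy as np

def euclidean(p, q):
    """Euclidean (L2) distance between two discrete pdfs."""
    return float(np.sqrt(np.sum((p - q) ** 2)))

def kl_divergence(p, q):
    """Kullback-Leibler divergence (asymmetric); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def bhattacharyya(p, q):
    """Bhattacharyya distance: -ln of the Bhattacharyya coefficient."""
    return float(-np.log(np.sum(np.sqrt(p * q))))

# Two arbitrary discrete pdfs over three bins
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.2, 0.3, 0.5])
```

All three vanish when the two pdfs coincide, but they behave differently otherwise (e.g. KL is asymmetric while the other two are symmetric), which is one axis of the syntactic/semantic categorization the survey discusses.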


Architecture for filtering images using Xilinx System Generator
Alba M. Sánchez G., Ricardo Alvarez G., Sully Sánchez G.; FCC and FCE BUAP

Abstract: This paper presents an architecture for pixel-by-pixel filters and region filters for image processing using Xilinx System Generator (XSG). This architecture offers an alternative through a graphical user interface that combines MATLAB, Simulink and XSG, and explores important aspects concerning hardware implementation.


Advanced Synchronization Scheme for Wideband Mobile Communications
Yumi Takizawa, Saki Yatano, and Atushi Fukasawa

Abstract: This paper describes a high-performance synchronization scheme based on analog matched filters. Synchronization is the toughest problem in wideband urban mobile communications. A simplified configuration for a wideband radio system was designed with advanced synchronization based on matched filter technologies. A set of parallel matched filters was built using CMOS semiconductor technologies. The new scheme has been shown to realize radio systems with simplified configurations and high performance.


Copyrighted Material,  NAUN