International Journal of Applied Mathematics and Informatics

ISSN: 2074-1278
Volume 9, 2015


Notice: As of 2014 and for the forthcoming years, the publication frequency/periodicity of NAUN Journals is adapted to the 'continuously updated' model. What this means is that instead of being separated into issues, new papers will be added on a continuous basis, allowing a more regular flow and shorter publication times. The papers will appear in reverse order, therefore the most recent one will be on top.


 




Title of the Paper: Robust Stability Analysis for Affine Linear Plants with Time-Delay Using the Value Set Concept

 

Authors: Silvia Florida, Gerardo Romero, Ramiro Ibarra, David Lara, Irma Pérez, Aldo Méndez, Alberto Reyna

Pages: 127-130

Abstract: In this paper we present a new computational algorithm for the robust stability analysis of linear time-delay systems with affine linear uncertainty. The method evaluates a family of polynomials through slope tracing; the extreme points obtained form the value set. With this value set and the zero exclusion principle, the robust stability of the system can be verified.
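For an affine uncertainty structure, the value set at each frequency is the convex hull of the plant evaluated at the vertices of the uncertainty box, so zero exclusion reduces to a geometric test on a handful of complex points. A minimal sketch of that test (the quasipolynomial p(s, q) = s + q1 + q2·e^(-τs) and all names are illustrative assumptions, not the paper's algorithm):

```python
import cmath
import math

def value_set_vertices(omega, tau, q_bounds):
    # Hypothetical affine family p(s, q) = s + q1 + q2 * exp(-tau * s);
    # for affine uncertainty the value set vertices come from the
    # vertices of the uncertainty box q_bounds = [(q1min, q1max), (q2min, q2max)].
    s = 1j * omega
    return [s + q1 + q2 * cmath.exp(-tau * s)
            for q1 in q_bounds[0] for q2 in q_bounds[1]]

def zero_excluded(verts):
    # The origin lies outside the convex hull of the points iff all
    # points fit in an open half-plane, i.e. iff the largest angular
    # gap between their directions exceeds pi.
    angles = sorted(math.atan2(v.imag, v.real) for v in verts)
    gaps = [angles[i + 1] - angles[i] for i in range(len(angles) - 1)]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    return max(gaps) > math.pi
```

Sweeping `zero_excluded(value_set_vertices(w, tau, box))` over a frequency grid is the standard way the zero exclusion principle is applied in practice.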


Title of the Paper: Parameter Tuning for the Ant Colony Optimization Algorithm Used in ISR Systems

 

Authors: P. Stodola, J. Mazal, M. Podhorec

Pages: 123-126

Abstract: This paper deals with the Ant Colony Optimization (ACO) algorithm developed at the University of Defence, Brno, Czech Republic. The algorithm is a metaheuristic designed for solving the Multi-Depot Vehicle Routing Problem (MDVRP). It has been integrated into the Tactical Decision Support System (TDSS), which is aimed at supporting commanders in their decision-making processes; TDSS contains several tactical models based on the MDVRP. This paper focuses particularly on parameter tuning for the ACO algorithm.
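The parameters typically tuned in ACO are the pheromone weight (alpha), the heuristic weight (beta) and the evaporation rate (rho). A minimal Ant System on a toy symmetric TSP instance, with a grid-search tuner, sketches the idea; the grid values and the reduction from MDVRP to TSP are illustrative assumptions, not the paper's setup:

```python
import itertools
import random

def aco_tour_length(dist, alpha=1.0, beta=2.0, rho=0.5, n_ants=8, iters=30, seed=0):
    """Minimal Ant System for a symmetric TSP distance matrix
    (a hypothetical stand-in for the MDVRP models). Returns the best
    tour length found."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best = float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            city = rng.randrange(n)
            tour, unvisited = [city], set(range(n)) - {city}
            while unvisited:
                cand = list(unvisited)
                w = [tau[city][j] ** alpha * (1.0 / dist[city][j]) ** beta
                     for j in cand]
                city = rng.choices(cand, weights=w)[0]
                tour.append(city)
                unvisited.remove(city)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((length, tour))
            best = min(best, length)
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for length, tour in tours:               # pheromone deposit
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best

def tune(dist):
    """Grid search over (alpha, beta, rho); the grid itself is arbitrary."""
    grid = itertools.product([0.5, 1.0, 2.0], [1.0, 2.0, 5.0], [0.1, 0.5])
    return min(grid, key=lambda p: aco_tour_length(dist, *p))
```

In practice each parameter combination would be averaged over several seeds before selection.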


Title of the Paper: Management of Data Science: How to Prevent Errors in Work with Data for Sustainable Development?

 

Authors: M. Janakova

Pages: 116-122

Abstract: This paper focuses on designing complex work with available data for improved analysis based on a four-dimensional (4D) model. The aim is to search for unexpected knowledge, minimize errors in large volumes of data and better support sustainable development. Data processing is an activity that transforms data into information and knowledge. Knowledge is important for everyday decision-making and for increasing competitive advantage. A competitive advantage influences the market, but a suitable solution for global problems and rapid changes needs collaboration, and optimal collaboration helps solve existing problems. There are numerous approaches linked to extensive data and data science, but errors exist, and data science must speak more closely to existing reality. The reason is the high complexity of the implemented activities and processes. In data science, the default layers are computer science, applications, simulations, statistics, analytics and mathematics. These layers are implemented in order to search for solutions in selected fields. This composition (layers and fields) creates the basis for a two-dimensional (2D) model. Practical experience motivates extending the 2D model with a relation (dimension) for known intelligences such as Artificial, Business, Computational, Customer and Swarm, yielding a three-dimensional (3D) model. The selection of individual layers and intelligences is based, for practical reasons, on adopted preferences; this selection (zoom) creates a fourth dimension for the designed model. Based on these four dimensions (layers, fields, intelligences and zooms), a 4D model is designed for a better description of existing reality. The created connections between dimensions prevent unexpected errors and support collaboration for sustainable development in the global view.


Title of the Paper: Unranking Algorithms for Combinatorial Structures

 

Authors: X. Molinero, J. Vives

Pages: 110-115

Abstract: We present an implementation of some unlabeled and labeled unranking algorithms for the open-source algebraic combinatorics package MuPAD-Combinat of the computer algebra system MuPAD. We have compared our implementation with the previous versions; all our algorithms improve on the previous ones with respect to the required CPU time. Moreover, we have also developed unranking algorithms for some unlabeled and labeled admissible operators that are not yet implemented in MuPAD-Combinat. These algorithms can also produce combinatorial structures useful for generating molecules in chemistry and influence graphs in game theory and social networks, among other topics.
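Unranking maps an integer rank directly to the structure holding that position in a fixed enumeration order, without generating the earlier ones. As a minimal illustration of the idea (not the MuPAD-Combinat implementation), here is unranking of the r-th k-subset in lexicographic order:

```python
from math import comb

def unrank_combination(n, k, r):
    """Return the r-th (0-indexed) k-subset of {0, ..., n-1} in
    lexicographic order. Classic combinadic unranking: at each step,
    count how many subsets start with the candidate element."""
    out, x = [], 0
    while k > 0:
        c = comb(n - x - 1, k - 1)   # subsets whose next element is x
        if r < c:
            out.append(x)
            k -= 1
        else:
            r -= c                   # skip all subsets starting with x
        x += 1
    return out
```

The same counting strategy generalizes to the labeled and unlabeled admissible constructors (union, product, sequence, set, cycle) treated in the paper.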


Title of the Paper: Building Modified Modular Cryptographic Systems

 

Authors: Biyashev R., Nyssanbayeva S., Begimbayeva Ye., Magzom M.

Pages: 103-109

Abstract: This paper describes the results of creating modified nonconventional systems of encryption and digital signature. Cryptosystems called nonconventional, nonpositional or modular are based on nonpositional polynomial notations (NPNs, i.e. modular arithmetic). The development of the block cipher model comprises the construction of a modified nonpositional block cipher algorithm, using an analog of the Feistel scheme, together with a mode of application for this modified algorithm. The modification of the digital signature system is based on the Digital Signature Algorithm (DSA) and NPNs. Applying an algebraic approach based on NPNs reduces the length of the key for a digital signature without significantly lowering its cryptographic strength. The application of NPNs allows creating effective cryptographic systems of high reliability, which ensure the confidentiality, authentication and integrity of stored and transmitted information. Computer simulation of the modified cryptosystems based on NPNs will allow developing recommendations for their use and for reliable generation of complete secret keys.
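A nonpositional notation represents a value by its residues modulo a set of pairwise coprime moduli rather than by positional digits. The paper works with polynomial moduli; the integer analog below (a residue number system with Chinese Remainder Theorem reconstruction) is a simplified sketch of the same principle, not the authors' construction:

```python
from math import prod

def to_residues(x, moduli):
    """Encode x nonpositionally as its residues modulo coprime moduli."""
    return [x % m for m in moduli]

def from_residues(res, moduli):
    """Decode via the Chinese Remainder Theorem: the residues determine
    x uniquely modulo the product of the moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M
```

Arithmetic in such a system is componentwise per modulus, which is what makes modular cryptosystems fast and parallelizable.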


Title of the Paper: Simulating User Activities for Measuring Data Request Performance of the ECM Visualization Tasks

 

Authors: Juris Rats

Pages: 96-102

Abstract: The aim of the research is to assess the performance of the NoSQL database Clusterpoint when processing data requests of a large user base executing Enterprise Content Management (ECM) visualization tasks. A user activity model comprising user types, task types, data interaction types, expected preparation times and execution frequencies is defined. Visualization models relevant to the defined task types are specified, and the data interaction types are matched against the data request types of the Clusterpoint database. A methodology for creating user activity flows for the defined user base is described and used to measure Clusterpoint performance when processing the data requests those flows generate. The performance tests are executed for user bases of up to 30,000 users and for several database cluster configurations, and the test results are analyzed and assessed.
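A user activity flow of the kind described can be generated by merging per-user request streams into one timeline. The sketch below assumes exponentially distributed inter-request times per user type; the user types, intervals and task names are hypothetical, not taken from the paper:

```python
import heapq
import random

def simulate_flow(user_types, horizon, seed=0):
    """Generate a merged, time-ordered request timeline for a user base.
    user_types: list of (user_count, mean_interval_seconds, task_type).
    Returns a list of (timestamp, task_type) events up to `horizon`."""
    rng = random.Random(seed)
    heap = []
    for count, mean_iv, task in user_types:
        for _ in range(count):
            # each user's first request arrives after an exponential delay
            heapq.heappush(heap, (rng.expovariate(1.0 / mean_iv), mean_iv, task))
    events = []
    while heap:
        t, mean_iv, task = heapq.heappop(heap)
        if t > horizon:
            continue                      # this user's stream ends here
        events.append((t, task))
        heapq.heappush(heap, (t + rng.expovariate(1.0 / mean_iv), mean_iv, task))
    return events
```

Replaying such a timeline against the database and recording per-request latencies gives the measurements the tests are built on.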


Title of the Paper: Predicting the Responses of States to the Nuclear Proliferation Issue Using Game-Theory

 

Authors: Peter Z. Revesz

Pages: 90-95

Abstract: This paper argues that the willingness of countries or states to sign onto international treaties regarding nuclear non-proliferation and honor their former commitments is largely determined by their economic and security conditions that can be expressed by a few key parameters and whose interactions can be analyzed using game theory.
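The paper's actual parameters and payoff structure are not reproduced here, but the mechanics of such an analysis — finding the stable outcomes of a signing game — can be sketched with a generic pure-Nash-equilibrium finder over a two-state sign/defect game with made-up payoffs:

```python
def pure_nash(payoffs):
    """Find pure-strategy Nash equilibria of a 2x2 game.
    payoffs[(a1, a2)] = (u1, u2) for actions in {'sign', 'defect'}."""
    acts = ['sign', 'defect']
    eqs = []
    for a1 in acts:
        for a2 in acts:
            u1, u2 = payoffs[(a1, a2)]
            # equilibrium: neither state gains by unilateral deviation
            if all(payoffs[(b1, a2)][0] <= u1 for b1 in acts) and \
               all(payoffs[(a1, b2)][1] <= u2 for b2 in acts):
                eqs.append((a1, a2))
    return eqs
```

With prisoner's-dilemma-style payoffs (mutual signing efficient, unilateral defection tempting), the unique equilibrium is mutual defection — illustrating why economic and security parameters, not treaty text, drive outcomes.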


Title of the Paper: Comparison of the Accuracy of L-moments, TL-moments and Maximum Likelihood Methods of Parameter Estimation

 

Authors: Diana Bílková

Pages: 55-89

Abstract: Despite not producing good results, the method of moments is commonly applied when constructing the most appropriate parametric distribution for a given data file. An alternative approach is to use the so-called order statistics. The present paper deals with the application of order statistics (the parameter estimation methods of L-moments and TL-moments) to economic data. Theoretical advantages of L-moments over conventional moments become obvious when applied to small data sets, e.g. in hydrology, meteorology and climatology, considering extreme precipitation in particular. L-moments have been introduced as a robust alternative to classical moments of probability distributions. However, L-moments and their estimates lack some robust features specific to TL-moments, the latter representing an alternative robust version of the former, the so-called trimmed L-moments. The main aim of this paper is to apply the two methods to large data sets, comparing their parametric estimation accuracy with that of the maximum likelihood method. In this case, the methods of L-moments and TL-moments are utilized for the construction of income and wage distribution models. Three-parameter lognormal curves represent the basic theoretical probability distribution whose parameters were estimated simultaneously by the three methods of point parameter estimation, their accuracy then being evaluated. Income and wage distributions for the Czech Republic have been examined. A total of 168 nominal income distributions (net annual household income per capita in CZK) for the years 1992, 1996, 2002 (Microcensus survey) and 2004–2007 (EU Statistics on Income and Living Conditions survey) were analyzed, covering both the total income distribution for all Czech households and income distributions broken down by gender, historical land (Bohemia, Moravia), social group, municipality size, age and educational attainment.
In addition, a total of 328 nominal wage distributions (gross monthly wage in CZK) have become the subject of the research; the total wage distribution for all CR employees as well as wage distributions in terms of gender, age, educational attainment and the classification of jobs and economic activities being examined. 2003–2010 data in the form of an interval frequency distribution were drawn from the official website of the Czech Statistical Office. The study is divided into a theoretical part, in which mathematical and statistical aspects are described, and an analytical part, where the results of the three robust parameter estimation methods are presented. For all analyzed income and wage distributions, the model distribution parameters were estimated using the methods of TL-moments, L-moments and maximum likelihood simultaneously. The accuracy of the methods employed was then compared, TL-moments having brought the most accurate, L-moments the second best and the maximum likelihood method the least accurate results in general.
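L-moments are linear combinations of expected order statistics: the first L-moment is the mean, and the second is half the expected absolute difference of two random draws, a robust analogue of scale. A minimal sketch of the first two sample L-moments via probability-weighted moments (the paper's TL-moment machinery trims extreme order statistics on top of this):

```python
def sample_l_moments(data):
    """First two sample L-moments from the order statistics.
    b0 and b1 are the unbiased probability-weighted moments;
    l1 = b0 (the mean), l2 = 2*b1 - b0 (a robust scale measure)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 = mean of ((rank - 1) / (n - 1)) * x_(rank), rank = 1..n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    return b0, 2 * b1 - b0
```

Parameter estimation by the method of L-moments then equates these sample quantities with their closed-form expressions for the candidate distribution (here, the three-parameter lognormal) and solves for the parameters.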


Title of the Paper: Multidimensional Systems Optimization Developed from Perfect Torus Groups

 

Authors: Volodymyr V. Riznyk

Pages: 50-54

Abstract: This paper concerns innovative techniques for improving the quality indices of two- or multidimensional systems by optimizing the arrangement of structural elements in spatially or temporally distributed systems (e.g. vector data coding of signals) with respect to system capabilities, transmission speed and data redundancy. A novel design is proposed, based on remarkable properties of the introduced combinatorial structures, namely the concept of Ideal Vector Rings (IVRs), which can be used for finding optimal solutions to wide classes of technological problems. These design techniques make it possible to configure multidimensional systems with fewer components than at present, while maintaining or improving information reliability, resolving ability and the other significant operating characteristics of the system. In particular, these results are useful for the synthesis of non-uniformly spaced thinned antenna arrays with a low level of side lobes. This work relates to the development of new directions in fundamental and applied research in systems engineering based on the idea of Ideal Vector Rings. An identification of the proposed structures with standard combinatorial configurations such as cyclic difference sets and cyclic groups is given.
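In the one-dimensional case, the related "ideal ring" structures can be read as cyclic sequences whose sums over chains of consecutive elements enumerate every nonzero residue modulo the total sum exactly once — the property that yields minimal redundancy. A small checker for that reading (an interpretive sketch, not the paper's vector construction):

```python
def is_ideal_ring(ring):
    """Check whether the cyclic sequence `ring` is an 'ideal ring':
    the sums of all chains of 1..n-1 consecutive elements cover each
    residue 1..S-1 modulo S exactly once, where S = sum(ring)."""
    n, S = len(ring), sum(ring)
    sums = []
    for start in range(n):
        acc = 0
        for length in range(1, n):
            acc += ring[(start + length - 1) % n]
            sums.append(acc % S)
    return sorted(sums) == list(range(1, S))
```

For example, the ring (1, 2, 6, 4) with S = 13 covers all residues 1..12 once, which is the same structure as a cyclic difference set.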


Title of the Paper: Improvement of Decision Trees Based on the Quality Control of Artificial Instances of Over-Sampling

 

Authors: Hyontai Sug

Pages: 42-49

Abstract: In order to surmount the problem of neglecting minor data instances in comprehensible data mining models such as decision trees or rule learners, an over-sampling technique based on SMOTE was considered for validation. The quality of the artificially generated instances is validated by resorting to different and more reliable data mining algorithms other than C4.5 or RIPPER, the two target comprehensible data mining algorithms to be improved. On the condition that more reliable or accurate data mining algorithms are available for the target data sets, they were used to check the quality of the generated over-sampled instances. The validity of the suggested idea was checked by experiments using two data sets in the medical domain, where the understandability of data mining models is important, and the experiments produced very good results.
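SMOTE generates synthetic minority-class instances by interpolating between a minority sample and one of its k nearest minority neighbours. A minimal sketch of that generation step (the quality-control validation against a second classifier, which is the paper's contribution, is not shown):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority instances (tuples of floats)
    by linear interpolation between a random minority sample and one
    of its k nearest minority neighbours — the basic SMOTE step."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        nbrs = sorted((p for p in minority if p is not x),
                      key=lambda p: dist2(x, p))[:k]
        y = rng.choice(nbrs)
        t = rng.random()   # interpolation fraction in [0, 1)
        out.append(tuple(xi + t * (yi - xi) for xi, yi in zip(x, y)))
    return out
```

Each synthetic point lies on a segment between two real minority points, which is why validating them against a more accurate classifier is a sensible quality filter.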


Title of the Paper: Numerical Solution of Fredholm Integral Equations of the Second Kind by Using 2-Point Explicit Group Successive Over-Relaxation Iterative Method

 

Authors: Mohana Sundaram Muthuvalu, Elayaraja Aruchunan, Jumat Sulaiman, Mohammad Mehdi Rashidi

Pages: 33-41

Abstract: In this paper, we introduce and analyse the performance of the 2-Point Explicit Group Successive Over-Relaxation (2-EGSOR) iterative method for the solution of dense linear systems that arise from Fredholm integral equations of the second kind. The derivation and implementation of the proposed method are described, and we present results of some test examples together with a computational complexity analysis to illustrate the efficiency of the proposed method.
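Discretizing u(x) = f(x) + λ ∫ K(x,t) u(t) dt with a quadrature rule (Nyström method) yields a dense linear system, which is then solved iteratively. The sketch below uses a trapezoidal rule and plain 1-point SOR; the paper's 2-EGSOR updates two points per group, which this simplified variant does not do:

```python
def fredholm2_sor(kernel, f, lam, a, b, n=32, omega=1.1, tol=1e-10, max_iter=500):
    """Nystrom (trapezoidal) discretisation of the second-kind equation
    u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt,
    solved with point SOR (modest over-relaxation factor omega)."""
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2                       # trapezoidal weights
    # dense system: u_i - lam * sum_j w_j K(x_i, x_j) u_j = f(x_i)
    A = [[(1.0 if i == j else 0.0) - lam * w[j] * kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    rhs = [f(x) for x in xs]
    u = [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = rhs[i] - sum(A[i][j] * u[j] for j in range(n) if j != i)
            new = (1 - omega) * u[i] + omega * s / A[i][i]
            delta = max(delta, abs(new - u[i]))
            u[i] = new
        if delta < tol:
            break
    return xs, u
```

With K(x,t) = xt, λ = 1 and f(x) = 2x/3 on [0, 1], the exact solution is u(x) = x, which gives a convenient accuracy check.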


Title of the Paper: Enhanced Coordinated Checkpointing in Distributed System

 

Authors: Bakhta Meroufel, Ghalem Belalem

Pages: 23-32

Abstract: Coordinated checkpointing is a well-known method for achieving fault tolerance in distributed computing systems. This type of checkpointing selects an initiator to manage and ensure the checkpointing process, yet the majority of existing works ignore the role and importance of this initiator. The work presented in this paper is divided into two parts. In the first part, we examine the impact of the initiator choice on different types of coordinated checkpointing and demonstrate its importance in terms of performance; we also propose a simple and effective strategy to select the best initiator in each checkpointing round. In the second part, we focus on soft checkpointing and strengthen the role of the initiator by adding a storage manager that ensures atomic and fast storage of checkpoint files using a smart I/O strategy.
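Selecting "the best initiator each round" amounts to minimizing an expected coordination cost over the candidate nodes. The criterion below (mean message latency inflated by failure risk) is a hypothetical illustration of such a strategy, not the metric the authors propose:

```python
def choose_initiator(nodes):
    """Pick the next round's checkpointing initiator as the node with
    the lowest expected coordination cost. Each node is a dict with
    'id', 'latency' (map of peer id -> seconds) and 'failure_rate';
    the cost formula is a hypothetical example."""
    def cost(node):
        mean_latency = sum(node["latency"].values()) / len(node["latency"])
        return mean_latency * (1 + node["failure_rate"])
    return min(nodes, key=cost)["id"]
```

Re-evaluating the criterion every round lets the initiator role migrate as load and link conditions change.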


Title of the Paper: Random Number Sequences Assessment for Image Encryption

 

Authors: Antonios S. Andreatos, Apostolos P. Leros

Pages: 14-22

Abstract: The aim of this study is first to present a set of tests for assessing the quality of a set of random and pseudorandom number sequences and, second, to examine their suitability for image encryption. Seven different generators are considered: the pseudorandom generators of three programming languages, a chaotic random number generator, as well as three truly random number generators. One of the truly random number generators produces poor-quality results and is used for demonstration purposes. The random number sequences were used to encrypt a test image via the bitwise XOR function. The assessment criteria used are visual tests, entropy measurements, statistical tests, image auto-correlation and cross-correlation calculations, and histogram analysis. Results indicate that all but the poor-quality generators examined provide satisfactory performance.
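The core operations — XOR encryption with a keystream and byte-level entropy measurement — are simple to sketch. With a good generator, the ciphertext entropy should approach 8 bits per byte even for a highly redundant image:

```python
import math
from collections import Counter

def xor_encrypt(data: bytes, keystream: bytes) -> bytes:
    """Bitwise XOR of the data with a keystream of at least equal length.
    XOR is its own inverse, so the same call decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (max 8.0 for uniform bytes)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```

In practice the keystream comes from the generator under test (e.g. `os.urandom(len(image))` for an OS source), and comparing `byte_entropy` of plaintext and ciphertext is one of the assessment criteria the abstract lists.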


Title of the Paper: Tropical Cryptography and Analyses of New Matrix One-Way Function With Two Versions of Protection

 

Authors: Richard P. Megrelishvili

Pages: 9-13

Abstract: This article is an expanded version of a paper published at a EUROPMENT conference (St. Petersburg, 2014). The new results in this article are: tropical cryptography and a matrix one-way function that forms the basis for building a high-speed key exchange algorithm, whose prototype, in a sense, is the Diffie-Hellman algorithm, together with two versions of protection for this matrix one-way function. We can appreciate the importance of tropical cryptography as a new trend in cryptography, all the more so if it proves stable with respect to the algebraic methods of attack that have been researched. With respect to the importance of the matrix one-way function, we repeat that its main advantage is high-speed operation. Tropical cryptography opens a new direction in cryptography. It should also be noted that the stability of the matrix one-way function is supported by the long-standing tradition of the proven Diffie-Hellman and ElGamal algorithms. The extended version of the paper adds fourth and fifth sections: matrices with an internal recursion dependence, and the generation of special classes of n×n matrices.
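Tropical (min-plus) algebra replaces addition with minimum and multiplication with addition, so matrix "multiplication" involves no actual multiplications — the source of the speed advantage. A toy Diffie-Hellman-style exchange using commuting powers of a public tropical matrix illustrates the flavor; this is a generic textbook sketch, not the author's protected scheme:

```python
def trop_mul(A, B):
    """Min-plus (tropical) product of square matrices:
    (A @ B)[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trop_pow(A, e):
    """e-th tropical power of A (e >= 1) by repeated multiplication."""
    P = A
    for _ in range(e - 1):
        P = trop_mul(P, A)
    return P
```

Key-exchange sketch: for a public matrix A, Alice sends A^a, Bob sends A^b, and both compute the shared A^(a+b), since powers of the same matrix commute: A^a ⊗ A^b = A^b ⊗ A^a. The article's protection mechanisms address attacks on exactly this kind of construction.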


Title of the Paper: Calculation of the Optimal Economic Costs by Enzymatic Hydrolysis of Biomaterial Waste with MAPLE

 

Authors: Hana Charvátová, Dagmar Janáčová, Vladimír Vašek, Karel Kolomazník, Rudolf Drga, Ondrej Líška

Pages: 1-8

Abstract: The paper deals with the use of mathematical software in the control and automation of real technological processes of biomaterial treatment. It describes the programming tools of the software MAPLE, presented on the example of calculating the operating costs of processing biomaterial waste into protein hydrolysate. For this purpose, mathematical models describing the studied process were prepared both for the kinetic mechanism and for the diffusion mechanism. On this basis, and using the mass balance, cost functions were formulated for both studied cases. The calculation was subsequently programmed in the Standard Worksheet user interface and in the Maplet user interface. The computed data allow determining the optimal process with the aim of saving energy and raw materials.