International Journal of Computers and Communications

E-ISSN: 2074-1294
Volume 11, 2017

Notice: As of 2014 and for the forthcoming years, the publication frequency/periodicity of NAUN Journals is adapted to the 'continuously updated' model. What this means is that instead of being separated into issues, new papers will be added on a continuous basis, allowing a more regular flow and shorter publication times. The papers will appear in reverse chronological order; therefore, the most recent one will be on top.


 


Volume 11, 2017 


Title of the Paper: Parallel Implementations of S-Vote Electronic Voting Verification and Tallying Processes

 

Authors: Israa A. Saadeh, Gheith A. Abandah

Pages: 106-115

Abstract: Electronic voting systems are being implemented in several countries to provide accuracy, efficiency, and an increased level of security for electoral processes. The Secure National Electronic Voting System (S-Vote) is adopted in this study for its state-of-the-art technologies, privacy, and secure processes. S-Vote is a homomorphic e-voting system that uses a zero-knowledge (ZK) proof protocol to preserve voter privacy. Unfortunately, ZK proofs are complex and time-consuming, which affects the scalability of any homomorphic e-voting system. This study investigates parallel implementations of the S-Vote verification and tallying processes to reduce the time of the vote verification checks, especially ZK proof verification. The vote verification process consists of ZK proof, digital signature, and voter eligibility checks. Parallelism is implemented using multithreaded Java programs, and three parallel implementation schemes are proposed for the vote verification and tallying processes: task, master/slave, and data parallelism. Task parallelism spawns a separate thread to perform each of the verification checks (tasks). The master/slave scheme spawns a thread for each voting kiosk package (client) that performs all the checks. The data parallelism scheme spawns a number of threads equal to the number of physical cores of the tallying machine; each thread performs all the verification checks, and the voting kiosk packages are dynamically distributed among the threads. The obtained results show that the data parallelism scheme is the best: it has the highest relative speedup and efficiency with the lowest processing cost. It can verify and tally 64,000 ballots in about 44 minutes, with a 27.5 relative speedup and 86% efficiency, using 32 threads on a 32-core tallying machine. The data parallelism scheme reduces ZK proof time, achieves a linear speedup with respect to the number of cores, and can extend the use of the S-Vote system to large electoral processes. For example, using a tallying machine with 128 cores can reduce the verification and tallying time for a country as big as Jordan from 25.4 days to 5.7 hours.
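
A minimal sketch of the data parallelism scheme described above, using a fixed Java thread pool sized to the machine's cores; the KioskPackage type and the three check methods are hypothetical placeholders for S-Vote internals that the abstract does not describe:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class DataParallelTally {

        // Hypothetical stand-in for one voting kiosk package; the real
        // S-Vote data structures are not given in the abstract.
        record KioskPackage(byte[] ballot, byte[] signature, byte[] zkProof, String voterId) {}

        // Placeholder checks standing in for the three S-Vote verifications.
        static boolean verifyZkProof(KioskPackage p)   { return p.zkProof() != null; }
        static boolean verifySignature(KioskPackage p) { return p.signature() != null; }
        static boolean voterIsEligible(KioskPackage p) { return p.voterId() != null; }

        public static long verifyAndTally(List<KioskPackage> packages) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            AtomicLong validBallots = new AtomicLong();

            // Each task runs the whole verification pipeline on one package;
            // the pool's shared work queue distributes packages dynamically
            // across the threads, as in the data parallelism scheme.
            for (KioskPackage p : packages) {
                pool.submit(() -> {
                    if (verifyZkProof(p) && verifySignature(p) && voterIsEligible(p)) {
                        validBallots.incrementAndGet(); // homomorphic accumulation would go here
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            return validBallots.get();
        }
    }

The shared work queue is what provides the dynamic distribution of kiosk packages among threads that distinguishes this scheme from the task and master/slave variants.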


Title of the Paper: Combined Enhanced LWSE Algorithm with OTP Algorithm for Secure Database

 

Authors: Kaladhar Ganta, Bing Zhou

Pages: 100-105

Abstract: Data encryption is the process in which data is encoded using encryption algorithms so that only authorized entities are able to view and understand it. Various encryption algorithms, such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), Triple DES, and the Light Weight Symmetric Encryption (LWSE) algorithm, are used for data encryption, each with its own merits and demerits. The proposed methodology addresses two major demerits of the LWSE algorithm: the restrictions on data size and on the characters allowed in the dataset. Unlike the LWSE algorithm, the proposed methodology works for variable data sizes and can encrypt all types of special characters. Instead of using the same key for encryption as in LWSE, the proposed methodology uses different keys generated by a random key generator. This enhanced LWSE algorithm is combined with the OTP algorithm to add another layer of security.
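
The abstract does not expand OTP or specify how the layers are combined; purely as a hedged illustration, the sketch below shows a one-time-pad-style XOR layer with per-message keys from a cryptographically secure random generator, which handles arbitrary bytes (and hence any special character) and variable-length data:

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    public class OtpLayer {

        private static final SecureRandom RNG = new SecureRandom();

        // Generates a fresh random key as long as the message, in the
        // spirit of the abstract's random key generator.
        static byte[] newKey(int length) {
            byte[] key = new byte[length];
            RNG.nextBytes(key);
            return key;
        }

        // XOR the message with the key; applying the same operation again
        // with the same key decrypts. This works for any byte value, so
        // special characters and variable-length data are handled uniformly.
        static byte[] xor(byte[] data, byte[] key) {
            byte[] out = new byte[data.length];
            for (int i = 0; i < data.length; i++) {
                out[i] = (byte) (data[i] ^ key[i]);
            }
            return out;
        }

        public static void main(String[] args) {
            byte[] plaintext = "café & 100% #special!".getBytes(StandardCharsets.UTF_8);
            byte[] key = newKey(plaintext.length);
            byte[] ciphertext = xor(plaintext, key);
            byte[] recovered = xor(ciphertext, key);
            System.out.println(new String(recovered, StandardCharsets.UTF_8));
        }
    }
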


Title of the Paper: About Informational Utility of Investment Project Expertise

 

Authors: Danko E. V., Oskorbin N. M., Ternovoy O. S.

Pages: 96-99

Abstract: In this article, the effectiveness of an investment project is evaluated by its net present value (NPV); this index is treated as a random variable that an investor can estimate as an interval. The main difficulties in the decision-making process arise when this interval contains zero. In order to reduce the uncertainty, an investor can decide to commission an expertise. The article therefore explains the main features of evaluating the informational utility of an investment expertise.
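
For reference, the net present value that the article treats as an interval-valued estimate is the standard discounted cash-flow sum; the interval endpoints below are illustrative notation, not the authors':

    \mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t},
    \qquad \mathrm{NPV} \in [\,\underline{N},\, \overline{N}\,]

The decision problem is difficult precisely when \underline{N} < 0 < \overline{N}, which is the case the expertise is meant to resolve.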


Title of the Paper: Contours of Combinatorial Super-Pixel Grouping for Object Extraction

 

Authors: Hong Cheng, Yuhua Luo, Yunjiang Yu

Pages: 90-95

Abstract: We propose a unified contour grouping approach for object extraction via superpixels that have strong contour support in the image. For this purpose, we first develop a fast sub-segment algorithm. We then propose a new cost function that effectively promotes spatially coherent sets of superpixels aligned with object boundaries. Finally, we use a grouping strategy that combines our sub-segments into highly accurate superpixels by efficiently exploring their gap space. We evaluate the proposed method against two leading contour closure approaches from the literature on the BSDS500 dataset. The results demonstrate that the proposed object extraction method achieves both good accuracy and time efficiency compared with other state-of-the-art methods.


Title of the Paper: Analyzing the Students’ Preferences in an Active-Learning Experience

 

Authors: Sandra Baldassarri, Teresa Coma, Antonio Aguelo, Cecilia V. Sanz, Pedro Álvarez

Pages: 82-89

Abstract: In order to improve students' motivation and their learning process, a new student-centered methodology was introduced in a Computer Science Engineering subject. It proposes a set of activities based on a combination of different autonomous and active learning techniques. From the methodological perspective, the aims are to provide students with continuous feedback about the results they obtain and to take the students' learning styles into account when selecting the activities. To achieve these goals, we have combined questionnaires for identifying learning styles, conceptual maps for representing the knowledge acquired during the fulfilment of the activities, and semantic technologies for the automatic assessment of this learning. Besides, a Web- and service-oriented tool, called MeRoDes, has been developed to support the proposed activities and to implement the evaluation system based on conceptual maps. The experience carried out showed that the desired improvements were achieved.


Title of the Paper: Towards Building of Cable TV Content-Sensitive Adaptive Monitoring and Management Systems

 

Authors: Vasiliy Yu. Osipov, Natalia A. Zhukova, Alexander I. Vodyaho, Andrey Kalmatsky, Nikolay G. Mustafin

Pages: 75-81

Abstract: The problem of building content-sensitive monitoring and management systems for cable TV is discussed. Alternative approaches to solving this problem are analyzed. An architecture for a prospective content-sensitive monitoring and management system with extended functionality is suggested. Requirements for automatic script generation are formulated and possible solutions are suggested.


Title of the Paper: Improving Error-Diffused HEVC Intra Prediction

 

Authors: Chien-Hung Chen, Yinyi Lin

Pages: 71-74

Abstract: In our previous work, an error-diffused intra prediction algorithm for HEVC was suggested to improve coding performance. Tested on HM11.0rec1, that algorithm achieves an average 0.5% BDBR reduction with a 21% increase in total encoding time compared to HEVC intra prediction. In this paper we modify the error-diffused algorithm to further improve its computational efficiency in three respects. In the proposed algorithm, a smaller error-diffusion mask is used, and a single direct gradient computation and error diffusion is employed, instead of computing from both the vertical and horizontal directions, to reduce computation. In addition, the error diffusion is performed in the RMD process instead of the RDO process. The experiments are evaluated on HM15.0, and the results reveal that the proposed algorithm achieves an average 0.6% BDBR reduction with only an average 5% increase in encoding time compared to HEVC intra prediction, which is much lower than the original error diffusion algorithm.
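
The paper's specific mask is not reproduced in the abstract; as a reminder of how error diffusion works in general, here is a minimal sketch using the classic Floyd-Steinberg weights (an assumption for illustration, not the authors' smaller mask) to push each pixel's quantization error onto not-yet-processed neighbors:

    public class ErrorDiffusion {
        // Quantizes a grayscale image in place, diffusing each pixel's
        // quantization error to its right/lower neighbors with the classic
        // Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16. A smaller mask,
        // as in the paper, would diffuse to fewer neighbors.
        static void diffuse(double[][] img, double step) {
            int h = img.length, w = img[0].length;
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    double old = img[y][x];
                    double quantized = Math.round(old / step) * step;
                    double err = old - quantized;
                    img[y][x] = quantized;
                    if (x + 1 < w)              img[y][x + 1]     += err * 7 / 16;
                    if (y + 1 < h && x > 0)     img[y + 1][x - 1] += err * 3 / 16;
                    if (y + 1 < h)              img[y + 1][x]     += err * 5 / 16;
                    if (y + 1 < h && x + 1 < w) img[y + 1][x + 1] += err * 1 / 16;
                }
            }
        }
    }
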


Title of the Paper: Person Identification Using Fusion of Fingerprint and Hand Geometry

 

Authors: Sampada Abhijit Dhole, Varsha Hemant Patil

Pages: 64-70

Abstract: A biometric system is essentially a pattern recognition system that makes use of biometric traits to recognize individuals. Authentication systems built on only one biometric modality may not fulfil the requirements of demanding applications in terms of properties such as performance, acceptability, and distinctiveness. Most unimodal biometric systems have problems such as noise in the collected data, intra-class variations, inter-class similarities, and non-universality. Some of these limitations can be overcome by using multiple sources of information to establish identity; such systems are known as multimodal biometric systems. Multimodal biometric systems are gaining popularity because they identify a person more accurately and reliably. Early integration strategies are expected to result in better performance than late integration strategies. A novel approach that combines the fingerprint and hand geometry biometrics through feature-level fusion is presented. Feature-level integration of two different uncorrelated biometric traits provides a sustainable improvement in accuracy over the unimodal counterparts.
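
The abstract does not detail the fusion rule; a common feature-level scheme, shown here as a hedged sketch only, normalizes each modality's feature vector and concatenates them into one joint template that a matcher then compares:

    public class FeatureLevelFusion {
        // Min-max normalizes a feature vector to [0,1] so that features
        // from different modalities are comparable before fusion.
        static double[] normalize(double[] v) {
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
            for (double x : v) { min = Math.min(min, x); max = Math.max(max, x); }
            double range = (max - min) == 0 ? 1 : (max - min);
            double[] out = new double[v.length];
            for (int i = 0; i < v.length; i++) out[i] = (v[i] - min) / range;
            return out;
        }

        // Feature-level fusion: normalize each modality's feature vector
        // and concatenate them into a single joint template.
        static double[] fuse(double[] fingerprintFeatures, double[] handGeometryFeatures) {
            double[] a = normalize(fingerprintFeatures);
            double[] b = normalize(handGeometryFeatures);
            double[] fused = new double[a.length + b.length];
            System.arraycopy(a, 0, fused, 0, a.length);
            System.arraycopy(b, 0, fused, a.length, b.length);
            return fused;
        }
    }
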


Title of the Paper: Investigating for Road Roughness Using Smartphone Sensors

 

Authors: Seyed Yousef Sadjadi

Pages: 56-63

Abstract: Smartphones are equipped with sensors such as accelerometers, gyroscopes, and GPS in one cost-effective device with an acceptable level of accuracy, and research has been carried out on determining the roughness of roads via smartphones. In order to justify the validity of using smartphones as a tool for measuring road roughness, they must be compared with subjective methods such as user opinion. The aim of this paper is to calculate the roughness of the road via a smartphone using its embedded sensors. Additionally, this paper investigates the correlation between road roughness and user opinion of ride quality. Moreover, the applicability of using smartphones to assess road surface distresses is examined. Furthermore, to validate the smartphone sensor outputs objectively, a Road Surface Profiler is applied. Finally, a roughness model is developed that demonstrates an acceptable level of correlation between the road roughness measured by smartphones and the ride quality rated by users.
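
As an illustration of the kind of processing involved (the paper's calibrated roughness model is not given in the abstract), a simple hedged sketch computes a root-mean-square roughness proxy from vertical accelerometer samples, with the static gravity component removed:

    public class RoughnessProxy {
        // Computes an RMS roughness proxy from vertical acceleration
        // samples (m/s^2). Subtracting the mean removes the static
        // gravity component; what remains reflects vibration induced
        // by the road surface. This is an illustrative metric, not the
        // paper's roughness model.
        static double rmsRoughness(double[] verticalAccel) {
            double mean = 0;
            for (double a : verticalAccel) mean += a;
            mean /= verticalAccel.length;

            double sumSq = 0;
            for (double a : verticalAccel) {
                double d = a - mean;
                sumSq += d * d;
            }
            return Math.sqrt(sumSq / verticalAccel.length);
        }
    }
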


Title of the Paper: MUSIC Extracted Direction of Arrival Estimates Using Spatial Smoothing Improvement for Indoor Localization Using RFID

 

Authors: Zier Adel, Fergani Lamya

Pages: 51-55

Abstract: Due to its low cost and lack of a line-of-sight requirement, RFID has emerged in the field of indoor localization. Many algorithms have been developed; the classical ones use received signal strength information as a metric, but it offers a low degree of accuracy. We therefore propose in this paper another algorithm, the MUSIC algorithm, which estimates the direction of arrival to determine the coordinates of the tag; we then apply spatial smoothing to eliminate the problem of interference. Through simulations in Matlab, we show the improvements, most notably in accuracy, that this technique brings.
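
For reference, the two standard formulas behind this approach (standard MUSIC notation, assumed here rather than quoted from the paper): the MUSIC pseudospectrum peaks at the arrival angles, and forward spatial smoothing averages the covariance matrices of L overlapping subarrays to decorrelate coherent (multipath) signals,

    P_{\mathrm{MUSIC}}(\theta) =
        \frac{1}{\mathbf{a}^{H}(\theta)\,\mathbf{E}_n \mathbf{E}_n^{H}\,\mathbf{a}(\theta)},
    \qquad
    \bar{\mathbf{R}} = \frac{1}{L}\sum_{l=1}^{L} \mathbf{R}_l,

where \mathbf{a}(\theta) is the array steering vector, \mathbf{E}_n spans the noise subspace of the (smoothed) covariance matrix, and \mathbf{R}_l is the covariance matrix of the l-th subarray.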


Title of the Paper: Performance Comparison of FPGAs and GPUs: Solving Sparse Matrices Case-Study

 

Authors: Khaled Salah, Mohamed AbdelSalam

Pages: 45-50

Abstract: In this paper, a performance comparison of FPGAs and GPUs is introduced. Numerical methods for solving sparse matrices are evaluated as the main case study. The experimental results show that GPUs exhibit superior performance over FPGA/HW emulation in terms of run time for small numbers of equations. For large numbers of equations, on the order of ten million, the FPGA/HW emulation outperforms GPUs, as the parallelism rate of the emulation becomes higher in that case.
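
The abstract does not name the solver used; as context for the case study, here is a hedged sketch of the kernel at the heart of most iterative sparse solvers, a compressed sparse row (CSR) matrix-vector product:

    public class CsrSpMV {
        // y = A * x for a matrix in compressed sparse row (CSR) form:
        // values[k] is the k-th nonzero, colIdx[k] its column, and
        // rowPtr[i]..rowPtr[i+1] delimits row i's nonzeros. This memory
        // layout and its irregular access pattern are what make sparse
        // solvers an interesting FPGA-versus-GPU benchmark.
        static double[] multiply(double[] values, int[] colIdx, int[] rowPtr, double[] x) {
            int n = rowPtr.length - 1;
            double[] y = new double[n];
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                    sum += values[k] * x[colIdx[k]];
                }
                y[i] = sum;
            }
            return y;
        }
    }
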


Title of the Paper: Ham-Spam Filtering Using Kernel PCA

 

Authors: Issam Dagher, Rima Antoun

Pages: 38-44

Abstract: Electronic mail has become one of the most important means of communication, and email filtering is a very important task. The objective of this paper is to study the Kernel Principal Component Analysis (KPCA) classifier implemented for the email filtering process (ham vs. spam emails). Different experiments were performed using a public corpus extracted from the University of California, Irvine Machine Learning Repository, with different training and test sets. A comparison with PCA, a Support Vector Machine, and a Bayes detector was carried out to demonstrate its superior behavior.
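
Kernel PCA first builds a Gram matrix of pairwise kernel evaluations and centers it in feature space before the eigendecomposition; a hedged sketch of those two steps with an RBF kernel (the paper's kernel choice is not stated in the abstract):

    public class KernelPcaPrep {
        // RBF (Gaussian) kernel between two feature vectors.
        static double rbf(double[] a, double[] b, double gamma) {
            double d2 = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                d2 += d * d;
            }
            return Math.exp(-gamma * d2);
        }

        // Builds the n x n Gram matrix K and centers it in feature space:
        // K' = K - 1K - K1 + 1K1, where 1 is the n x n matrix of 1/n.
        // Eigenvectors of K' give the kernel principal components.
        static double[][] centeredGram(double[][] X, double gamma) {
            int n = X.length;
            double[][] K = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    K[i][j] = rbf(X[i], X[j], gamma);

            double[] rowMean = new double[n];
            double total = 0;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) rowMean[i] += K[i][j];
                rowMean[i] /= n;
                total += rowMean[i];
            }
            double grandMean = total / n;

            double[][] Kc = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    Kc[i][j] = K[i][j] - rowMean[i] - rowMean[j] + grandMean;
            return Kc;
        }
    }
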


Title of the Paper: Bilateral Filter Based Image Denoising

 

Authors: Md. Shaiful Islam Babu, Ping Ping

Pages: 34-37

Abstract: The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing across edges, and it has been shown to be an effective image denoising technique. It can also be applied to the reduction of blocking artifacts. A major issue in applying the bilateral filter is the selection of the filter parameters, which can change the results significantly; another research interest around the bilateral filter is improving its computation speed. This research makes several contributions. The first is an empirical study of optimal bilateral filter parameter selection in image denoising. We propose an extension of the bilateral filter, a multi-resolution bilateral filter, in which the method is applied to the low-frequency sub-bands of a signal decomposed using a wavelet filter bank. The multi-resolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. The second contribution is a spatially adaptive method that reduces compression artifacts while avoiding over-smoothing of texture regions and eliminating blocking and ringing artifacts. Texture regions and block-boundary discontinuities are first detected; these are then used to control and adapt the spatial and intensity parameters of the bilateral filter. The experimental results show that the adaptive method improves the quality of the restored images significantly better than the standard bilateral filter. The third contribution is a fast bilateral filter, in which a combination of multiple windows is used to approximate the Gaussian filter more accurately. The bilateral filter is a weighted convolution filter in which each pixel's weight depends on its distance from the center both in space and in value. Its edge-preserving properties make it suitable for a number of applications, including detail enhancement, noise removal, and tone mapping.
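
For reference, a minimal sketch of the standard brute-force bilateral filter that the contributions above build on; sigmaS and sigmaR are the spatial and intensity (range) parameters whose selection the paper studies:

    public class BilateralFilter {
        // Standard (brute-force) bilateral filter for a grayscale image.
        // Each output pixel is a weighted average of its neighbors; the
        // weight is the product of a spatial Gaussian (distance in the
        // image plane, sigmaS) and a range Gaussian (difference in
        // intensity, sigmaR), so averaging stops at strong edges.
        static double[][] filter(double[][] img, double sigmaS, double sigmaR) {
            int h = img.length, w = img[0].length;
            int radius = (int) Math.ceil(2 * sigmaS);
            double[][] out = new double[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    double sum = 0, norm = 0;
                    for (int dy = -radius; dy <= radius; dy++) {
                        for (int dx = -radius; dx <= radius; dx++) {
                            int ny = y + dy, nx = x + dx;
                            if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                            double spatial = (dx * dx + dy * dy) / (2 * sigmaS * sigmaS);
                            double diff = img[ny][nx] - img[y][x];
                            double range = (diff * diff) / (2 * sigmaR * sigmaR);
                            double wgt = Math.exp(-spatial - range);
                            sum += wgt * img[ny][nx];
                            norm += wgt;
                        }
                    }
                    out[y][x] = sum / norm;
                }
            }
            return out;
        }
    }
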


Title of the Paper: Joint Precoder and Decoder Design for SU-MIMO System with Individual Transmit Power Constraint and Improper Constellation

 

Authors: Raja Muthalagu

Pages: 26-33

Abstract: This paper considers the issue of designing a jointly optimal precoder and decoder for a single-user multiple-input multiple-output (SU-MIMO) system. Most previous work on joint precoder and decoder design is based on the total transmit power constraint (TTPC) with proper modulation techniques. In practice, however, an individual transmit power constraint (ITPC) is more realistic, as the power at each transmit antenna is restricted independently by the linearity of its power amplifier. In this paper, a minimum total mean squared error (TMSE) design is formulated as a nonconvex optimization problem under equal power allocation (EPA) and the power constraint that jointly meets both EPA and TTPC (i.e., ITPC). The closed-form optimal linear precoder and decoder for SU-MIMO systems with an improper modulation are determined by solving this nonconvex optimization problem. Both perfect and imperfect channel state information (CSI) at the transmitter and receiver are considered. The simulation results show the performance improvement of the proposed work over conventional work in terms of bit error rate (BER).
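
In the standard linear-transceiver notation (assumed here; the paper's exact formulation may differ), with precoder \mathbf{F}, decoder \mathbf{G}, channel \mathbf{H}, unit-variance symbols \mathbf{s}, and noise \mathbf{n}, a TMSE design with per-antenna power limits takes the form

    \min_{\mathbf{F},\,\mathbf{G}} \;
        \mathrm{E}\!\left[\bigl\lVert \mathbf{s} - \mathbf{G}(\mathbf{H}\mathbf{F}\mathbf{s} + \mathbf{n}) \bigr\rVert^{2}\right]
    \quad \text{s.t.} \quad
    \left[\mathbf{F}\mathbf{F}^{H}\right]_{kk} \le P_k, \; k = 1,\dots,N_t,

where the per-antenna constraints [\mathbf{F}\mathbf{F}^{H}]_{kk} \le P_k express the ITPC, in contrast to the single trace constraint \mathrm{tr}(\mathbf{F}\mathbf{F}^{H}) \le P of the TTPC.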


Title of the Paper: About One Approach to Multilevel Behavioral Program Synthesis for Television Devices

 

Authors: Vasiliy Yu. Osipov, Natalia A. Zhukova, Alexander I. Vodyaho

Pages: 17-25

Abstract: An approach to the multilevel synthesis of programs is suggested. A mathematical model of a reconfigurable program in the form of relative finite state operational automata is introduced. On the basis of this model, a method of multilevel automatic synthesis is developed. The suggested approach can be used to generate behavioral programs for set-top boxes in the cable television domain.


Title of the Paper: Location and Tracking a Three Dimensional Target with Distributed Sensor Network Using TDOA and FDOA Measurements

 

Authors: Yee Ming Chen, Chi-Li Tsai, Ren-Wei Fang

Pages: 11-16

Abstract: This paper considers recursive tracking of a mobile target using a sequence of time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurement pairs obtained by a distributed sensor network in a three-dimensional setting. As conventional target tracking using TDOA measurements alone is not accurate enough to estimate the target location, we use the TDOA and FDOA measurements together to estimate the location and velocity of the target at discrete times. Although the Kalman filter shows remarkable performance in computation and location estimation, the estimation error can be large when the a priori noise covariances are assigned improper values. We propose an adaptive extended Kalman filter (AEKF) that updates the noise covariance at each TDOA/FDOA measurement and estimation step. While many methods derive the estimates of position and velocity with iterative numerical techniques, the proposed AEKF is a good alternative for updating the noise covariance under conditions of measurement error. The simulation results show that the algorithm efficiently reduces the position error and greatly improves the accuracy of target tracking, demonstrating that the AEKF deals successfully with the nonlinear nature of the mobile target tracking problem.
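
For reference, with target position \mathbf{p} and velocity \dot{\mathbf{p}}, sensor positions \mathbf{s}_i (velocities \dot{\mathbf{s}}_i), propagation speed c, and carrier frequency f_0, the standard TDOA and FDOA measurement pair for sensors i and j is (standard notation, up to sign convention, assumed rather than quoted from the paper):

    \Delta t_{ij} = \frac{1}{c}\left(\lVert \mathbf{p}-\mathbf{s}_i \rVert - \lVert \mathbf{p}-\mathbf{s}_j \rVert\right),
    \qquad
    \Delta f_{ij} = \frac{f_0}{c}\left(
        \frac{(\mathbf{p}-\mathbf{s}_i)^{\top}(\dot{\mathbf{p}}-\dot{\mathbf{s}}_i)}{\lVert \mathbf{p}-\mathbf{s}_i \rVert}
        -
        \frac{(\mathbf{p}-\mathbf{s}_j)^{\top}(\dot{\mathbf{p}}-\dot{\mathbf{s}}_j)}{\lVert \mathbf{p}-\mathbf{s}_j \rVert}
    \right)

The nonlinearity of both measurements in \mathbf{p} is what motivates the extended (and here adaptive) Kalman filter.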


Title of the Paper: Effective Crypto-Compression Scheme for Medical Images

 

Authors: Karim Abdmouleh, Salim Bouhlel

Pages: 6-10

Abstract: The use of telecommunications and information technologies in the medical sector has evolved breathtakingly in recent years. This involves the development of applications bound to telemedicine. Given the importance of this discipline in improving the quality of care, reducing treatment costs, and universalizing medical practices and knowledge, the optimization of medical applications remains a necessity. In this sense, we propose in this work an efficient scheme for the transmission and storage of medical images. This scheme is applied to the online medical record sector, currently one of the sectors with the most potential in telemedicine. A new approach to integrating partial encryption into compression algorithms based on the run-length encoding (RLE) technique is presented and developed.
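
The abstract does not specify which part of the RLE stream is encrypted; purely as an illustrative sketch under that assumption, the run values (pixel intensities) below are XOR-encrypted with a keystream while the run lengths stay in the clear, so the stream keeps its compressed structure but its content becomes unreadable. The keystream source is a placeholder, not a recommendation:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class RlePartialEncryption {
        // One RLE run: `length` repetitions of pixel `value`.
        record Run(int length, byte value) {}

        // Plain run-length encoding of a byte stream (e.g., one image row).
        static List<Run> encode(byte[] data) {
            List<Run> runs = new ArrayList<>();
            int i = 0;
            while (i < data.length) {
                int j = i;
                while (j < data.length && data[j] == data[i]) j++;
                runs.add(new Run(j - i, data[i]));
                i = j;
            }
            return runs;
        }

        // Partial encryption: only the run *values* are XORed with a
        // keystream; the run lengths stay in the clear. Running this
        // again with the same key decrypts. A real system would use a
        // proper stream cipher, not java.util.Random.
        static List<Run> encryptValues(List<Run> runs, long key) {
            Random keystream = new Random(key); // placeholder keystream
            List<Run> out = new ArrayList<>();
            for (Run r : runs) {
                byte mask = (byte) keystream.nextInt(256);
                out.add(new Run(r.length(), (byte) (r.value() ^ mask)));
            }
            return out;
        }
    }
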


Title of the Paper: SQL Injection Principle Against BB84 Protocol

 

Authors: H. Amellal, A. Meslouhi, Y. Hassouni, A. El Allati

Pages: 1-5

Abstract: In order to study and analyze the security of quantum communications, we propose in this work a new quantum attack strategy alias ”Malware Photon Injection Attack”(MPIA). In this attack we based on the philosophy of the classical attack ”SQL Injection” and the physical properties of quantum entanglement. The effectiveness of ”MPIA” is proved by the analyze of mutual information quantity variation between emitter-receiver and emitter-Eavesdropper.