Felipe Takeshi Ishizu, Jeferson Tadeu de Lima and João Antonio Aparecido Cardoso
Federal Institute of Education, Science and Technology of São Paulo - IFSP Bragança Paulista, São Paulo, Brazil.
The present work aims to show how a blockchain-based two-factor authentication (2FA) solution on a page developed in WordPress can contribute to information security with regard to user authentication. The research method is exploratory, since the analysis is based on the theoretical reference material available on the subject, and a case study was carried out on the implementation of the multi-factor authentication plugin Hydro Raindrop MFA, which uses blockchain technology offered by the Hydrogen Technology Corporation and the Ethereum platform. We thus identify and conceptualize some of the technologies used, pointing out their contribution to information security. The main results showed that the use of decentralized technology such as blockchain contributes considerably to user security in authentication.
Authentication, 2FA, Blockchain, Security, Hydro Raindrop.
Liang Xinmei and Luqin
Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China.
The traditional service selection method compares the QoS attributes of candidate services to select those with better QoS attributes, which is very time consuming. This paper uses the database query technique Skyline to select services and extract the SP (Skyline point) services among Web services. The Skyline algorithm is further improved: the proposed HNBS algorithm divides the entire service set into areas and can effectively filter out dominance checks between areas that have no dominance relationship, which saves memory space and greatly improves execution efficiency. Finally, the accuracy and efficiency of the HNBS algorithm are verified on a simulated data set and a real data set (QWS).
Skyline, Regional division, Skyline point, HNBS, QoS
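The Pareto-dominance test at the heart of Skyline service selection can be sketched as follows; this is an illustrative naive O(n²) baseline, not the paper's improved HNBS algorithm, and the function names and toy QoS vectors are our own:

```python
def dominates(a, b):
    """True if QoS vector a dominates b: no worse in every attribute and
    strictly better in at least one (smaller is better, e.g. latency, cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Naive skyline: keep every service not dominated by another one."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

# Toy QoS vectors (response time, cost); (4, 4) is dominated by (3, 3).
services = [(1, 9), (3, 3), (9, 1), (4, 4), (2, 8)]
sp = skyline(services)
```

Region-based variants such as HNBS avoid most of these pairwise checks by partitioning the service set and skipping comparisons between areas that cannot dominate each other.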
Shafiq Rehman, M. Ceglia, Volker Gruhn, S. Siddique
University of Duisburg-Essen, Institute of Software Technology, Germany
Cyber-Physical Systems (CPS) and the Internet of Things (IoT) are rising in importance for the modern world, and security is a significant requirement in their development process. This paper presents an overview of how these systems generally work and why securing them at three different layers is more important than for traditional software. Since human beings directly interact with the physical devices, it is important to guarantee that they are secure at any point in time. If an intruder gains control over an autonomous car's driving system, the consequences could be life-threatening to many people. Cyber-security tries to prevent this worst-case scenario but struggles with two major problems. First, CPS/IoT are typically a combination of many separate systems. These systems are often legacy software, which is sometimes no longer up to date and can contain many security issues that are hard to patch since the original developers are not part of the new project. Second, classic security measures aim at the application and network layers, but CPS/IoT add a third, physical layer, which also needs to be protected. Sensors and actuators are physical parts of the real environment and can be damaged or destroyed by an intruder or a natural disaster if not secured properly. Since security in CPS and IoT has been dramatically neglected in past years, this paper aims at understanding it. We therefore propose an architecture in which we analyze the security threats for CPS/IoT, from which the security requirements for CPS/IoT can easily be determined.
cyber-physical system (CPS); internet-of-things (IoT); security; threat; vulnerability.
K. K. Saini1 and Ms Mehak Saini2
1IIMT College of Engineering, Greater Noida, UP and 2Lovely Professional University, Jalandhar, Punjab
Image segmentation is a fundamental step in modern computational vision systems; its goal is to produce a simpler and more meaningful representation of the image, making it easier to analyze. Image segmentation is a subcategory of digital image processing which, basically, divides a given image into two parts: the object(s) of interest and the background. It is typically used to locate objects and boundaries in images, and its applicability extends to other methods such as classification, feature extraction and pattern recognition. Most methods are based on histogram analysis, edge detection and region growing. More recently, other approaches have been presented, such as segmentation by graph partitioning and by genetic algorithms and genetic programming. This paper presents a review of this area, starting with a taxonomy of the methods followed by a discussion of the most relevant ones.
Image segmentation, Histogram analysis, Edge detectors
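Histogram-based thresholding, the first family of methods surveyed above, can be illustrated with a pure-Python version of Otsu's method; the toy pixel list is our own and this is a sketch, not code from the paper:

```python
def otsu_threshold(pixels):
    """Pick the threshold maximizing between-class variance over an
    8-bit grayscale histogram (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(256):
        w_b += hist[t]                    # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                 # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background around 20, bright object around 200.
pixels = [18, 20, 22, 25, 19, 21, 198, 200, 202, 205, 199]
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]   # True = object, False = background
```

The resulting binary mask is exactly the object/background split the survey describes as the basic goal of segmentation.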
Zayar Aung1 and Mihailov Ilya Sergeevich2
1National Research University “MPEI”, Krasnokazarmennaya St., Moscow, Russia
2National Research University “MPEI”, Russia
This paper examines the possibility of implementing a prototype of a case-based reasoning (CBR) intelligent (expert) system that finds a result based on accumulated experience (a base of precedents) with the use of fuzzy sets. The ability of the prototype Fuzzy CBR system to deal with fuzzy information in the description of precedents and during their retrieval provides a wider range of applications for CBR systems and a flexible mechanism for finding solutions in the presence of fuzzy information in user queries and expert knowledge, which is especially important at the initial stage of operation of the system, when only a small number of precedents has been accumulated.
Intelligent (expert) system, CBR system, Fuzzy CBR system, fuzzy information
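A minimal sketch of fuzzy precedent retrieval, assuming triangular membership functions centred on the query attributes; the attribute names, tolerances and toy cases below are invented for illustration and are not the prototype described above:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def similarity(query, case, memberships):
    """Average membership of each case attribute in the fuzzy set
    centred on the corresponding query attribute."""
    scores = [m(case[k], query[k]) for k, m in memberships.items()]
    return sum(scores) / len(scores)

# Fuzzy sets centred on the query value; widths are assumed tolerances.
memberships = {
    "temp":     lambda v, q: tri(v, q - 10, q, q + 10),
    "pressure": lambda v, q: tri(v, q - 5,  q, q + 5),
}

cases = [
    {"temp": 70, "pressure": 30, "solution": "reduce load"},
    {"temp": 90, "pressure": 33, "solution": "open valve"},
]
query = {"temp": 88, "pressure": 32}
best = max(cases, key=lambda c: similarity(query, c, memberships))
```

Retrieval returns the stored precedent with the highest fuzzy similarity, so a solution can be proposed even when the query only approximately matches any accumulated case.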
Hoon Ko1, Chang Choi2, Htet Myet Lynn3 and Junho Choi4
1IT Research Institute, Chosun University, Gwangju, South Korea
2IT Research Institute, Chosun University, Gwangju, South Korea
3Department of Computer Engineering, Chosun University, Gwangju, South Korea
4Division of Undeclared Majors, Chosun University, Gwangju, South Korea
The number of services and smart devices that require context is increasing, and there is a clear need for new security policies that provide security in a way that is convenient and flexible for the user. In particular, there is an urgent need for new security policies for IT-vulnerable groups such as children, the elderly, and the disabled, who experience many difficulties using current security technology. For a convenient and flexible security policy, it is necessary to collect and analyze data such as user service-use patterns and locations, which can be used to distinguish attack contexts and to define a security-service provision technology suitable for the user. This study has designed a user context-aware network security architecture which reflects the aforementioned requirements, collected user context-aware data, studied a user context analysis platform, and studied and analyzed context-aware security applications.
Context-aware Security, Network Security Policy, Malicious Code Detection
Putu Artawan1,2, Yono Hadi Pramono1, Mashuri1 and Josaphat T. Sri Sumantyo3
1 Physics Department, Faculty of Natural Sciences, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, Indonesia
2Physics Department, Faculty of Mathematics and Natural Sciences, Ganesha University of Education, Singaraja, Bali, Indonesia
3 Josaphat Microwave Remote Sensing Laboratory, Center for Environmental Remote Sensing (CEReS) Chiba University, Japan
This paper presents the design of variant arrays in a curved microstripline antenna for radar communication. The antenna geometry comprises three variants with 2×2, 2×4 and 4×4 matrix dimensions. The arrays operate in the C-Band (4 GHz – 8 GHz) and X-Band (8 GHz – 12 GHz) frequencies with a VSWR of 1.82, return loss of -18.72 dB, reflection coefficient of 0.29 and gain of 5.8 dB for the 2×2 array; VSWR of 1.64, return loss of -16.17 dB, reflection coefficient of 0.24 and gain of 5.4 dB for the 2×4 array; and VSWR of 1.04, return loss of -37.70 dB, reflection coefficient of 0.19 and gain of 7.6 dB for the 4×4 array. All of the array variants are fed using a direct feeding technique. This array antenna is suitable for use in radar communication systems.
Array, Curved microstripline, Radar communication, C-Band, X-Band
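The reported figures are tied together by the standard reflection-coefficient relations, which can be checked directly; for instance, the 2×2 array's |Γ| = 0.29 reproduces its reported VSWR of 1.82:

```python
import math

def vswr(gamma):
    """Voltage standing-wave ratio from reflection-coefficient magnitude |Γ|."""
    return (1 + gamma) / (1 - gamma)

def return_loss_db(gamma):
    """Return loss in dB from |Γ| (positive-dB convention)."""
    return -20 * math.log10(gamma)

print(round(vswr(0.29), 2))  # 1.82, as reported for the 2x2 array
```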
Hal Cooper1, Garud Iyengar1, and Ching-Yung Lin2
1Department of Industrial Engineering and Operations Research
2Department of Electrical Engineering, Columbia University, New York, USA
Graph databases and distributed graph computing systems have traditionally abstracted the design and execution of algorithms by encouraging users to take the perspective of lone graph objects, like vertices and edges. In this paper, we introduce the SmartGraph, a graph database that instead relies upon thinking like a smarter device often found in real-life computer networks: the router. Unlike existing methodologies that work at the subgraph level, the SmartGraph is implemented as a network of artificially intelligent Communicating Sequential Processes. The primary goal of this design is to give each "router" a large degree of autonomy. We demonstrate how this design facilitates the formulation and solution of an optimization problem which we refer to as the "router representation problem", wherein each router selects a beneficial graph data structure according to its individual requirements (including its local data structure and the operations requested of it). We demonstrate a solution to the router representation problem wherein the combinatorial global optimization problem with exponential complexity is reduced to a series of linear problems locally solvable by each AI router.
Intelligent Information, Database Systems, Graph Computing
Xinxin Shen and Kougen Zheng, Zhejiang University, Hangzhou, China
The substitution operation in the λ-calculus is treated as atomic, which makes it complex to analyze. To overcome this drawback, explicit substitution systems have been proposed; they bridge the gap between the theory of the λ-calculus and its implementation in programming languages and proof assistants. The λ_o-calculus is a name-free explicit substitution calculus. Intersection type systems for various explicit substitution calculi have been studied. In this paper, we turn our attention to the λ_o-calculus: we present an intersection type system for it and show that it satisfies the subject reduction property.
Intersection type, Lambda calculus, Director strings, Subject reduction
Nan Zhang and Zhenyu Liu, Zhejiang University, China
Ozgur Koray Sahingoz, Universiti Pendidikan Sultan Idris (UPSI), Malaysia
Assembly sequence planning is one of the most important activities in the assembly process and has proved to be an NP-hard optimization problem. In this article, a novel precedence-based assembly subsets prediction method is first proposed to generate feasible assembly sequences. Then, by using a simplified firework algorithm, the proposed method can easily obtain different optimal assembly sequences within a very short period of time even when the assembly product is complicated, so considerable computing time is saved. Unlike traditional assembly sequence planning methods, which optimize an assembly sequence by adjusting the order of a complete sequence, the proposed method breaks the rule of taking the assembly sequence as a whole and optimizes the solution during the construction of the assembly sequence. The method is compared with other algorithms and is verified to be successful in obtaining different optimal assembly sequences for assembly products while saving an enormous amount of computing time.
Assembly sequence planning, assembly subsets, precedence matrix, firework algorithm
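The precedence constraints that make a sequence feasible can be captured in a precedence matrix; the following sketch (our own illustration, not the paper's simplified firework algorithm) checks feasibility and enumerates the parts a constructive search may add next:

```python
def is_feasible(sequence, precedence):
    """Check an assembly sequence against a precedence matrix:
    precedence[i][j] == 1 means part i must be assembled before part j."""
    position = {part: idx for idx, part in enumerate(sequence)}
    n = len(precedence)
    for i in range(n):
        for j in range(n):
            if precedence[i][j] == 1 and position[i] > position[j]:
                return False
    return True

def feasible_next_parts(assembled, precedence):
    """Parts whose predecessors are all already assembled - the
    'assembly subsets' a constructive search may extend with."""
    n = len(precedence)
    done = set(assembled)
    return [j for j in range(n) if j not in done
            and all(precedence[i][j] == 0 or i in done for i in range(n))]

# Toy product of 3 parts: part 0 must precede parts 1 and 2.
prec = [[0, 1, 1],
        [0, 0, 0],
        [0, 0, 0]]
```

Optimizing during construction, as the paper proposes, amounts to choosing among `feasible_next_parts` at every step instead of permuting complete sequences.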
Yuka Toyoshima1, Yasuhiro Hayashi2 and Yasushi Kiyoki1
1Keio University, Japan
2Musashino University, Japan
Many environmental issues have occurred in this world, and these issues are common to all human beings. It is considered that environmental issues caused by humans exist at the "border" between nature and human society. In other words, finding the "border" may lead to determining the cause of environmental issues and discovering solutions. This paper presents an environment-visualization system with image-based retrieval and a distance calculation method as the first step of research towards finding the "border". We focused on the plastic garbage issue, which is related to SDG 14, and this study was made to find the "border": the source of the plastic garbage scattered in coastal areas. In addition, we aim to realize a system which enables people to share knowledge about the plastic issue in order to acquire knowledge of environmental issues and to promote concrete action towards sustainable nature and society. The system has three features: (1) a composition-based image retrieval function, (2) a spatio-temporal-based mapping function, and (3) a coast-area location-checking function for selected images. Feature (1) retrieves images highly related to a query image, dividing each image into three parts to separate nature, human society and the "border"; we use the Euclidean distance to calculate similarity and show the results in ranking format. Feature (2) is a mapping function using the spatio-temporal information that accompanies the images. Feature (3) judges with image processing whether the photographing spot is near the ocean or not and selects only the images that are near the ocean. We present several experimental results to clarify the feasibility and effectiveness of our method.
Environment-Visualization, Image-based Retrieval, Image Processing, Distance Calculation, SDGs (Sustainable Development Goals)
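Feature (1)'s similarity ranking by Euclidean distance can be sketched as follows; the feature vectors and image names are invented for illustration:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors
    (e.g. colour histograms of an image region)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank(query, database):
    """Return (name, distance) pairs, most similar image first."""
    return sorted(((name, euclidean(query, feat))
                   for name, feat in database.items()),
                  key=lambda pair: pair[1])

# Toy 3-bin colour histograms for three stored images.
db = {"coast_a": [0.8, 0.1, 0.1],
      "forest":  [0.1, 0.8, 0.1],
      "coast_b": [0.7, 0.2, 0.1]}
query = [0.78, 0.12, 0.1]
ranking = rank(query, db)
```

The ranking format of the output mirrors how the system presents its retrieval results.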
Chen Kim Lim, Kian Lam Tan and Nguarije Hambira
Universiti Pendidikan Sultan Idris (UPSI), Malaysia
The 21st century came with its own challenges as much as it brought various benefits through advances in technology. Cultural heritage is one such "casualty" of 21st-century developments, in that there has been a decline in appreciation and awareness of the importance of cultural heritage. Thus, the present study was necessitated with the primary aims of (i) preserving the intangible cultural heritage of the people of George Town through the development of the E-George Town Digital Heritage (E-GDH) system, (ii) developing an effective GUI for the E-GDH system in order to stimulate and captivate the attention of users, so as to raise awareness and educate the masses on the importance of cultural heritage, and (iii) evaluating the effectiveness of the developed system in relation to its objectives through the administration of questionnaires to target respondents. To this effect, the study employed the waterfall model to develop the E-GDH website. The study found that respondents (prior to using the E-GDH system) had no previous experience of oral storytelling from their parents. Overall, it was found that the GUI was pleasant and attractive for respondents and that they were able to learn easily as a result. Because respondents were able to learn with ease thanks to an effective GUI, the study also revealed that the content of the website was easy for them to understand and that the website was indeed helpful in enabling them to understand and appreciate cultural heritage. The meaning and conduct of the education sector in this era of advanced technology has shifted over the years, changing from teachers as the primary source of information to what is termed "learner-centred" education, where learners are given the leeway to learn, explore and make sense of the world around them, and the findings of this study are in line with this notion.
The E-GDH website could be used by schools in subjects such as history, where the teacher could use it as a reference point for lesson outcomes that deal with digital or intangible cultural heritage. The study thus contributes immensely to the understanding of cultural heritage by raising awareness and stimulating the interest of the young generation to appreciate and learn more about their cultural heritage. The importance of digitalising intangible cultural heritage cannot be emphasised enough, as recent studies have shown declining interest in this area, so the development of the E-GDH is a positive call to action in response to UNESCO's 2003 call for the preservation of intangible cultural heritage and, by extension, for educating and raising awareness on the importance of cultural heritage.
Cultural Education, Digital Library, Digital Cultural Heritage, Digital Asset Management System, Digital Preservation
Mingchen Li1, Zili Zhou1, 2 and Yanna Wang1
1Qufu Normal University, China and 2East China Normal University, China
In recent years, problem solving, automatic proof and human-like test-taking have become hot research topics. This paper focuses on solving physics problems posed in Chinese. Based on the analysis of a physics corpus, it is found that physics problems are made up of n-tuples containing concepts and relations between concepts, and that these n-tuples can be expressed in the form of a UP-graph (graph of understanding the problem), the semantic expression of a physics problem. The UP-graph is the basis of problem solving and is generated using a physical knowledge graph (PKG). However, current knowledge graphs are hard to use for problem solving because they cannot store problem-solving methods. This paper therefore presents a PKG model containing concepts and relations, in which concepts and relations are split into terms and unique IDs, and methods can easily be stored in the PKG as concepts. Based on the PKG, DKP-solving is proposed: a novel approach for solving physics problems that effectively combines rules, statistical methods and knowledge reasoning by integrating deep learning and the knowledge graph. Experimental results on a data set of real physics texts indicate that DKP-solving is effective for physics problem solving.
Knowledge Graph, Deep Learning, Problem Solving, Physical Problem
Giti Javidi and Ehsan Sheybani,University of South Florida, USA
This paper discusses an integrative model to raise interest among high school students in cybersecurity. As part of the model, several online modules have been created and tested by 30 high school teachers. We present the results of surveys used to collect data regarding teachers' perception of the modules and their preparedness to teach the content to their students. This is an ongoing project with the ultimate goal of developing strategies for addressing the shortage in the cybersecurity workforce.
K-12, Cybersecurity, Education, Workforce development, Teacher Education, Academic Engagement
Department of Information Technology, Indian Institute of Information Technology, Allahabad, India
Undergraduate students expect simple, interactive and understandable teaching methods. But in many universities, instructors simply follow the textbook and solve the examples given in it. In that case, students learn nothing beyond the textbook contents defined in the syllabus. This method of teaching will not motivate students to understand the subject in depth unless the industrial and practical applications of the subject are explained. Once students lose interest, the instructor also loses interest, which leads to poor student performance. Hence, it is essential to redesign the teaching method, especially for undergraduate students. This paper uses concept maps for teaching and learning a compiler design course, with assignments and problems related to research and industry.
Compiler Design, Concept Map, Computer Science, Teaching
Qingfeng Wu and Xu Chen
Department of Software Engineering, Xiamen University, Xiamen, China.
In this study we propose a 3D reconstruction workflow based on monocular video. The first step of our method is capturing video data of the target scene. Then, a video frame extraction algorithm extracts the video frames into a set of image sequences of the target scene, and an image similarity measurement algorithm is used to prune the video frames. After that, the parameters of the virtual camera in the target scene are estimated by the standard Structure from Motion method, which outputs sparse 3D point clouds; dense 3D point clouds of the target scene are then obtained through a Multi-View Stereo algorithm. A polygonal mesh is generated by the Poisson Surface Reconstruction algorithm, and the 3D scene model is finally obtained. Based on the above algorithms, we design a 3D reconstruction system that takes the video data of a target object as input and outputs the 3D scene model of the object. Finally, we verify the effectiveness and feasibility of the system through experimental analysis.
3D scene reconstruction, Monocular video, Structure from Motion
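The frame-extraction and similarity-pruning steps can be sketched as follows, with frames reduced to grey-level histograms; the distance measure and threshold are our own illustrative choices, not the system's actual algorithm:

```python
def hist_distance(h1, h2):
    """L1 distance between two normalised grey-level histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def select_keyframes(frames, threshold=0.3):
    """Keep a frame only if it differs enough from the last kept one,
    thinning near-duplicate video frames before Structure from Motion."""
    kept = [frames[0]]
    for f in frames[1:]:
        if hist_distance(f, kept[-1]) > threshold:
            kept.append(f)
    return kept

# Toy video: 5 frames as 3-bin histograms; frames 1 and 3 are near-duplicates.
frames = [[1, 0, 0], [0.95, 0.05, 0], [0.5, 0.5, 0], [0.45, 0.55, 0], [0, 0, 1]]
keyframes = select_keyframes(frames)
```

Thinning redundant frames this way keeps the SfM stage from wasting effort matching nearly identical views.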
Ivo S. M. de Oliveira1,2, Oscar A. C. Linares1, Ary H. M. de Oliveira3, Glenda M. Botelho3 and João Batista Neto1
1Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, Brazil.
2Instituto Federal do Tocantins, Campus de Paraíso do Tocantins, Paraíso do Tocantins, Brazil.
3Universidade Federal do Tocantins, Palmas, Brazil
Despite the large number of techniques and applications in the field of image segmentation, it is still an open research field. A recent trend in image segmentation is the use of graph theory. This work proposes an approach which combines community detection in multiplex networks, in which each layer represents a certain image feature, with superpixels. There are approaches for segmenting good-quality images that use a single feature or combine several image features into a single graph for community detection and segmentation. With multiplex networks, however, it is possible to use more than one image feature without the mathematical operations that can lead to loss of feature information during graph generation. The experiments presented in this work show that the method can offer quality, robust segmentations.
Image segmentation, Multiplex networks, Complex networks, Louvain, Superpixels
Yali Song and Jin Zhang
Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing, China
In recent years, research on iris recognition in the near-infrared has made great progress. However, many devices, such as most mobile phones, have no embedded near-infrared hardware. To use iris recognition on these devices, iris recognition in visible light is needed, but it faces many problems, including low recognition rates and poor robustness. In this paper, we first clarify the challenges in visible iris recognition. We evaluate the effectiveness of three traditional iris recognition methods on irises collected from smartphones in visible light; the results show that the traditional methods achieve an accuracy not exceeding 60% at best. We then summarize recent advances in visible iris recognition in three aspects: iris image acquisition, iris preprocessing and iris feature extraction methods. Finally, we list future research directions in visible iris recognition.
visible iris recognition, mobile phones, iris image acquisition, feature extraction
Suyeol Kim1, Chaehwan Hwang1, Jisu Kim2, Cheolhyeong Park2 and Deokwoo Lee2, 1Department of Biomedical Engineering, Keimyung University, Daegu, Republic of Korea and 2Department of Computer Engineering, Keimyung University, Daegu, Republic of Korea
Sleep apnea is considered one of the most critical problems of human health, and the respiration signal is also considered one of the most important bio-signals in the area of medicine. In this paper, we propose an approach to the detection and classification of respiratory status based on the cross-correlation between normal respiration and apnea and on the characteristics of the respiratory signals. The characteristics of the signals are extracted by frequency analysis. The proposed method is simple and straightforward, so it is workable in practice. To substantiate the proposed algorithm, experimental results are provided.
Respiration, Apnea, Fourier transform, Detection, Classification
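A minimal sketch of the cross-correlation idea (our own illustration with an invented template and threshold, not the paper's algorithm): a window that correlates strongly with a normal-respiration template is labelled normal, while a flat breath-hold yields near-zero correlation:

```python
import math

def normalized_xcorr(x, template):
    """Normalised cross-correlation between a signal window and a
    normal-respiration template (both taken zero-mean)."""
    mx = sum(x) / len(x)
    mt = sum(template) / len(template)
    xs = [v - mx for v in x]
    ts = [v - mt for v in template]
    num = sum(a * b for a, b in zip(xs, ts))
    den = math.sqrt(sum(a * a for a in xs) * sum(b * b for b in ts))
    return num / den if den else 0.0

def classify(window, template, threshold=0.5):
    """Label a window 'normal' if it correlates strongly with the
    template, else 'apnea'."""
    return "normal" if abs(normalized_xcorr(window, template)) >= threshold else "apnea"

template = [math.sin(2 * math.pi * 0.25 * t) for t in range(32)]  # ~0.25 Hz breathing
normal_window = [0.9 * v for v in template]    # scaled copy of normal breathing
apnea_window = [0.02] * 32                     # flat breath-hold
```

In the paper the signal characteristics are additionally extracted by frequency analysis (e.g. Fourier transform), which this sketch omits.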
Jisu Kim, Cheolhyeong Park, Ju O Kim and Deokwoo Lee, Department of Computer Engineering, Keimyung University, Daegu 42601, Republic of Korea
This paper chiefly deals with techniques of stereo vision, particularly the procedure of stereo matching; in addition, the proposed approach deals with detection of regions of occlusion. Prior to carrying out stereo matching, image segmentation is conducted in order to achieve precise matching results. In practice, stereo matching algorithms sometimes suffer from insufficient accuracy if occlusion is inherent in the scene of interest. The search for matching regions is conducted based on cross-correlation and on finding the region with the minimum mean squared error of the difference between the areas of interest defined in the matching window. The Middlebury dataset is used for experiments and for comparison with existing results, and the proposed algorithm shows better performance than existing matching algorithms. To evaluate the proposed algorithm, we compare its disparity results to existing ones.
Occlusion, Stereo vision, Segmentation, Matching
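The minimum-MSE window search along the epipolar line can be sketched as follows; the toy images and parameters are invented for illustration:

```python
def window_mse(left, right, row, col_l, col_r, w):
    """Mean squared error between two w x w windows of the left and
    right images (lists of lists of grey values)."""
    err = 0
    for dy in range(w):
        for dx in range(w):
            d = left[row + dy][col_l + dx] - right[row + dy][col_r + dx]
            err += d * d
    return err / (w * w)

def best_disparity(left, right, row, col, w, max_disp):
    """Search along the epipolar line for the right-image column whose
    window minimises the MSE; the column offset is the disparity."""
    best_d, best_err = 0, float("inf")
    for d in range(min(max_disp, col) + 1):
        err = window_mse(left, right, row, col, col - d, w)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Toy pair: the bright feature at left column 3 appears at right column 1,
# i.e. a disparity of 2.
left = [[0, 0, 0, 9, 9, 0]] * 3
right = [[0, 9, 9, 0, 0, 0]] * 3
```

Occluded regions show up as windows for which even the best candidate keeps a large residual error, which is one cue for detecting them.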
Cheolhyeong Park, Jisu Kim and Deokwoo Lee Department of Computer Engineering, Keimyung University, Daegu, 42601, Republic of Korea
This paper chiefly focuses on the calibration of a depth camera system, particularly a stereo camera. Owing to the complexity of estimating camera parameters (it is an inverse problem), calibration is still a challenging problem in computer vision. As in previous calibration methods, a checkerboard is used in this work; however, corner detection is carried out by employing a neural network. Since corner detection in previous work depends on the external environment, such as ambient light and the quality of the checkerboard itself, the geometric characteristics of the corners are learned. The proposed method detects the checkerboard region in the captured images (a pair of images), and then the corners are detected; detection accuracy is increased by training the weights of a deep neural network. The detection procedure is detailed in this paper. A quantitative evaluation of the method is given by calculating the re-projection error, and a comparison is performed with the most popular method, Zhang's calibration. The experimental results not only validate the accuracy of the calibration, but also show its efficiency.
Calibration, Neural network, Deep learning, Re-projection error, Depth camera
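The re-projection error used for the quantitative evaluation is typically the RMS distance between detected and re-projected corner positions, which can be sketched as follows (toy corner coordinates invented for illustration):

```python
def reprojection_rmse(detected, projected):
    """Root-mean-square distance between detected corner positions and
    the positions re-projected with the estimated camera parameters."""
    sq = [(dx - px) ** 2 + (dy - py) ** 2
          for (dx, dy), (px, py) in zip(detected, projected)]
    return (sum(sq) / len(sq)) ** 0.5

# Toy checkerboard corners: detected in the image vs. re-projected.
detected = [(10.0, 10.0), (20.0, 10.0), (10.0, 20.0)]
projected = [(10.3, 10.4), (19.6, 10.3), (10.0, 20.0)]
error = reprojection_rmse(detected, projected)
```

A smaller RMS error indicates that the estimated intrinsic and extrinsic parameters explain the observed corners better, which is the basis for the comparison with Zhang's method.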
O. Juiña1, S. C. Hu2 and T. Lin3, 1Department of Mechanical and Automation Engineering, National Taipei University of Technology, Taipei, 10608 Taiwan and 2,3Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, Taipei, 10608 Taiwan
In the field of clean room systems, the need for high standards of cleanliness and environmental control has driven the creation of new equipment to solve the various problems of monitoring particle filtration. The system proposed below has been developed based on new technologies: the evolution of camera sensors, the use of a laser beam to visualize particles, and the link between programming algorithms and free platforms. The first system consisted of a Canon 650D camera with a 17-55 mm lens and an LD-pumped all-solid-state green laser. Tests were performed inside a controlled environment from which external light was excluded, using a transparent FOUP (Front Opening Unified Pod) into which a sample of white marble dust was introduced to observe its dispersion among the particles. For comparison of results, we used a second system composed of a high-resolution CMOS global-shutter camera sensor (LT225). The third stage is image processing using OpenCV libraries, in this case EmguCV. The fundamental principle of the image processing is the reading of each pixel and its intensity; when processing black-and-white images, each pixel receives a value from 0 to 255, with 0 being black and 255 white. The program algorithm responds to these values and separates the high-intensity values from the low-intensity values. In this case, the green color becomes an important value which, by means of mathematical filters, generates a clearer image of where the particles are located.
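The pixel-intensity thresholding described above can be sketched in pure Python; the flood-fill blob counting and the toy image are our own illustration, not the EmguCV pipeline itself:

```python
def count_particles(image, threshold=128):
    """Binarise an 8-bit grey image (0 = black, 255 = white) at the given
    intensity and count 4-connected bright blobs, as a toy stand-in for
    laser-illuminated particle detection."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:  # flood-fill one blob
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and not seen[y][x] and image[y][x] >= threshold):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Toy frame with two bright particle traces against a dark background.
image = [
    [0, 0, 200, 0,   0],
    [0, 0, 210, 0, 180],
    [0, 0,   0, 0, 190],
]
```

In the real system the same separation of high- from low-intensity values is performed on the laser-illuminated green channel with OpenCV/EmguCV filters.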
TUI Infotec GmbH, Contracting&Inventory – Product&Delivery, Hannover, Germany
In the context of advancing digitalization in both the private and public sectors, agile and lean management processes, methods and concepts are continuously gaining impact. A concept often used in lean software development is the minimum viable product (MVP): a functionally reduced but fully working piece of software built especially for experimental reasons. As for every piece of software, the system and application architecture have a huge impact on MVP development. This article carves out the area of conflict between application architectures on the one hand and the development of MVPs on the other, and submits a proposal for managing this issue.
MVP, Software Architecture, Software Development, Lean Development
Copyright © CoNeCo-2019