Proceedings of the International Workshop on Computational Intelligence in Security for Information Systems CISIS 2008


Book Description

The research scenario in advanced systems for protecting critical infrastructures and for deeply networked information tools highlights a growing link between security issues and the need for intelligent processing abilities in the area of information systems. To face the ever-evolving nature of cyber-threats, monitoring systems must have adaptive capabilities for continuous adjustment and timely, effective response to modifications in the environment. Moreover, the risks of improper access pose the need for advanced identification methods, including protocols to enforce computer security policies and biometry-related technologies for physical authentication. Computational Intelligence methods offer a wide variety of approaches that can be fruitful in those areas, and can play a crucial role in the adaptive process by their ability to learn empirically and adapt a system’s behaviour accordingly. The International Workshop on Computational Intelligence for Security in Information Systems (CISIS) proposes a meeting ground to the various communities involved in building intelligent systems for security, namely: information security, data mining, adaptive learning methods and soft computing among others. The main goal is to allow experts and researchers to assess the benefits of learning methods in the data-mining area for information-security applications. The Workshop offers the opportunity to interact with the leading industries actively involved in the critical area of security, and have a picture of the current solutions adopted in practical domains. This volume of Advances in Soft Computing contains accepted papers presented at CISIS’08, which was held in Genova, Italy, on October 23rd–24th, 2008.




Intermittently Connected Mobile Ad Hoc Networks


Book Description

In the last few years, there has been extensive research activity in the emerging area of Intermittently Connected Mobile Ad Hoc Networks (ICMANs). By considering the intermittent connectivity found in most real-world mobile environments, without placing any restrictions on users’ behavior, ICMANs are formed without any assumption about the existence of an end-to-end path between two nodes wishing to communicate. This differs from conventional Mobile Ad Hoc Networks (MANETs), which have been implicitly viewed as a connected graph with complete paths established between every pair of nodes. In conventional MANETs, node mobility is considered a challenge that must be handled properly to enable seamless communication between nodes. To overcome intermittent connectivity in the ICMANs context, however, mobility is recognized as a critical enabler of data communications between nodes that may never be part of the same connected portion of the network. This comes at the cost of considerable additional delay in data forwarding, since data are often stored and carried by intermediate nodes waiting for mobility to generate the next forwarding opportunity, which may bring the data closer to the destination. Such large delays primarily limit ICMANs to applications that can tolerate delays well beyond traditional forwarding delays. ICMANs belong to the family of delay tolerant networks (DTNs). However, the unique characteristics derived from MANETs (e.g., self-organization, random mobility, and ad hoc connections) distinguish ICMANs from other typical DTNs such as the interplanetary network (IPN), which has an infrastructure-based architecture. By allowing mobile nodes to connect and disconnect according to their behaviors and preferences, ICMANs make a number of novel applications possible in the field of MANETs. For example, there is a growing demand for efficient architectures for deploying opportunistic content distribution systems over ICMANs, because a large number of smart handheld devices with powerful functions enable mobile users to exploit low-cost wireless connectivity such as Bluetooth and IEEE 802.11 to share and exchange multimedia content anytime, anywhere. This phenomenal growth of content-rich services has promoted a new kind of networking in which content is delivered from its source (referred to as the publisher) towards interested users (referred to as subscribers) rather than towards pre-specified destinations. Compared to the extensive research on routing and forwarding in ICMANs and even DTNs, opportunistic content distribution is still in its early stages and has not been widely addressed. With all this in mind, this book provides an in-depth discussion of the latest research efforts on opportunistic content distribution over ICMANs.
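The store-carry-and-forward behavior described above can be pictured with a minimal sketch. The contact trace, node names, and the epidemic-style replication rule below are illustrative assumptions for this sketch, not a protocol taken from the book:

# A minimal store-carry-and-forward simulation over a scripted contact
# trace. The trace, node names, and epidemic-style replication rule are
# illustrative assumptions, not the book's protocol.

# Each entry is (time, node_a, node_b): an opportunistic contact created
# by node mobility. There is never a complete end-to-end path from A to D.
contact_trace = [
    (1, "A", "B"),   # A meets B; no link to D exists yet
    (4, "B", "C"),   # B has moved; it now meets C
    (9, "C", "D"),   # C finally comes within range of the destination
]

def deliver(source, destination, trace):
    """Replicate a single bundle at every contact (epidemic routing)
    and report when a carrier finally meets the destination."""
    carriers = {source}          # nodes currently storing the bundle
    for time, a, b in sorted(trace):
        if a in carriers or b in carriers:
            carriers |= {a, b}   # store-and-carry: both now hold a copy
        if destination in carriers:
            return time          # delivery delay, in trace time units
    return None                  # mobility never produced a usable path

delay = deliver("A", "D", contact_trace)
print(f"bundle delivered after {delay} time units")  # -> 9

Even this toy run shows the defining trade-off: the bundle reaches D only at time 9, after mobility has produced three successive contacts, even though no end-to-end path from A to D ever existed.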




High Performance Computing - HiPC 2004


Book Description

This book constitutes the refereed proceedings of the 11th International Conference on High-Performance Computing, HiPC 2004, held in Bangalore, India in December 2004. The 48 revised full papers presented were carefully reviewed and selected from 253 submissions. The papers are organized in topical sections on wireless network management, compilers and runtime systems, high performance scientific applications, peer-to-peer and storage systems, high performance processors and routers, grids and storage systems, energy-aware and high-performance networking, and distributed algorithms.




Security for Cloud Storage Systems


Book Description

Cloud storage is an important service of cloud computing, which offers services for data owners to host their data in the cloud. This new paradigm of data hosting and data access introduces two major security concerns. The first is the protection of data integrity: data owners may not fully trust the cloud server and may worry that data stored in the cloud could be corrupted or even removed. The second is data access control: data owners may worry that dishonest servers will grant data access to unauthorized users for profit, so they can no longer rely on the servers to enforce access control. To protect data integrity in the cloud, an efficient and secure dynamic auditing protocol is introduced, which supports both dynamic auditing and batch auditing. To ensure data security in the cloud, two efficient and secure data access control schemes are introduced in this brief: ABAC for single-authority systems and DAC-MACS for multi-authority systems. While Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a promising technique for access control of encrypted data, existing schemes cannot be directly applied to data access control in cloud storage systems because of the attribute revocation problem. To solve the attribute revocation problem, new revocable CP-ABE methods are proposed in both ABAC and DAC-MACS.
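To make the CP-ABE idea concrete, the toy sketch below checks whether a user's attribute set satisfies a ciphertext's access policy. It only illustrates the policy-satisfaction logic; the actual ABAC and DAC-MACS constructions rest on pairing-based cryptography, and the policy shape and attribute names here are invented for illustration:

# A toy illustration of the policy-satisfaction idea behind CP-ABE:
# a ciphertext carries a boolean policy over attributes, and a key
# decrypts only if its attribute set satisfies that policy. The real
# ABAC/DAC-MACS constructions use pairing-based cryptography; the
# policy shape and attribute names here are invented for illustration.

def satisfies(policy, attributes):
    """Evaluate an (op, children...) policy tree against a set of
    attribute strings. Leaves are plain attribute names."""
    if isinstance(policy, str):              # leaf: a single attribute
        return policy in attributes
    op, *children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)

# "(doctor AND cardiology) OR auditor" as an access policy.
policy = ("OR", ("AND", "doctor", "cardiology"), "auditor")

print(satisfies(policy, {"doctor", "cardiology"}))  # True: may decrypt
print(satisfies(policy, {"doctor", "oncology"}))    # False: denied

Attribute revocation, the problem the book's revocable CP-ABE methods address, corresponds to removing an attribute from a user's key so that policies the key previously satisfied no longer decrypt for that user.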




Euro-Par 2013: Parallel Processing Workshops


Book Description

This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 19th International Conference on Parallel Computing, Euro-Par 2013, held in Aachen, Germany, in August 2013. The 99 papers presented were carefully reviewed and selected from 145 submissions. The papers come from seven workshops that have been co-located with Euro-Par in previous years:

- BigDataCloud (Second Workshop on Big Data Management in Clouds)
- HeteroPar (11th Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms)
- HiBB (Fourth Workshop on High Performance Bioinformatics and Biomedicine)
- OMHI (Second Workshop on On-chip Memory Hierarchies and Interconnects)
- PROPER (Sixth Workshop on Productivity and Performance)
- Resilience (Sixth Workshop on Resiliency in High Performance Computing with Clusters, Clouds, and Grids)
- UCHPC (Sixth Workshop on UnConventional High Performance Computing)

as well as six newcomers:

- DIHC (First Workshop on Dependability and Interoperability in Heterogeneous Clouds)
- FedICI (First Workshop on Federative and Interoperable Cloud Infrastructures)
- LSDVE (First Workshop on Large Scale Distributed Virtual Environments on Clouds and P2P)
- MHPC (Workshop on Middleware for HPC and Big Data Systems)
- PADABS (First Workshop on Parallel and Distributed Agent Based Simulations)
- ROME (First Workshop on Runtime and Operating Systems for the Many-core Era)

All these workshops focus on the promotion and advancement of all aspects of parallel and distributed computing.




Data Deduplication Approaches


Book Description

In the age of data science, the rapidly increasing amount of data is a major concern in numerous applications of computing operations and data storage. Duplicated or redundant data is a central challenge in data science research. Data Deduplication Approaches: Concepts, Strategies, and Challenges shows readers the various methods that can be used to eliminate multiple copies of the same files as well as duplicated segments or chunks of data within the associated files. Because data duplication is ever-increasing, deduplication has become an especially useful field of research for storage environments, in particular persistent data storage. Data Deduplication Approaches provides readers with an overview of the concepts and background of data deduplication approaches, then proceeds to demonstrate in technical detail the strategies and challenges of real-time implementations for handling big data, data science, data backup, and recovery. The book also includes future research directions, case studies, and real-world applications of data deduplication, focusing on reduced storage, backup, recovery, and reliability.

- Includes data deduplication methods for a wide variety of applications
- Includes concepts and implementation strategies that will help the reader use the suggested methods
- Provides a robust set of methods that will help readers appropriately and judiciously choose the methods suitable for their applications
- Focuses on reduced storage, backup, recovery, and reliability, the most important aspects of implementing data deduplication approaches
- Includes case studies
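As a concrete illustration of chunk-level deduplication, the sketch below splits data into fixed-size chunks, fingerprints each chunk with SHA-256, and stores a chunk only the first time its fingerprint appears. The chunk size and the in-memory index are illustrative choices; real systems often use content-defined chunking and a persistent fingerprint index:

# A minimal sketch of chunk-level deduplication: split data into
# fixed-size chunks, fingerprint each chunk with SHA-256, and store a
# chunk only the first time its fingerprint is seen. The chunk size
# and in-memory index are illustrative choices, not a specific
# system's design.

import hashlib

CHUNK_SIZE = 4096          # assumed fixed chunk size, in bytes

chunk_store = {}           # fingerprint -> chunk bytes (stored once)

def dedup_write(data: bytes) -> list[str]:
    """Store data, returning the recipe (list of fingerprints)
    needed to reassemble it later."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(fp, chunk)   # skip duplicate chunks
        recipe.append(fp)
    return recipe

def dedup_read(recipe: list[str]) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(chunk_store[fp] for fp in recipe)

data = (b"A" * CHUNK_SIZE) * 5 + (b"B" * CHUNK_SIZE) * 3
recipe = dedup_write(data)
assert dedup_read(recipe) == data
print(f"{len(recipe)} chunks referenced, "
      f"{len(chunk_store)} stored uniquely")   # -> 8 referenced, 2 stored

Here eight chunk references are backed by only two stored chunks, which is exactly the kind of storage reduction the book's approaches aim for.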




Computer Architecture


Book Description

Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students, and practitioners of computer design for over 20 years. The sixth edition of this classic textbook from Hennessy and Patterson, winners of the 2017 ACM A.M. Turing Award recognizing contributions of lasting and major technical importance to the computing field, is fully revised with the latest developments in processor and system architecture. The text now features examples from the RISC-V ("RISC Five") instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.

- Winner of a 2019 Textbook Excellence Award (Texty) from the Textbook and Academic Authors Association
- Includes a new chapter on domain-specific architectures, explaining how they are the only path forward for improved performance and energy efficiency given the end of Moore's Law and Dennard scaling
- Features the first publication of several DSAs from industry
- Features extensive updates to the chapter on warehouse-scale computing, with the first public information on the newest Google WSC
- Offers updates to other chapters, including new material on the use of stacked DRAM; data on the performance of the new NVIDIA Pascal GPU vs. the new AVX-512 Intel Skylake CPU; and extensive additions covering multicore architecture and organization
- Includes "Putting It All Together" sections near the end of every chapter, providing real-world technology examples that demonstrate the principles covered in each chapter
- Includes review appendices in the printed text and additional reference appendices available online
- Includes updated and improved case studies and exercises
- ACM named John L. Hennessy and David A. Patterson recipients of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry
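A small worked example in the spirit of the book's quantitative approach: Amdahl's Law, one of its foundational formulas, bounds overall speedup by the fraction of execution time an enhancement actually touches. The fraction and local speedup below are invented numbers for illustration:

# Amdahl's Law: overall speedup is limited by the fraction of
# execution time the enhancement applies to. The 0.80 fraction and
# 10x local speedup are invented numbers for illustration.

def amdahl_speedup(fraction_enhanced: float, local_speedup: float) -> float:
    """Overall speedup when `fraction_enhanced` of the original
    execution time is sped up by `local_speedup`."""
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / local_speedup)

# An accelerator that makes 80% of a workload 10x faster yields
# well under 10x overall: 1 / (0.2 + 0.8/10) = 3.57.
print(round(amdahl_speedup(0.80, 10.0), 2))   # -> 3.57

This kind of back-of-the-envelope calculation shows why an accelerator must cover a large fraction of a workload to deliver a large overall gain.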




Security Engineering for Cloud Computing: Approaches and Tools


Book Description

"This book provides a theoretical and academic description of Cloud security issues, methods, tools and trends for developing secure software for Cloud services and applications"--Provided by publisher.