Semantic Service Integration for Smart Grids


Book Description

This research addresses the semantic integration of data services in smart grids by following the proposed (S2)In approach, which was developed according to design science guidelines. The approach identifies standards and specifications that are integrated to form the basis of the (S2)In architecture. A process model, introduced at the outset, serves as the framework for developing the target architecture. The first step of the process defines requirements for smart grid ICT architectures, derived from established studies and divided into two classes: architecture requirements and non-functional requirements (NFR). Based on the architecture requirements, three specifications were selected: the IEC CIM as a domain-specific data model, OPC UA as a communication standard with particular strengths in information modeling, and WSMO as an approach to realizing Semantic Web Services. The next step develops both a semantic information model (integrating CIM and OPC UA) and semantic services (integrating CIM and WSMO). These two components are then combined into the target architecture, which enables precise service descriptions as well as service composition and semi-automatic execution. Finally, the NFR are used to evaluate the architecture against simulated, representative use cases.
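The core idea of grounding service descriptions in a shared domain model can be illustrated compactly. The following is a minimal sketch in Python using rdflib; all namespaces, properties, and the service itself are hypothetical stand-ins, and the actual (S2)In architecture builds on WSMO rather than this simplified RDF encoding.

```python
# A minimal sketch, assuming rdflib is installed. All namespaces, class
# names, and properties below are illustrative stand-ins, not the actual
# (S2)In vocabulary (which builds on WSMO/WSML rather than plain RDF).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CIM = Namespace("http://iec.ch/TC57/CIM#")            # stand-in URI for IEC CIM concepts
SRV = Namespace("http://example.org/s2in/service#")   # hypothetical service vocabulary

g = Graph()
g.bind("cim", CIM)
g.bind("srv", SRV)

# A hypothetical meter-reading service whose output is typed with a CIM
# concept, so that a matchmaker can discover and compose it semantically.
g.add((SRV.MeterReadingService, RDF.type, SRV.SemanticService))
g.add((SRV.MeterReadingService, SRV.hasOutputConcept, CIM.MeterReading))
g.add((SRV.MeterReadingService, RDFS.label,
       Literal("Delivers interval readings from smart meters")))

print(g.serialize(format="turtle"))
```

Typing the service's output with a shared CIM concept is what lets a matchmaker find candidate services for a request expressed in the same vocabulary, which is the prerequisite for the semi-automatic composition and execution the architecture targets.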







Methods and Concepts for Designing and Validating Smart Grid Systems


Book Description

Energy efficiency and low-carbon technologies are key contributors to curtailing the emission of greenhouse gases that continue to cause global warming. The efforts to reduce greenhouse gas emissions also strongly affect electrical power systems. Renewable sources, storage systems, and flexible loads provide new system controls, but power system operators and utilities have to deal with their fluctuating nature, limited storage capabilities, and a typically more complex infrastructure with a growing number of heterogeneous components. In addition to the technological change brought by new components, the liberalization of energy markets and new regulatory rules bring contextual change that necessitates restructuring the design and operation of future energy systems. Sophisticated component design methods, intelligent information and communication architectures, automation and control concepts, new and advanced markets, as well as proper standards are necessary in order to manage the higher complexity of the intelligent power systems that form smart grids. Due to the considerably higher complexity of such cyber-physical energy systems, comprising the power system, automation, protection, information and communication technology (ICT), and system services, it is expected that the design and validation of smart-grid configurations will play a major role in future technology and system developments. However, an integrated approach for the design and evaluation of smart-grid configurations incorporating these diverse constituent parts remains elusive: currently available validation approaches focus mainly on component-oriented methods. In order to guarantee a sustainable, affordable, and secure supply of electricity through the transition to a future smart grid of considerably higher complexity and innovation, new design, validation, and testing methods appropriate for cyber-physical systems are required. This book therefore summarizes recent research results and developments related to the design and validation of smart grid systems.




Smart Grids


Book Description

What exactly is the smart grid? Why is it receiving so much attention? What are utilities, vendors, and regulators doing about it? Answering these questions and more, Smart Grids: Infrastructure, Technology, and Solutions gives readers a clearer understanding of the drivers and infrastructure of one of the most talked-about topics in the electric utility market: the smart grid. This book brings together the knowledge and views of a vast array of experts and leaders in their respective fields.

Key features:
- Describes the impetus for change in the electric utility industry
- Discusses the business drivers, benefits, and market outlook of the smart grid initiative
- Examines the technical framework of enabling technologies and smart solutions
- Identifies the role of technology developments and coordinated standards in smart grid, including the various initiatives and organizations helping to drive the smart grid effort
- Presents both current technologies and forward-looking ideas on new technologies
- Discusses barriers and critical factors for a successful smart grid from a utility, regulatory, and consumer perspective
- Summarizes recent smart grid initiatives around the world
- Discusses the outlook of the drivers and technologies for the next-generation smart grid

The smart grid is defined not in terms of what it is, but of what it achieves and the benefits it brings to the utility, consumer, society, and environment. Exploring the current situation and future challenges, the book provides a global perspective on how the smart grid integrates twenty-first-century technology with the twentieth-century power grid.




Populating a Linked Data Entity Name System


Book Description

Resource Description Framework (RDF) is a graph-based data model used to publish data as a Web of Linked Data. RDF is an emergent foundation for large-scale data integration, the problem of providing a unified view over multiple data sources. An Entity Name System (ENS) is a thesaurus for entities, and is a crucial component in a data integration architecture. Populating a Linked Data ENS is equivalent to solving an Artificial Intelligence problem called instance matching, which concerns identifying pairs of entities referring to the same underlying entity. This publication presents an instance matcher with four properties: automation, heterogeneity, scalability, and domain independence. Automation is addressed by employing inexpensive but well-performing heuristics to automatically generate a training set, which is employed by other machine learning algorithms in the pipeline. Data-driven alignment algorithms are adapted to deal with structural heterogeneity in RDF graphs. Domain independence is established by actively avoiding prior assumptions about input domains, and through evaluations on 10 RDF test cases. The full system is scaled by implementing it on cloud infrastructure using MapReduce algorithms.
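To make the instance-matching task concrete, the following is a minimal sketch in Python using a naive token-overlap heuristic. The matcher described in the book is far more sophisticated: such inexpensive heuristics are used only to generate a training set, which then drives machine learning algorithms in a scalable MapReduce pipeline.

```python
# A toy instance matcher: two entities are matched if the Jaccard overlap
# of the tokens in their property values exceeds a threshold. All entity
# IDs and properties below are illustrative, not from the book's test cases.
from itertools import product

def tokens(entity):
    """Bag of lowercase tokens drawn from all property values of an entity."""
    return {tok for value in entity.values() for tok in str(value).lower().split()}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def match_instances(source, target, threshold=0.5):
    """Pair entities from two datasets whose token overlap exceeds a threshold."""
    return [(sid, tid)
            for (sid, s), (tid, t) in product(source.items(), target.items())
            if jaccard(tokens(s), tokens(t)) >= threshold]

# Two toy descriptions of the same real-world entity, using different schemas.
dbpedia = {"dbp:Berlin": {"label": "Berlin", "country": "Germany"}}
geonames = {"gn:2950159": {"name": "Berlin", "inCountry": "Germany"}}
print(match_instances(dbpedia, geonames))  # [('dbp:Berlin', 'gn:2950159')]
```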




Advances in Ontology Design and Patterns


Book Description

The study of patterns in the context of ontology engineering for the Semantic Web was pioneered more than a decade ago by Blomqvist, Sandkuhl, and Gangemi. Since then, this line of research has flourished and led to the development of ontology design patterns, knowledge patterns, and linked data patterns: the patterns as they are known by ontology designers, knowledge engineers, and linked data publishers, respectively. A key characteristic of these patterns is that they are modular and reusable solutions to recurrent problems in ontology engineering and linked data publishing. This book contains recent contributions that advance the state of the art in the theory and use of ontology design patterns. The papers collected here cover a range of topics, from a method for instantiating content patterns and a proposal for documenting them, to a number of patterns that emerge in ontology modeling in various situations.




Ontology Engineering with Ontology Design Patterns: Foundations and Applications


Book Description

The use of ontologies for data and knowledge organization has become ubiquitous in many data-intensive and knowledge-driven application areas, in science, industry, and the humanities. At the same time, ontology engineering best practices continue to evolve. In particular, modular ontology modeling based on ontology design patterns is establishing itself as an approach for creating versatile and extendable ontologies for data management and integration. This book is the first comprehensive treatment of ontology engineering with ontology design patterns. It contains both advanced and introductory material accessible to readers with only a minimal background in ontology modeling. Some introductory material is written in the style of tutorials, and specific chapters are devoted to examples and applications. Other chapters convey the state of the art in research on ontology design patterns. The editors and contributing authors include the leading figures in the development of pattern-driven ontology engineering.
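For readers new to the topic, the flavour of a content pattern can be conveyed in a few triples. The following is a minimal, hypothetical sketch using Python's rdflib; the AgentRole-style pattern shown is a common textbook example, not a specific chapter of this book.

```python
# A tiny content pattern and one instantiation of it. The namespace and
# all names are illustrative; real patterns are typically published as
# small OWL modules with documented intent and competency questions.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/pattern/agentrole#")

g = Graph()
g.bind("ex", EX)

# Pattern schema: the small, reusable module.
g.add((EX.Agent, RDF.type, RDFS.Class))
g.add((EX.Role, RDF.type, RDFS.Class))
g.add((EX.performs, RDFS.domain, EX.Agent))
g.add((EX.performs, RDFS.range, EX.Role))

# Pattern instantiation: reuse of the module in a concrete domain.
g.add((EX.alice, RDF.type, EX.Agent))
g.add((EX.reviewer, RDF.type, EX.Role))
g.add((EX.alice, EX.performs, EX.reviewer))

print(g.serialize(format="turtle"))
```

Because the schema part is self-contained, the same module can be plugged into many domain ontologies unchanged, which is exactly the modularity and reusability the pattern approach is after.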




Querying a Web of Linked Data


Book Description

In recent years, an increasing number of organizations and individuals have contributed to the Semantic Web by publishing data according to the Linked Data principles. In addition, a significant body of Semantic Web research studies various aspects of knowledge representation and automated reasoning over collections of such data. However, a challenge that is crucial for achieving the vision of a Semantic Web, but that has not yet been studied to a comparable extent, is to enable automated software agents to operate directly on decentralized Linked Data that is distributed over the WWW. In particular, fundamental questions related to querying this data on the WWW have received very limited research attention. This book contributes towards filling this gap by studying the foundations of declarative queries over Linked Data on the WWW. Our particular focus is on approaches that use the SPARQL query language and execute queries by traversing Linked Data live during the query execution process. More specifically, we first provide formal foundations for adapting SPARQL to this context. Thereafter, we use an abstract machine model to formally show the computational feasibility and related properties of the resulting types of SPARQL queries. Additionally, we investigate fundamental properties of the traversal-based approach to query execution, which is tailored to the use case of querying Linked Data directly on the WWW.
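The traversal-based idea can be sketched in a few dozen lines: dereference seed URIs, discover new URIs in the retrieved triples, keep dereferencing, and evaluate the query over everything fetched so far. The following is a didactic approximation in Python using rdflib; the book's formal treatment (SPARQL fragments over an abstract machine model) is far more rigorous, and the URIs shown are just examples.

```python
# A minimal sketch of link-traversal-based query execution, assuming
# rdflib is installed and the seed documents are reachable over HTTP.
from rdflib import Graph, URIRef

def traverse_and_query(seed_uris, sparql, max_docs=10):
    """Dereference Linked Data documents live during execution, then
    evaluate the query over the union of all retrieved triples."""
    g = Graph()
    frontier, seen = list(seed_uris), set()
    while frontier and len(seen) < max_docs:
        uri = frontier.pop(0)
        if uri in seen:
            continue
        seen.add(uri)
        try:
            g.parse(uri)          # HTTP dereference of a Linked Data URI
        except Exception:
            continue              # skip unreachable or non-RDF documents
        for s, _, o in g:         # enqueue newly discovered URIs
            frontier.extend(t for t in (s, o)
                            if isinstance(t, URIRef) and t not in seen)
    return list(g.query(sparql))

rows = traverse_and_query(
    ["http://dbpedia.org/resource/Berlin"],
    "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Berlin> ?p ?o } LIMIT 5",
)
```

Note that, unlike endpoint-based evaluation, the data consulted here is discovered during execution, which is precisely why questions of completeness and computational feasibility become non-trivial.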




Publishing and Consuming Linked Data


Book Description

This dissertation addresses several problems in the context of publishing and consuming Linked Data. It describes these problems from the perspectives of three stakeholders: the Linked Data provider, the developer, and the scientist. The Linked Data provider is faced with impractical data re-use and costly Linked Data hosting solutions. Developers face difficulties in finding, navigating, and using Linked Datasets. Scientists lack the resources and methods to evaluate their work on Linked Data at large. This dissertation presents a number of novel approaches that address these issues:

- The LOD Laundromat: a centralized service that re-publishes cleaned, queryable, and structurally annotated Linked Datasets. In 2015 the Laundromat was awarded first prize in the Dutch national Linked Open Data competition, and third prize in the European equivalent.
- SampLD: a relevance-based sampling algorithm that enables publishers to decrease Linked Data hosting costs.
- YASGUI: a feature-rich query editor for accessing SPARQL endpoints.
- LOD Lab: an evaluation paradigm that enables scientists to increase the breadth and scale of their Linked Data evaluations.

This work provides a unique overview of problems related to publishing and consuming Linked Data. The novel approaches presented here improve the state of the art for Linked Data publishers, developers, and scientists, and are a step towards a web of Linked Data that is more accessible and technically scalable.
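As an illustration of the plumbing such tools abstract away, the following is a minimal sketch of the HTTP request a query editor like YASGUI issues under the hood. It assumes only the Python standard library, the standard SPARQL 1.1 Protocol, and the public DBpedia endpoint; it is not code from the dissertation.

```python
# Query a SPARQL endpoint over HTTP and decode the standard JSON results
# format. Any SPARQL 1.1 endpoint would do; DBpedia is used as an example.
import json
import urllib.parse
import urllib.request

def sparql_select(endpoint, query):
    """Run a SELECT query against a SPARQL endpoint, returning JSON bindings."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

for row in sparql_select(
        "https://dbpedia.org/sparql",
        "SELECT ?type WHERE { <http://dbpedia.org/resource/Berlin> a ?type } LIMIT 3"):
    print(row["type"]["value"])
```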




Reasoning Techniques for the Web of Data


Book Description

Linked Data publishing has brought about a novel “Web of Data”: a wealth of diverse, interlinked, structured data published on the Web. These Linked Datasets are described using the Semantic Web standards and are openly available to all, produced by governments, businesses, communities, and academia alike. However, the heterogeneity of such data, in terms of how resources are described and identified, poses major challenges to potential consumers. Herein, we examine use cases for pragmatic, lightweight reasoning techniques that leverage Web vocabularies (described in RDFS and OWL) to better integrate large-scale, diverse Linked Data corpora. We take a test corpus of 1.1 billion RDF statements collected from 4 million RDF Web documents and analyse the use of RDFS and OWL therein. We then detail and evaluate scalable and distributed techniques for applying rule-based materialisation to translate data between different vocabularies, and to resolve coreferent resources that refer to the same thing. We show how such techniques can be made robust in the face of noisy and often impudent Web data. We also examine a use case for incorporating a PageRank-style algorithm to rank the trustworthiness of facts produced by reasoning, subsequently using those ranks to fix formal contradictions in the data. All of our methods are validated against our real-world, large-scale, open-domain Linked Data evaluation corpus.
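The core of rule-based materialisation is computing a fixpoint under inference rules. The following is a minimal sketch in Python of two RDFS rules (rdfs9 and rdfs11) over plain string triples; it is nothing like the distributed, MapReduce-scale implementation the book evaluates, and the prefixed names stand in for real RDF terms.

```python
# Naive forward-chaining materialisation for two RDFS rules:
#   rdfs11: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
#   rdfs9:  (x type A),       (A subClassOf B) => (x type B)
def materialise(triples):
    SUB, TYPE = "rdfs:subClassOf", "rdf:type"
    closed = set(triples)
    while True:
        new = set()
        for (a, p1, b) in closed:
            for (c, p2, d) in closed:
                if b == c and p2 == SUB:
                    if p1 == SUB:
                        new.add((a, SUB, d))   # rdfs11: subclass transitivity
                    elif p1 == TYPE:
                        new.add((a, TYPE, d))  # rdfs9: type propagation
        if new <= closed:                      # fixpoint reached
            return closed
        closed |= new

data = {("ex:Felix", "rdf:type", "ex:Cat"),
        ("ex:Cat", "rdfs:subClassOf", "ex:Animal"),
        ("ex:Animal", "rdfs:subClassOf", "ex:LivingThing")}
inferred = materialise(data)
assert ("ex:Felix", "rdf:type", "ex:LivingThing") in inferred
```

The quadratic pairwise scan here is exactly what does not scale; the distributed techniques the book details exist to perform this closure over billions of statements.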