Conceptual Modeling - ER 2009


Book Description

This book constitutes the refereed proceedings of the 28th International Conference on Conceptual Modeling, ER 2009, held in Gramado, Brazil, in November 2009. The 31 revised full papers, presented together with 18 demo papers, were carefully reviewed and selected from 162 submissions. The papers are organized in topical sections on conceptual modeling, requirements engineering, query approaches, space and time modeling, schema matching and integration, application contexts, process and service modeling, and an industrial session.




Handbook of Conceptual Modeling


Book Description

Conceptual modeling is about describing the semantics of software applications at a high level of abstraction in terms of structure, behavior, and user interaction. Embley and Thalheim start with a manifesto stating that the dream of developing information systems strictly by conceptual modeling – as expressed in the phrase “the model is the code” – is becoming reality. The subsequent contributions, written by leading researchers in the field, support the manifesto's assertions, showing not only how to abstractly model complex information systems but also how to formalize abstract specifications in ways that let developers complete programming tasks within the conceptual model itself. They are grouped into sections on programming with conceptual models, structure modeling, process modeling, user interface modeling, and special challenge areas such as conceptual geometric modeling, information integration, and biological conceptual modeling. The Handbook of Conceptual Modeling collects in a single volume many of the best conceptual-modeling ideas, techniques, and practices, as well as the challenges that drive research in the field. Thus it is much more than a traditional handbook for advanced professionals: it both provides a firm foundation for the field of conceptual modeling and points researchers and graduate students toward interesting challenges and paths for contributing to this fundamental field of computer science.




Linking Government Data


Book Description

Linking Government Data provides a practical approach to addressing common information management issues. The approaches taken are based on international standards of the World Wide Web Consortium. Linking Government Data weighs both the costs and benefits of using linked data techniques with government data; describes how agencies can fulfill their missions at lower cost; and recommends how intra-agency culture must change to allow public presentation of linked data. Case studies from early adopters of linked data approaches in international governments are presented in the last section of the book. Linking Government Data is designed as a professional book for those working in Semantic Web research and standards development, and for early adopters of Semantic Web standards and techniques. Enterprise architects, project managers, and application developers in commercial, not-for-profit, and government organizations concerned with the scalability, flexibility, and robustness of information management systems will also find this book valuable, as will students of computer science and business management.




Biological Knowledge Discovery Handbook


Book Description

The first comprehensive overview of preprocessing, mining, and postprocessing of biological data.

Molecular biology is undergoing exponential growth in both the volume and complexity of biological data, and knowledge discovery offers the capacity to automate complex search and data analysis tasks. This book presents a vast overview of the most recent developments in techniques and approaches in the field of biological knowledge discovery and data mining (KDD), providing in-depth fundamental and technical information on the most important topics encountered. Written by top experts, Biological Knowledge Discovery Handbook: Preprocessing, Mining, and Postprocessing of Biological Data covers the three main phases of knowledge discovery (data preprocessing, data processing – also known as data mining – and data postprocessing) and analyzes both verification systems and discovery systems.

BIOLOGICAL DATA PREPROCESSING
Part A: Biological Data Management
Part B: Biological Data Modeling
Part C: Biological Feature Extraction
Part D: Biological Feature Selection

BIOLOGICAL DATA MINING
Part E: Regression Analysis of Biological Data
Part F: Biological Data Clustering
Part G: Biological Data Classification
Part H: Association Rules Learning from Biological Data
Part I: Text Mining and Application to Biological Data
Part J: High-Performance Computing for Biological Data Mining

Combining sound theory with practical applications in molecular biology, Biological Knowledge Discovery Handbook is ideal for courses in bioinformatics and biological KDD, as well as for practitioners and professional researchers in computer science, life science, and mathematics.




Formal Ontology in Information Systems


Book Description

The complex information systems which have evolved in recent decades rely on robust and coherent representations in order to function. Such representations and associated reasoning techniques constitute the modern discipline of formal ontology, which is now applied to fields such as artificial intelligence, computational linguistics, bioinformatics, GIS, conceptual modeling, knowledge engineering, information retrieval, and the semantic web. Ontologies are increasingly employed in a number of complex real-world application domains. For instance, in biology and medicine, more and more principle-based ontologies are being developed for the description of biological and biomedical phenomena. To be effective, such ontologies must work well together, and as they become more widely used, achieving coordinated development presents a significant challenge. This book presents collected articles from the 7th International Conference on Formal Ontology in Information Systems (FOIS), held in Graz, Austria, in July 2012. FOIS is a forum which brings together representatives of all major communities involved in the development and application of ontologies to explore both theoretical issues and concrete applications in the field. The book is organized in eight sections, each dealing with the ontological aspects of a particular area: bioinformatics; physical entities; artifacts and human resources; ontology evaluation; language and social relations; time and events; representation; and the methodological aspects of ontological engineering. Providing a current overview of developments in formal ontology, this book will be of interest to all those whose work involves the application of ontologies, and to anybody wishing to keep abreast of advances in the field.




Mashups


Book Description

Mashups have emerged as an innovative software trend that re-interprets existing Web building blocks and leverages the composition of individual components in novel, value-adding ways. Additional appeal derives from their potential to turn non-programmers into developers. Daniel and Matera have written the first comprehensive reference work for mashups. They systematically cover the main concepts and techniques underlying mashup design and development, the synergies among the models involved at different levels of abstraction, and the way models materialize into composition paradigms and architectures of corresponding development tools. The book deliberately takes a balanced approach, combining a scientific perspective on the topic with an in-depth view of relevant technologies. To this end, the first part of the book introduces the theoretical and technological foundations for designing and developing mashups, as well as for designing tools that can aid mashup development. The second part then focuses more specifically on various aspects of mashups. It discusses a set of core component technologies, core approaches, and architectural patterns, with a particular emphasis on tool-aided mashup development exploiting model-driven architectures. Development processes for mashups are also discussed, and special attention is paid to composition paradigms for the end-user development of mashups and to quality issues. Overall, the book is of interest to a wide range of readers. Students, lecturers, and researchers will find a comprehensive overview of core concepts and technological foundations for mashup implementation and composition. Even without low-level coding details, practitioners such as software architects will find guidance on key implementation concepts, architectural patterns, and development tools and approaches. A related website provides additional teaching material which can be used either as part of a course or for self-study.




Linguistic Refactoring of Business Process Models


Book Description

Over the past decades, organizations have faced numerous challenges due to intensifying globalization, shorter innovation cycles, and growing IT support. Business process management is seen as a comprehensive approach to addressing these challenges. For this purpose, business process models are increasingly utilized to document and redesign relevant parts of an organization's business operations. Since organizations tend to have a huge number of such models, analysis techniques are required that ensure the quality of these process models automatically. The goal of this doctoral thesis is the development of model refactoring techniques that integrate and apply concepts from the three main branches of theoretical linguistics: syntax, semantics, and pragmatics. The syntactic refactoring technique addresses linguistic issues that arise when expressing process behavior in natural language. The semantic refactoring technique reworks terminology with overlapping and synonymous meaning. The pragmatic refactoring technique provides recommendations for incompletely specified process models. All of the presented techniques have been evaluated on real-world process model repositories from various industries to demonstrate their applicability and efficiency.




Advances in Conceptual Modeling


Book Description

This book constitutes the refereed proceedings of five workshops and a symposium held at the 36th International Conference on Conceptual Modeling, ER 2017, in Valencia, Spain, in November 2017. The 21 revised full papers were carefully reviewed and selected out of 47 submissions to the following events:

AHA 2017 - 3rd International Workshop on Modeling for Ambient Assistance and Healthy Ageing
MoBiD 2017 - 6th International Workshop on Modeling and Management of Big Data
MREBA 2017 - 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis
OntoCom 2017 - 5th International Workshop on Ontologies and Conceptual Modeling
QMMQ 2017 - 4th Workshop on Quality of Models and Models of Quality




Conquering Complexity


Book Description

Software has long been perceived as complex, at least within Software Engineering circles. We have been living in a recognised state of crisis since the first NATO Software Engineering conference in 1968. Time and again we have been proven unable to engineer reliable software as easily or cheaply as we imagined. Cost overruns and expensive failures are the norm. The problem is fundamentally one of complexity: software is complex because it must be precise. Problems that appear to be specified quite easily in plain language become far more complex when written in a more formal notation, such as computer code. Comparisons with other engineering disciplines are deceptive. One cannot easily increase the factor of safety of software in the same way that one could in building a steel structure, for example. Software is typically built assuming perfection, often without adequate safety nets in case the unthinkable happens. In such circumstances it should not be surprising that seemingly minor errors have the potential to cause entire software systems to collapse. The goal of this book is to uncover techniques that will aid in overcoming complexity and enable us to produce reliable, dependable computer systems that operate as intended, and yet are produced on time and on budget and are evolvable, both over time and at run time. We hope that the contributions in this book will aid in understanding the nature of software complexity and provide guidance for the control or avoidance of complexity in the engineering of complex software systems.




Conceptual Modeling - ER 2013


Book Description

This book constitutes the refereed proceedings of the 32nd International Conference on Conceptual Modeling, ER 2013, held in Hong Kong, China, in November 2013. The 23 full and 17 short papers presented were carefully reviewed and selected from 148 abstract and 126 full paper submissions. The papers are organized in topical sections on modeling and reasoning, fundamentals of conceptual modeling, business process modeling, network modeling, data semantics, security and optimization, ontology-based modeling, searching and mining, conceptual modeling and applications, and demonstration papers.