Purpose: New strategies for developing inhibitors of Mycobacterium tuberculosis (Mtb) are required in order to identify the next generation of tuberculosis (TB) drugs. Our approach integrates intensive data mining and curation with computational approaches, including cheminformatics combined with bioinformatics, to suggest biological targets and their small-molecule modulators. Methods: We now describe an approach that uses the TBCyc pathway and genome database, the Collaborative Drug Discovery database of molecules with activity against Mtb and their associated targets, a 3D pharmacophore approach, and Bayesian models of TB activity in order to select pathways and metabolites and ultimately prioritize molecules that may be acting as metabolite mimics and exhibit activity against TB. Results: In this study we combined the TB cheminformatics and pathways databases, which enabled us to computationally search >80,000 vendor-available molecules and ultimately test 23 compounds in vitro, resulting in two compounds (N-(2-furylmethyl)-N’-[(5-nitro-3-thienyl)carbonyl]thiourea and N-[(5-nitro-3-thienyl)carbonyl]-N’-(2-thienylmethyl)thiourea) proposed as mimics of D-fructose 1,6-bisphosphate (MICs of 20 and 40 µg/ml, respectively). Conclusion: This is a simple yet novel approach that has the potential to identify inhibitors of bacterial growth, as illustrated by the compounds identified in this study that have activity against Mtb.
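The Bayesian scoring step can be pictured with a toy sketch. The Python fragment below (entirely illustrative: the fingerprint bits, molecule encodings, and function names are ours, not those of the actual Collaborative Drug Discovery models) trains a Bernoulli naive Bayes scorer over binary substructure fingerprints and ranks a candidate molecule against a decoy:

```python
from math import log

# Toy Bernoulli naive Bayes over binary substructure fingerprints:
# each molecule is a set of fingerprint bits; we score how likely a
# molecule is to be active given a labeled training set.
def train(actives, inactives, all_bits):
    n_a, n_i = len(actives), len(inactives)
    scores = {}
    for bit in all_bits:
        p_a = (sum(bit in m for m in actives) + 1) / (n_a + 2)    # Laplace smoothing
        p_i = (sum(bit in m for m in inactives) + 1) / (n_i + 2)
        scores[bit] = (log(p_a) - log(p_i), log(1 - p_a) - log(1 - p_i))
    prior = log(n_a) - log(n_i)
    return scores, prior

def score(mol, scores, prior):
    """Log-odds that mol is active, summed over all fingerprint bits."""
    s = prior
    for bit, (on, off) in scores.items():
        s += on if bit in mol else off
    return s

actives   = [{1, 2, 3}, {1, 2, 4}, {2, 3, 5}]      # made-up training data
inactives = [{4, 5, 6}, {5, 6, 7}, {4, 6, 8}]
bits = set().union(*actives, *inactives)
model = train(actives, inactives, bits)
print(score({1, 2, 5}, *model) > score({4, 6, 7}, *model))    # → True
```

The real workflow would use thousands of substructure descriptors per molecule, but the ranking logic is the same: candidates sharing bits with known actives float to the top of the screening list.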
Before a drug can be made available to the general public, its effectiveness has to be experimentally evaluated. Experiments that involve human subjects are called Clinical Investigations (CIs). Since human subjects are involved, procedures for CIs are elaborated so that data required for validating the drug can be collected while ensuring the safety of subjects. Moreover, CIs are heavily regulated by public agencies, such as the Food and Drug Administration (FDA). Violations of regulations or deviations from procedures should be avoided as they may incur heavy penalties and more importantly may compromise the health of subjects. However, CIs are prone to human error, since CIs are carried out by the study team, which might be overloaded with other tasks, such as hospital and/or pharmacy duties, other trials, etc. In order to avoid discrepancies, we propose developing an automated assistant for helping all the parties to correctly carry out CIs as well as to detect and prevent discrepancies as early as possible. This way the proposed automated assistant would minimize error, and therefore increase the safety of the involved subjects. This paper takes the first steps towards that direction. In particular, we propose a model for collaborative systems with explicit time, called Timed Local State Transition Systems (TLSTS), and argue that it can be used for specifying procedures and regulations for CIs, which mention time explicitly. Finally we show how to implement a TLSTS specification using Maude, an existing computational tool based on rewriting.
Keywords: Formal Methods, Timed Collaborative Systems, Clinical Investigations
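The early-detection idea in the abstract above can be made concrete with a small sketch (our own illustration, not the TLSTS formalism or its Maude implementation): each protocol step must occur within a time window relative to the previous step, and a trace is flagged as soon as it deviates from the specified procedure.

```python
# Minimal sketch of checking timed procedure steps against a specification.
# The step names and windows below are hypothetical examples of a clinical
# investigation procedure; times are hours relative to the previous step.
SPEC = [
    ("obtain-consent", 0, 0),
    ("administer-dose", 0, 24),
    ("draw-blood-sample", 1, 2),
    ("record-vitals", 0, 1),
]

def check_trace(trace):
    """trace: list of (step, timestamp); returns the list of deviations."""
    deviations = []
    prev_t = trace[0][1]
    for (name, lo, hi), (step, t) in zip(SPEC, trace):
        if step != name:
            deviations.append(f"expected {name}, got {step}")
        elif not (lo <= t - prev_t <= hi):
            deviations.append(f"{step} at +{t - prev_t}h outside [{lo},{hi}]h")
        prev_t = t
    return deviations

ok  = [("obtain-consent", 0), ("administer-dose", 2),
       ("draw-blood-sample", 3.5), ("record-vitals", 4)]
bad = [("obtain-consent", 0), ("administer-dose", 2),
       ("draw-blood-sample", 8), ("record-vitals", 8.5)]
print(check_trace(ok))     # → []
print(check_trace(bad))    # blood draw 6h after dosing violates the [1,2]h window
```

A rewriting-logic implementation would additionally model the concurrent study team and let time advance symbolically, but the core check has this shape: compare a timestamped trace against explicit timing constraints.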
Today’s Internet interdomain routing protocol, the Border Gateway Protocol (BGP), is increasingly complicated and fragile due to policy misconfigurations by individual autonomous systems (ASes). These misconfigurations are often difficult to manually diagnose beyond a small number of nodes due to the state explosion problem. To aid the diagnosis of potential anomalies, researchers have developed various formal models and analysis tools. However, these techniques do not scale well or do not cover the full set of anomalies. Current techniques use oversimplified BGP models that capture either anomalies within or across ASes, but not the interactions between the two. To address these limitations, we propose a novel approach that reduces network size prior to analysis, while preserving crucial BGP correctness properties. Using Maude, we have developed a toolkit that takes as input a network instance consisting of ASes and their policy configurations, and then performs formal analysis on the reduced instance for safety (protocol convergence). Our results show that our reduction-based analysis allows us to analyze significantly larger network instances at low reduction overhead.
Inter-domain routing stitches the disparate parts of the Internet together, making protocol stability a critical issue to both researchers and practitioners. Yet, researchers create safety proofs and counter-examples by hand, and build simulators and prototypes to explore protocol dynamics. Similarly, network operators analyze their router configurations manually, or using home-grown tools. In this paper, we present a comprehensive toolkit for analyzing and implementing routing policies, ranging from high-level guidelines to specific router configurations. Our Formally Safe Routing (FSR) toolkit performs all of these functions from the same algebraic representation of routing policy. We show that routing algebra has a natural translation to both integer constraints (to perform safety analysis with SMT solvers) and declarative programs (to generate distributed implementations). Our extensive experiments with realistic topologies and policies show how FSR can detect problems in an AS’s iBGP configuration, prove sufficient conditions for BGP safety, and empirically evaluate convergence time.
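The translation of routing preferences into integer constraints can be illustrated with a toy sketch (our own simplification; FSR's actual routing algebra and SMT encoding are considerably richer): each strict route preference becomes an ordering constraint on integer ranks, and the constraint set is satisfiable exactly when the preference graph is acyclic. A cycle of preferences corresponds to a potential routing oscillation.

```python
# Each strict preference "route a over route b" becomes rank(a) < rank(b).
# The constraints admit an integer solution iff the preference graph is a
# DAG, which we check with Kahn's topological-sort algorithm.
def satisfiable(prefs):
    """prefs: list of (better, worse) route pairs."""
    nodes = {n for p in prefs for n in p}
    indeg = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for a, b in prefs:
        succ[a].append(b)
        indeg[b] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)       # all nodes ordered: no preference cycle

# Consistent preferences are satisfiable...
print(satisfiable([("r1", "r2"), ("r2", "r3")]))                 # → True
# ...but cyclic preferences are not (an oscillation-prone instance):
print(satisfiable([("r1", "r2"), ("r2", "r3"), ("r3", "r1")]))   # → False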
This chapter contains information on the following concepts: Hierarchical Modeling and Reasoning in Cyber-Physical Systems, Adaptive Middleware for Cyber-Physical Spaces, Enabling Scalability in Cyber-Physical Spaces, Dependability in Sentient Spaces, and Privacy in Pervasive Spaces.
In this paper, we present a formal model, named PobSAM (Policy-based Self-Adaptive Model), for developing and modeling self-adaptive evolving systems. In this model, policies are used as a mechanism to direct and adapt the behavior of self-adaptive systems. A PobSAM model is a collection of autonomous managers and managed actors. The managed actors are dedicated to the functional behavior while the autonomous managers govern the behavior of managed actors by enforcing suitable policies. A manager has a set of configurations including two types of policies: governing policies and adaptation policies. To adapt system behavior in response to the changes, the managers switch among different configurations. We employ the combination of an algebraic formalism and an actor-based model to specify this model formally. Managed actors are expressed by an actor model. Managers are modeled as meta-actors whose configurations are described using a multi-sorted algebra called CA. We provide an operational semantics for PobSAM using labeled transition systems. Furthermore, we provide behavioral equivalence of different sorts of CA in terms of splitting bisimulation and prioritized splitting bisimulation. Equivalent managers send the same set of messages to the actors. Using our behavioral equivalence theory, we can prove that the overall behavior of the system is preserved by substituting a manager by an equivalent one.
Analyzing Border Gateway Protocol (BGP) instances is a crucial step in the design and implementation of safe BGP systems. Today, the analysis is a manual and tedious process. Researchers study the instances by manually constructing execution sequences, hoping to either identify an oscillation or show that the instance is safe by exhaustively examining all possible sequences. We propose to automate the analysis by using Maude, a tool based on rewriting logic. We have developed a library specifying a generalized path vector protocol, and methods to instantiate the library with customized routing policies. Protocols can be analyzed automatically by Maude, once users provide specifications of the network topology and routing policies. Using our Maude library, protocols or policies can be easily specified and checked for problems. To validate our approach, we performed safety analysis of well-known BGP instances and actual routing configurations.
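A brute-force version of such an analysis fits in a few lines of Python (a deliberately tiny illustration, not our Maude library): for a small path-vector instance we enumerate all path assignments and check whether any is stable. An instance with no stable assignment cannot converge, so it must oscillate; the permitted paths below encode a classic BAD-GADGET-style example around origin 0.

```python
from itertools import product

# Each node ranks its permitted paths to origin 0, best first. Every node
# prefers routing through its clockwise neighbour over its direct link.
PERMITTED = {
    1: [(1, 2, 0), (1, 0)],
    2: [(2, 3, 0), (2, 0)],
    3: [(3, 1, 0), (3, 0)],
}

def best_response(node, choice):
    """Most preferred permitted path consistent with neighbours' choices."""
    for path in PERMITTED[node]:
        if len(path) == 2:                 # direct route, always available
            return path
        if choice[path[1]] == path[1:]:    # neighbour actually uses the suffix
            return path
    return None

def stable_states():
    nodes = sorted(PERMITTED)
    for combo in product(*(PERMITTED[n] for n in nodes)):
        choice = dict(zip(nodes, combo))
        if all(best_response(n, choice) == choice[n] for n in nodes):
            yield choice

print(list(stable_states()))    # → [] : no stable state, so the instance oscillates
```

The Maude encoding replaces this exhaustive enumeration with rewriting-logic search over execution sequences, which also reveals *how* the oscillation unfolds rather than just that no fixed point exists.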
Resource-limited mobile embedded systems can benefit greatly from dynamic adaptation of system parameters. We propose a novel approach that employs iterative tuning using light-weight formal verification at runtime with feedback for dynamic adaptation. One objective of this approach is to enable tradeoff analysis across multiple layers (e.g., application, middleware, OS) and predict the possible property violations as the system evolves dynamically over time. Specifically, an executable formal specification is developed for each layer of the mobile system under consideration. The formal specification is then analyzed using statistical property checking and statistical quantitative analysis, to determine the impact of various resource management policies for achieving desired timing/QoS properties. Integration of formal analysis with dynamic behavior from system execution results in a feedback loop that enables model refinement and further optimization of policies and parameters. We demonstrate the applicability of this approach to the adaptive provisioning of resource-limited distributed real-time systems using a mobile multimedia case study.
Keywords: Cross-Layer Optimization, Iterative Tuning, Formal Methods
Antibiotic discovery aimed at conventional targets such as proteins and nucleic acids faces challenges from mutations and antibiotic resistance. Small molecule metabolites, however, can be considered resistant to change, as they do not undergo rapid mutations. Developing analogs or scavengers of essential microbial metabolites as antibiotics is a promising strategy that can delay drug resistance. The objective of this work was to identify microbial metabolites that are most suitable targets for antimicrobial discovery. We performed extensive literature mining and systems level pathway analysis to identify bacterial metabolites that fulfill the criteria for drug targets. The BioCyc interactive metabolic pathway maps and Pathway Tools software were used to corroborate our findings. We identified ten metabolites as potential candidates for developing novel antibiotics. These metabolites are Lipid II, meso-diaminopimelate, pantothenate, shikimate, biotin, L-aspartyl-4-phosphate, dTDP-α-L-rhamnose, UDP-D-galacto-1,4-furanose, des-N-acetyl mycothiol, and siroheme. The article describes the selection criteria, analysis of metabolic pathways, and the potential role of each of the ten metabolites in therapeutic intervention as broad-spectrum antibiotics with emphasis on M. tuberculosis. Our study revealed previously unexplored targets along with metabolites that are well established in antibiotic discovery. Identification of established metabolites strengthens our analyses, while the newly discovered metabolites could lead to novel antimicrobials.
A distributed logical framework designed to serve as a declarative semantic foundation for Networked Cyber-Physical Systems provides notions of facts and goals that include interactions with the environment via external goal requests, observations that generate facts, and actions that achieve goals. Reasoning rules are built on a partially ordered knowledge-sharing model for loosely coupled distributed computing. The logic supports reasoning in the context of dynamically changing facts and system goals. It can be used both to program systems and to reason about possible scenarios and emerging properties. The underlying reasoning framework is specified in terms of constraints that must be satisfied, making it very general and flexible. Inference rules for an instantiation to a specific local logic (Horn clause logic) are given as a concrete example. The key novel features are illustrated with snippets from an existing application—a theory for self-organizing robots performing a distributed surveillance task. Traditional properties of logical inference and computation are reformulated in this novel context, and related to features of system design and execution. Proofs are outlined for key properties corresponding to soundness, completeness, and termination. Finally, the framework is compared to other formal systems addressing concurrent/distributed computation.
Keywords: Distributed declarative logic, partially ordered knowledge, networked cyber-physical systems.
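The partially ordered knowledge-sharing model underlying the framework can be sketched in miniature (the field names and replacement rule here are our own illustration): knowledge units carry a replacement ordering, and merging two nodes' caches keeps only the maximal units, so stale knowledge is discarded without any global coordination.

```python
# Toy partially ordered knowledge sharing: a newer timestamped reading of
# the same sensor replaces an older one; merging keeps maximal units only.
def replaces(a, b):
    """Unit a replaces unit b if it is a newer reading of the same sensor."""
    return a[0] == b[0] and a[1] > b[1]        # (sensor, timestamp, value)

def merge(kb1, kb2):
    pool = set(kb1) | set(kb2)
    return {u for u in pool if not any(replaces(v, u) for v in pool)}

robot_a = {("temp", 1, 20.0), ("door", 5, "open")}
robot_b = {("temp", 3, 22.5), ("cam", 2, "person")}
shared = merge(robot_a, robot_b)
print(sorted(shared))
# the stale ("temp", 1, 20.0) reading is superseded by ("temp", 3, 22.5)
```

Because `merge` is commutative, associative, and idempotent under the ordering, nodes can exchange caches opportunistically and in any order, which is what makes the model suitable for loosely coupled, intermittently connected systems.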
Document Logic is a simple yet powerful framework to infer risks in business processes. We focus on flows of documents and build a set of inference rules based on document authenticity and a simple trust model. We have built a prototype of a system that checks document authenticity in Maude, an implementation of rewriting logic. Rewriting logic is expressive and general enough to define other specialized logics, like Document Logic. In our framework, a business process is modeled as a transition system. Our prototype takes a business process and an undesired situation as its input and outputs all the possible risks in the business process that could lead to the undesired situation.
An emerging generation of mission-critical systems employs distributed, dynamically reconfigurable open architectures. These systems may include a variety of devices that sense and affect their environment and the configuration of the system itself. We call such systems Networked Cyber-Physical Systems (NCPS). NCPS can provide complex, situation-aware, and often critical services in applications such as distributed sensing and surveillance, crisis response, self-assembling structures or systems, networked satellite and unmanned vehicle missions, or distributed critical infrastructure monitoring and control. In this paper we lay out research directions centered around a new paradigm for the design of NCPS based on a notion of software fractionation that we are currently exploring, which can serve as the basis for a new generation of runtime assurance techniques. The idea of software fractionation is inspired by and complementary to hardware fractionation, the basis for the fractionated satellites of DARPA's F6 program. Fractionated software has the potential of leading to software that is more robust, leveraging both diversity and redundancy. It raises the level of abstraction at which assurance techniques are applied. We specifically propose research in just-in-time verification and validation techniques, which are agile (adapting to changing situations and requirements) and efficient (focusing on properties of immediate concern in the context of locally reachable states, thus largely avoiding the state space explosion problem). We propose an underlying reflective architecture that maintains models of itself, the environment, and the mission that is key for adaptation, verification, and validation.
To explore the role of proteases in pathogenesis and as potential drug targets, we need to elucidate their function and effect on biological networks. In this paper, we describe the application of Pathway Logic (PL) (http://pl.csl.sri.com/) to the symbolic modeling of the interaction networks of proteases of Gram-positive bacteria, and the use of the Pathway Logic Assistant tool to browse and query these models. Pathway Logic is a systems biology approach, based on formal methods and rewriting logic, that treats biological processes as integrated systems rather than isolated parts. These models are developed using Maude, a formal language and tool set based on rewriting logic. We show how this approach can be used to represent and analyze systems at multiple levels of detail. The Pathway Logic Assistant tool (PLA) enables us to identify key proteases and regulatory molecules (‘choke points’) by comparing different pathways or networks within and across species, and to predict how these molecules, if inhibited or avoided, would affect the pathway or network.
The xTune framework employs iterative tuning using light-weight formal verification at runtime with feedback for dynamic adaptation of mobile real-time embedded systems. To enable trade-off analysis across multiple layers of abstraction and predict the possible property violations as the system evolves dynamically over time, an executable formal specification is developed for each layer of the system under consideration. The formal specification is then analyzed using statistical analysis, to determine the impact of various policies for achieving a variety of end-to-end properties in a quantifiable manner. The integration of formal analysis with dynamic behavior from system execution results in a feedback loop that enables model refinement and further optimization of policies and parameters. Finally, we propose a composition method for coordinated interaction of optimizers at different abstraction layers. The core idea of our approach is that each participating optimizer can restrict its own parameters and exchange refined parameters with its associated layers. We also introduce sample application domains for future research directions.
We are witnessing the growing menace of increasing cases of both drug-sensitive and drug-resistant Mycobacterium tuberculosis strains, together with the challenge of producing the first new tuberculosis (TB) drug in well over 40 years. The TB community, having invested in extensive high-throughput screening efforts, is faced with the question of how to optimally leverage these data to move from a hit to a lead to a clinical candidate and, potentially, a new drug. Complementing this approach, yet conducted on a much smaller scale, cheminformatic techniques have been leveraged and are examined in this review. We suggest that these computational approaches should be optimally integrated within a workflow with experimental approaches to accelerate TB drug discovery.
This paper introduces some novel features of Maude 2.6 focusing on the variants of a term. Given an equational theory (Σ, Ax ∪ E), the E,Ax-variants of a term t are understood as the set of all pairs consisting of a substitution σ and the E,Ax-canonical form of tσ. The equational theory Ax ∪ E has the finite variant property iff, for each term, there is a finite and complete set of most general variants. We have added support in Maude 2.6 for: (i) order-sorted unification modulo associativity, commutativity, and identity, (ii) variant generation, (iii) order-sorted unification modulo finite variant theories, and (iv) narrowing-based symbolic reachability modulo finite variant theories. We also explain how these features have a number of interesting applications in areas such as unification theory, cryptographic protocol verification, business processes, and proofs of termination, confluence, and coherence.
Keywords: Rewriting logic, narrowing, unification, variants
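As a point of comparison, plain syntactic unification (the problem without any equational axioms) can be sketched in a few lines of Python. Unification modulo axioms such as associativity-commutativity, which Maude supports, is substantially harder and can yield many most general unifiers; the sketch below (our own illustration, Robinson-style, and without an occurs check for brevity) handles only the syntactic base case.

```python
# Terms: variables are strings starting with '?'; applications are tuples
# (symbol, arg1, arg2, ...). A substitution is a dict from variables to terms.
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a unifying substitution, or None if none exists."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("?"):
        s[a] = b                       # bind variable (no occurs check)
        return s
    if isinstance(b, str) and b.startswith("?"):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                        # symbol clash: not unifiable

# f(?X, g(a)) =? f(b, ?Y)
subst = unify(("f", "?X", ("g", "a")), ("f", "b", "?Y"))
print(subst)      # → {'?X': 'b', '?Y': ('g', 'a')}
```

Under associativity-commutativity the argument-by-argument zip above is no longer valid (arguments may match in any grouping and order), which is why equational unification needs the dedicated algorithms built into Maude's rewriting engine.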
Situation- and resource-aware security is essential for the process automation systems, composed of networked entities with sensors and actuators, that monitor and control the national critical infrastructure. However, security cannot be addressed at a single layer because of the inherent dependencies and tradeoffs among crosscutting concerns. Techniques applied at one layer to improve security affect security, timing, and power consumption at other layers. This paper argues for an integrated treatment of security across multiple layers of abstraction (application, middleware, operating system including network stack, and hardware). An important step in realizing this integrated treatment of situation- and resource-aware security is first understanding the cross-layer interactions between security policies and then exploiting these interactions to design efficient adaptation strategies (i) to balance security, quality of service, and energy needs, and (ii) to maximize system availability. We propose a novel approach that employs a compositional method within an iterative tuning framework based on lightweight formal methods with dynamic adaptation.
Mobility and intermittent connectivity inject inaccuracy into determining group membership and increase the time required to agree on the current group membership. In this paper, we present a group membership service based on partial member connectivity that allows members to agree on a shared approximation of the group membership based on local neighborhood connectivity. In particular, the consistency needs of the application determine the degree of consistency of the membership service and allow the membership service to tailor the neighborhood service in terms of fidelity ratio and detection time period.
Networked Cyber-Physical Systems (NCPS) present many challenges that are not suitably addressed by existing distributed computing paradigms. They must be reactive and maintain an overall situation awareness that emerges from partial distributed knowledge. They must achieve system goals through local, asynchronous actions, using (distributed) control loops through which the environment provides essential feedback. Typical NCPS are open, dynamic, and heterogeneous in many dimensions, and often need to be rapidly instantiated and deployed for a given mission. To address these challenges, we pursue a declarative approach to provide an abstraction from the high complexity of NCPS and avoid error-prone and time-consuming low-level programming. A longer-term goal is to develop a distributed computational and logical foundation that supports a wide spectrum of system operation between autonomy and cooperation to adapt to resource constraints, in particular to limitations of computational, energy, and networking resources. Here, we present first steps toward a logical framework for NCPS that combines distributed reasoning and asynchronous control in space and time. The logical framework is based on partially ordered knowledge sharing, a distributed computing paradigm for loosely coupled systems that does not require continuous network connectivity. We illustrate our approach with a simulation prototype of our logical framework in the context of networked mobile robot teams that operate in an abstract instrumented cyber-physical space with sensors.
Pathway Logic (PL) is an approach to modeling and analysis of biological processes based on rewriting logic. This chapter describes the use of PL to model signal transduction processes. It begins with a general discussion of Symbolic Systems Biology, followed by some background on rewriting logic and executable specifications. Finally, representation and analysis of a small model of Ras and Raf activation is presented in some detail.
Sleep disorders affect 70 million Americans, a figure that recent data suggest is likely an underestimate. Though sleep is considered an essential behavior, its biological function is not known, which has limited the development of safe and effective therapies. We hypothesize that sleep serves some sort of restorative function that affects the entire organism. We propose to model activities associated with sleep using a “bottom-up” approach that utilizes computational modeling to link sleep-related protein activities. In this paper, we show how data from our investigation of protein activities correlate with sleep-wake state in the cortex of rats. We believe that understanding the underpinnings of sleep at all levels of the body’s organizational hierarchy holds great promise for the future of neuroergonomics research and practice.
Keywords: Sleep-wake behavior, biological timing, cellular underpinnings of sleep, interactions across brain regions, organ-organ interactions, protein expression, sleep deprivation, neuroergonomics, health maintenance.
Three models of coordination—Reo, Actors-Roles-Coordinators (ARC), and Reflective Russian Dolls (RRD)—are compared and contrasted according to a set of coordination features. Mappings between their semantic models are defined. Use of the models is illustrated by a small case study.
Keywords: coordination model, semantics, Reo, Actor, Role, Reflective Russian Dolls
Document Logic is a simple yet powerful framework to infer risks in business processes. We focus on flows of documents and build a set of inference rules based on document authenticity and a simple trust model. We have built a prototype of a system that checks document authenticity in Maude, an implementation of rewriting logic. Rewriting logic is expressive and general enough to define other specialized logics, like Document Logic. In our framework, a business process is modeled as a transition system. Our prototype takes a business process and an undesired situation as its input and outputs all the possible risks in the business process.
Maude is a high-performance reflective language and system supporting both equational and rewriting logic specification and programming for a wide range of applications, and has a relatively large worldwide user and open-source developer base. This paper introduces novel features of Maude 2.4 including support for unification and narrowing. Unification is supported in Core Maude, the core rewriting engine of Maude, with commands and metalevel functions for order-sorted unification modulo some frequently occurring equational axioms. Narrowing is currently supported in its Full Maude extension. We also give a brief summary of the most important features of Maude 2.4 that were not part of Maude 2.0 and earlier releases. These features include communication with external objects, a new implementation of its module algebra, and new predefined libraries. We also review some new Maude applications.
In this paper we present the Pathway Logic System from the point of view of how a bench biologist might use it to understand experimental results, and develop theories and testable hypotheses about the system they are studying.
In this paper, we present a formal model named PobSAM (Policy-based Self-Adaptive Model) for modeling self-adaptive systems. In this model, policies are used as a mechanism to direct and adapt the behavior of self-adaptive systems. A PobSAM model consists of a set of self-managed modules (SMMs). An SMM is a collection of autonomous managers and managed actors. Managed actors are dedicated to functional behavior while autonomous managers govern the behavior of managed actors by enforcing suitable policies. To adapt SMM behavior in response to changes, policies governing an SMM are adjusted, i.e., dynamic policies are used to govern and adapt system behavior. We employ the combination of an algebraic formalism and an actor-based model to specify this model formally. Managers are modeled as meta-actors whose policies are described using an algebra. Managed actors are expressed by an actor model. Furthermore, we provide an operational semantics for PobSAM described using labeled transition systems.
This paper presents a graphical user interface (GUI) for the Maude-NPA, a crypto protocol analysis tool that takes into account algebraic properties of cryptosystems not supported by other tools, such as cancellation of encryption and decryption, Abelian groups (including exclusive or), and modular exponentiation. Maude-NPA has a theoretical basis in rewriting logic, unification and narrowing, and performs backwards search from a final attack state to determine whether or not it is reachable from an initial state. The GUI animates the Maude-NPA verification process, displaying the complete search tree and allowing users to display graphical representations of final and intermediate nodes of the search tree. One of the most interesting points of this work is that our GUI has been developed using the framework for declarative graphical interaction associated with Maude that includes IOP, IMaude and JLambda. This framework facilitates the interaction and interoperation between formal reasoning tools (Maude-NPA in our case) and allows Maude to communicate easily with other tools.
Keywords: Graphical user interface, Maude-NPA, IOP, IMaude, JLambda, Maude
Background: Cancer is a heterogeneous disease resulting from the accumulation of genetic defects that negatively impact control of cell division, motility, adhesion and apoptosis. Deregulation in signaling along the EgfR-MAPK pathway is common in breast cancer, though the manner in which deregulation occurs varies between both individuals and cancer subtypes. Results: We were interested in identifying subnetworks within the EgfR-MAPK pathway that are similarly deregulated across subsets of breast cancers. To that end, we mapped genomic, transcriptional and proteomic profiles for 30 breast cancer cell lines onto a curated Pathway Logic symbolic systems model of EgfR-MAPK signaling. This model was composed of 539 molecular states and 396 rules governing signaling between active states. We analyzed these models and identified several subtype-specific subnetworks, including one that suggested Pak1 is particularly important in regulating the MAPK cascade when it is over-expressed. We hypothesized that Pak1 over-expressing cell lines would have increased sensitivity to Mek inhibitors. We tested this experimentally by measuring quantitative responses of 20 breast cancer cell lines to three Mek inhibitors. We found that Pak1 over-expressing luminal breast cancer cell lines are significantly more sensitive to Mek inhibition compared to those that express Pak1 at low levels. This indicates that Pak1 over-expression may be a useful clinical marker to identify patient populations that may be sensitive to Mek inhibitors. Conclusions: Altogether, our results support the utility of symbolic system biology models for identification of therapeutic approaches that will be effective against breast cancer subsets.
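The kind of reachability query such a rule-based model supports can be sketched in Python (the species and rules below are made-up miniatures for illustration; the real EgfR-MAPK model has hundreds of states and rules): a state is a set of active molecular species, rules consume and produce species, and breadth-first search asks whether a target species can become active.

```python
from collections import deque

# Toy rule base in the Pathway Logic style: (preconditions, postconditions).
RULES = [
    ({"Egf", "EgfR"}, {"EgfR-act"}),
    ({"EgfR-act", "Ras"}, {"EgfR-act", "Ras-GTP"}),
    ({"Ras-GTP", "Raf"}, {"Ras-GTP", "Raf-act"}),
    ({"Pak1", "Raf"}, {"Pak1", "Raf-act"}),       # alternative activation route
]

def reachable(start, goal):
    """Can every species in goal become simultaneously active from start?"""
    seen, frontier = {frozenset(start)}, deque([frozenset(start)])
    while frontier:
        state = frontier.popleft()
        if goal <= state:
            return True
        for pre, post in RULES:
            if pre <= state:                       # rule applies: rewrite state
                nxt = frozenset((state - pre) | post)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

print(reachable({"Egf", "EgfR", "Ras", "Raf"}, {"Raf-act"}))   # → True
print(reachable({"Ras", "Raf"}, {"Raf-act"}))                  # → False
```

Comparing which rules lie on paths to a goal state under different initial conditions (e.g., with or without over-expressed Pak1) is essentially how subtype-specific subnetworks are extracted from the full model.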
We present a discrete formal model of the central pattern generator (CPG) located in the buccal ganglia of the sea slug Aplysia that is responsible for mediating the rhythmic movements of its foregut during feeding. Our starting point is the continuous dynamical model for pattern generation underlying fictive feeding in Aplysia proposed by Baxter et al. (CMSB 2006). The discrete model is obtained as a composition of discrete models of ten individual neurons in the CPG. The individual neurons are interconnected through excitatory and inhibitory synaptic connections and electrical connections. We used the Symbolic Analysis Laboratory (SAL) to formally build the model and analyzed it using the SAL model checkers. Using abstract discrete models of the individual neurons helps in understanding the buccal motor programs generated by the network in terms of the network connection topology. It also eliminates the need for detailed knowledge of the unknown parameters in the continuous model of Baxter et al. (CMSB 2006).
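A minimal sketch shows how composing discrete neuron models can produce rhythm from topology alone (the three-neuron ring below is our toy illustration, not the ten-neuron Aplysia CPG, though B63, B31 and B64 are real buccal neuron names): each neuron is on or off, and its next state is a threshold function of its excitatory and inhibitory inputs.

```python
# Excitatory and inhibitory synaptic topology of a toy three-neuron ring.
EXCITES  = {"B63": ["B31"], "B31": ["B64"], "B64": []}
INHIBITS = {"B63": [], "B31": [], "B64": ["B63", "B31"]}

def step(state):
    """Synchronous update: fire next iff net input is excitatory."""
    nxt = {}
    for n in state:
        exc = sum(state[m] for m in EXCITES if n in EXCITES[m])
        inh = sum(state[m] for m in INHIBITS if n in INHIBITS[m])
        drive = 1 if n == "B63" else 0     # B63 is tonically active when uninhibited
        nxt[n] = 1 if exc + drive - inh > 0 else 0
    return nxt

state = {"B63": 1, "B31": 0, "B64": 0}
trace = [tuple(state.values())]
for _ in range(6):
    state = step(state)
    trace.append(tuple(state.values()))
print(trace)    # the network cycles: activity returns to the initial state after five steps
```

In SAL the same idea is expressed as a synchronous composition of neuron modules, and the model checkers can then verify temporal properties of the rhythm (e.g., that B64 never fires without prior B31 activity) instead of inspecting a single simulated trace.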
This paper presents a methodology for the formal specification of agent-object oriented programs. Agent-object oriented programming is a programming paradigm that integrates both agent-oriented programming and object-oriented programming (e.g., see Jack, Jadex). Although there are several formal specification frameworks and methodologies both for agent-oriented programs and for object-oriented programs, nothing exists for agent-object programming. In this paper, the rewriting logic language Maude has been used as a formal framework. This opens up the possibility of using the wide spectrum of formal modeling and reasoning supported by Maude: analyzing agent-object programs by means of execution, search, model checking, or theorem proving to verify properties of a given program such as goal satisfaction and plan termination.
Keywords: Formal Frameworks and Methodologies for Collaboration, Intelligent and Autonomous Agents in Collaboration
Pathway Logic (PL) is an approach to modeling and analysis of biological processes based on rewriting logic. This tutorial describes the use of PL to model signal transduction processes. It begins with a general discussion of Symbolic Systems Biology, followed by some background on rewriting logic and signal transduction. The representation and analysis of a small model of Ras and Raf activation is presented in some detail. This is followed by discussion of a curated model of early signaling events in response to Epidermal Growth Factor stimulation.
Keywords: Symbolic systems biology, rewriting logic, signal transduction, Pathway Logic, Epidermal Growth Factor signaling
We give an overview of algorithms that we have been developing in the DARPA Disruption-Tolerant Networking program, which aims at improving communication in networks with intermittent and episodic connectivity. Thanks to the use of network caching, this can be accomplished without the need for a simultaneous end-to-end path that is required by traditional Internet and mobile ad-hoc network (MANET) protocols. We employ a disciplined two-level approach that clearly distinguishes the dissemination of application content from the dissemination of network-related knowledge, each of which can be supported by different algorithms. Specifically, we present probabilistic reflection, a single-message protocol enabling the dissemination of knowledge in strongly disrupted networks. For content dissemination, we present two approaches, namely a symbolic planning algorithm that exploits partially predictable temporal behavior, and a distributed and disruption-tolerant reinforcement learning algorithm that takes into account feedback about past performance.
The dynamicity, openness, and stringent quality-of-service requirements of asynchronous and open distributed systems pose grand challenges to modeling and developing such systems. The Actor-Role-Coordinator (ARC) model was previously proposed to address these challenges. The role concept in the model attends to the dynamicity and openness issues by providing abstractions of actor behaviors. In this paper, we focus on coordinating actors and roles through message manipulations based on synchronous event-based timing constraints. In addition, different types of timing constraints are generalized into a semiring-based constraint structure, and the all-pairs extremal paths algorithm on closed semirings is applied to derive the most stringent constraints that are logical implications of the original set of constraints. The derived implicit constraints are further used to test constraint inclusion and to decide intersections between feasible regions of timing constraint sets. The integration of the ARC model and the semiring-based timing constraint models is prototyped in Maude, a rewriting logic language. We further use the approach to solve the Restaurant for Dining Philosophers problem and illustrate the expressiveness of the ARC and semiring-based timing constraint models for exogenous and composable coordination of open systems.
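The all-pairs extremal paths computation mentioned above is a generalized Floyd-Warshall iteration over a closed semiring. A minimal sketch over the (min, +) semiring, with an invented three-event example, shows how implicit constraints tighter than the stated ones are derived:

```python
def all_pairs_extremal(d, plus=min, times=lambda a, b: a + b):
    """Closure of a constraint matrix over a closed semiring.

    d[i][j] is the weight of the constraint t_j - t_i <= d[i][j]
    (float('inf') when no direct constraint is given).
    """
    n = len(d)
    c = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # combine the best known path i -> j with the path through k
                c[i][j] = plus(c[i][j], times(c[i][k], c[k][j]))
    return c

INF = float("inf")
# three events with t1 - t0 <= 5, t2 - t1 <= 3, t2 - t0 <= 10
m = [[0, 5, 10],
     [INF, 0, 3],
     [INF, INF, 0]]
tight = all_pairs_extremal(m)
# the derived implicit constraint t2 - t0 <= 8 is stronger than the stated 10
```

Swapping in a different `plus`/`times` pair instantiates the same loop for other closed semirings, which is the generalization the abstract refers to.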
Resource limited DRE (Distributed Real-time Embedded) systems can benefit greatly from dynamic adaptation of system parameters. We propose a novel approach that employs iterative tuning using lightweight, on-the-fly formal verification with feedback for dynamic adaptation. One objective of this approach is to enable system designers to analyze designs in order to study design tradeoffs across multiple layers (for example, application, middleware, operating system) and predict the possible property violations as the system evolves dynamically over time. Specifically, an executable formal specification is developed for each layer of the distributed system under consideration. The formal specification is then analyzed using statistical model checking and statistical quantitative analysis, to determine the impact of various resource management policies for achieving desired end-to-end timing/QoS properties. Finally, integration of formal analysis with dynamic behavior from system execution results in a feedback loop that enables model refinement and further optimization of policies and parameters. We demonstrate the applicability of this approach to the adaptive provisioning of resource-limited distributed real-time systems using a multi-mode multimedia case study.
Keywords: Iterative System Tuning, Formal Modeling, Statistical Formal Methods, System Realization, Cross-layer Timing/QoS/resource Provisioning for Distributed Systems
We present a novel approach, based on probabilistic formal methods, to developing cross-layer resource optimization policies for resource limited distributed systems. One objective of this approach is to enable system designers to analyze designs in order to study design trade-offs and predict possible property violations as the system evolves dynamically over time. Specifically, an executable formal specification is developed for each layer under consideration (for example, application, middleware, operating system). The formal specification is then analyzed using statistical model checking and statistical quantitative analysis, to determine the impact of various resource management policies for achieving desired end-to-end QoS properties. We describe how existing statistical approaches have been adapted and improved to provide analyses of given cross-layer optimization policies with quantifiable confidence. The ideas are tested in a multi-mode multimedia case study. Experiments from both theoretical analysis and Monte-Carlo simulation followed by statistical analyses demonstrate the applicability of this approach to the design of resource-limited distributed systems.
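The core of the statistical quantitative analysis step can be illustrated with a Monte-Carlo sketch. This is not the paper's implementation: it uses Hoeffding's inequality to pick a sample size, and the "system run" below is an invented stand-in for one execution of the formal specification.

```python
import math
import random

def required_samples(eps, delta):
    # Hoeffding's inequality: n >= ln(2/delta) / (2 * eps^2) samples
    # estimate a probability within eps with confidence 1 - delta
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def estimate_probability(run_once, eps, delta, rng):
    n = required_samples(eps, delta)
    successes = sum(1 for _ in range(n) if run_once(rng))
    return successes / n

# invented toy "run": an end-to-end deadline is met when simulated
# latency stays under 100 ms
def deadline_met(rng):
    latency = rng.gauss(80.0, 15.0)
    return latency < 100.0

p = estimate_probability(deadline_met, eps=0.05, delta=0.01,
                         rng=random.Random(0))
# p estimates P(deadline met) within 0.05 at 99% confidence
```

The quantifiable-confidence claim in the abstract corresponds to the (eps, delta) pair: tightening either one increases the required number of simulation runs.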
Keywords: Probabilistic Formal Methods, Statistical Analysis, Cross-layer Optimization, Resource Management
This paper proposes a formal framework and architecture for specification and analysis of interactive agents. The framework can be used to explore the design space, study features of different points in the design space, and to develop executable specifications of specific agents and study their interactions with the environment. A long term goal is development of reasoning principles specialized to different regions of the design space.
Keywords: interaction, coordination, distributed object reflection, policy, autonomy
Three models of coordination—Reo, Actors-Roles-Coordinators (ARC), and Reflective Russian Dolls (RRD)—are compared and contrasted according to a set of coordination features. Mappings between their semantic models are defined. Use of the models is illustrated by a small case study.
Keywords: coordination model, semantics, Reo, Actor, Role, Reflective Russian Dolls
This book gives a comprehensive account of Maude, a language and system based on rewriting logic. Many examples are used throughout the book to illustrate the main ideas and features of Maude, and its many possible uses. Maude modules are rewrite theories. Computation with such modules is efficient deduction by rewriting. Because of its logical basis and its initial model semantics, a Maude module defines a precise mathematical model. This means that Maude and its formal tool environment can be used in three, mutually reinforcing ways: (1) as a declarative programming language; (2) as an executable formal specification language; and (3) as a formal verification system. Maude’s rewriting logic is simple, yet very expressive. This gives Maude good representational capabilities as a semantic framework to formally represent a wide range of systems, including models of concurrency, distributed algorithms, network protocols, semantics of programming languages, and models of cell biology. Rewriting logic is also an expressive universal logic, making Maude a flexible logical framework in which many different logics and inference systems can be represented and mechanized. This makes Maude a useful metatool to build many other tools, including those in its own formal tool environment. Thanks to the logic’s simplicity and the use of advanced semi-compilation techniques, Maude has a high-performance implementation, making it competitive with other declarative programming languages. The introduction (Chapter 1) gives a high-level overview of Maude’s main concepts and underlying philosophy, and of its various applications.
Since this book gives a very complete account of Maude in its various aspects, Section 1.7 gives specific suggestions on different reading “paths” within the book that can be chosen depending on the degree of prior familiarity with the main ideas and the intended uses, for example, as a programming language or as a formal specification and verification tool.
Keywords: rewriting logic, Maude, executable specification, declarative language
This paper presents a study of possible extensions of Pathway Logic to represent and reason about semiquantitative and probabilistic aspects of biological processes. The underlying theme is the annotation of reaction rules with affinity information that can be used in different simulation strategies. Several such strategies were implemented, and experiments carried out to test feasibility, and to compare results of different approaches. Dimerization in the ErbB signalling network, important in cancer biology, was used as a test case.
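One of the simulation strategies suggested by such affinity annotations is to select among applicable rules with probability proportional to their affinities. A minimal sketch, with invented dimerization rule names and invented affinity values:

```python
import random

def weighted_choice(rules, rng):
    """Pick a rule with probability proportional to its affinity annotation."""
    total = sum(aff for _, aff in rules)
    x = rng.uniform(0, total)
    acc = 0.0
    for name, aff in rules:
        acc += aff
        if x <= acc:
            return name
    return rules[-1][0]

# invented dimerization rules with invented affinities (not measured values)
rules = [("ErbB1:ErbB1", 1.0), ("ErbB1:ErbB2", 4.0), ("ErbB2:ErbB3", 5.0)]
rng = random.Random(1)
counts = {name: 0 for name, _ in rules}
for _ in range(10000):
    counts[weighted_choice(rules, rng)] += 1
# observed frequencies approximate the affinity ratios 1 : 4 : 5
```

Repeating such weighted runs many times yields the kind of semiquantitative ensemble behavior the experiments in the paper compare across strategies.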
Keywords: Pathway Logic, EgfR signaling, semiquantitative
Neuroscience is currently experiencing explosive growth in detailed high-quality experimental information on neural processes underlying learning, memory and behavior. Correspondingly, there has arisen a need for computational models that can manage this outpouring of information, help convert information to knowledge, and generate novel, testable hypotheses. In this paper we describe an application of Pathway Logic, using the rewriting-logic specification system Maude, to model a neural circuit involved in feeding in a marine mollusk. This approach is intended to augment existing modeling techniques in neuroscience and has potential advantages of scalability, robustness with regard to system parameters, and easy testing of knock-outs, ‘what-if’s and other in silico experimentation. Our technique yields expressive models capable of simulating known neural circuit behaviors.
Keywords: neural circuit, executable specification, Aplysia, Maude
A network of reactions is a commonly used paradigm for representing knowledge about a biological process. How does one understand such generic networks and answer queries using them? In this paper, we present a novel approach based on translation of generic reaction networks to Boolean weighted MaxSAT. The Boolean weighted MaxSAT instance is generated by encoding the equilibrium configurations of a reaction network as weighted Boolean clauses. The important feature of this translation is that it uses reactions, rather than species, as the Boolean variables. Existing weighted MaxSAT solvers are used to solve the generated instances and find equilibrium configurations. This method of analyzing reaction networks is generic, flexible and scales to large models of reaction networks. We present a few case studies to validate our claims.
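The translation can be illustrated in miniature. The toy below is not the paper's encoding: the variables stand for reactions, the weighted clauses are invented equilibrium conditions, and a brute-force solver stands in for a real weighted MaxSAT solver.

```python
from itertools import product

def maxsat(variables, clauses):
    """clauses: list of (weight, literals); literal ('r1', True) means r1 holds.

    Returns the assignment maximizing the total weight of satisfied clauses.
    """
    best, best_val = None, -1.0
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        val = sum(w for w, lits in clauses
                  if any(assign[v] == sign for v, sign in lits))
        if val > best_val:
            best, best_val = assign, val
    return best, best_val

# invented toy network: r1 and r2 compete for a shared species,
# and r3 consumes r1's product
variables = ["r1", "r2", "r3"]
clauses = [
    (5.0, [("r1", False), ("r2", False)]),  # not both competing reactions fire
    (3.0, [("r1", True)]),                  # flux through r1 is preferred
    (2.0, [("r1", False), ("r3", True)]),   # if r1 fires, r3 fires too
]
best, value = maxsat(variables, clauses)
# the best "equilibrium configuration" fires r1 and r3 but not r2
```

Using reactions rather than species as variables keeps the instance size proportional to the number of reactions, which is what lets the method scale to large networks.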
Keywords: weighted MaxSAT, reaction network, reaction constraints
We present the Cognitive Radio (Policy) Language (CoRaL), a new language for expressing policies that govern the behavior of cognitive radios that opportunistically share spectrum. A Policy Reasoner validates radio transmissions to ensure that they are compliant with the spectrum policies. The Policy Reasoner also discovers spectrum sharing opportunities by deriving what requirements must be fulfilled for transmissions to be valid, i.e., in compliance with policies. A novel mix of reasoning techniques is required to implement such a reasoner.
Keywords: spectrum sharing, policy language, policy reasoner
We present a new language for expressing policies that allow opportunistic spectrum access while not causing interference. CoRaL has expressive constructs for numerical constraints, supports efficient reasoning, and will be verifiable. The language is extensible so that unanticipated policy types can be encoded. We also describe a Policy Reasoner that reasons about CoRaL policies, and show how this reasoner can be used with various cognitive radios (in this case, an XG radio) to guarantee policy-specified behaviors while allowing spectrum sharing.
Keywords: cognitive radio, policy-based, reasoning, spectrum-sharing
This paper describes the application of the Real-Time Maude tool and the Maude formal methodology to the specification and analysis of the AER/NCA suite of active network multicast protocol components. Because of its time-sensitive and resource-sensitive behavior, the presence of probabilistic algorithms, and the composability of its components, AER/NCA poses challenging new problems for its formal specification and analysis. Real-Time Maude is a natural extension of the Maude rewriting logic language and tool for the specification and analysis of real-time object-based distributed systems. It supports a wide spectrum of formal methods, including: executable specification; symbolic simulation; breadth-first search for failures of safety properties in infinite-state systems; and linear temporal logic model checking of time-bounded temporal logic formulas. These methods complement those offered by network simulators on the one hand, and timed-automaton-based tools and general-purpose theorem provers on the other. Our experience shows that Real-Time Maude is well-suited to meet the AER/NCA modeling challenges, and that its methods have proved effective in uncovering subtle and important errors in the informal use case specification.
Keywords: Formal analysis, Real-time systems, Object-oriented specification, Rewriting logic, Active networks, Multicast protocols
Soft constraints extend classical constraints to deal with non-functional requirements, over-constrained problems and preferences. Bistarelli, Montanari and Rossi have developed a very elegant and abstract semiring based theory of soft constraints where many different kinds of soft constraints can be represented and combined in a uniform way over so-called constraint semirings. In this paper we present a framework for prototyping of soft constraints à la Bistarelli, Montanari and Rossi in Rewriting Logic. As a case study we present an application of soft constraints to the new area of software-defined radio networks. We model the problem of “optimal” parameter assignments for software-defined radios as a soft constraint solving problem, prove the correctness of the constraint solving algorithm, implement the solution in our prototypical Rewriting Logic framework for soft constraints, and embed our soft constraint solver in SRI’s Policy-Aware, Goal-Oriented Distributed Architecture (PAGODA) for modelling radio networks.
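The semiring-based formulation can be sketched concretely. The example below uses the fuzzy c-semiring ([0,1], max, min), one of the instances the theory covers: each soft constraint assigns a preference level, combination takes the minimum, and the best solution maximizes the combined level. The radio parameters and preference levels are invented for illustration.

```python
from itertools import product

def combine(levels):
    """Semiring 'times' of the fuzzy c-semiring ([0,1], max, min)."""
    return min(levels)

def best_assignment(domains, constraints):
    """Exhaustive search for the assignment with the highest combined level."""
    names = list(domains)
    best, best_level = None, -1.0
    for values in product(*(domains[n] for n in names)):
        assign = dict(zip(names, values))
        level = combine([c(assign) for c in constraints])
        if level > best_level:
            best, best_level = assign, level
    return best, best_level

# invented radio parameters and preference levels, purely for illustration
domains = {"power": [1, 5, 10], "freq": [2400, 5800]}
constraints = [
    lambda a: 1.0 if a["power"] <= 5 else 0.3,                     # prefer low power
    lambda a: 0.9 if a["freq"] == 5800 else 0.6,                   # prefer 5.8 GHz
    lambda a: 1.0 if a["power"] * 100 >= a["freq"] / 10 else 0.5,  # reach receiver
]
best, level = best_assignment(domains, constraints)
```

Because only `combine` and the comparison are semiring operations, swapping in another c-semiring (e.g. weighted costs with (+, min)) reuses the same search, which is the uniformity the theory provides.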
Keywords: Rewriting logic, soft constraint, software-defined radio
This paper describes representations of biological processes based on Rewriting Logic and Petri net formalisms and mappings between these representations used in the Pathway Logic Assistant. The mappings are shown to preserve properties of interest. In addition a relevant subnet transformation is defined, that specializes a Petri net model to a specific query to reduce the number of transitions that must be considered when answering the query. The transformation is shown to preserve the query in the sense that no answers are lost.
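The relevant subnet transformation can be sketched as a backward slice: starting from the places a query mentions, keep only transitions that can contribute to producing them. The net below is an invented three-transition fragment, not a curated model.

```python
def relevant_subnet(transitions, query_places):
    """transitions: {name: (pre_places, post_places)}; returns relevant names."""
    relevant, frontier = set(), set(query_places)
    changed = True
    while changed:
        changed = False
        for name, (pre, post) in transitions.items():
            if name not in relevant and frontier & set(post):
                relevant.add(name)
                frontier |= set(pre)   # the transition's inputs matter too
                changed = True
    return relevant

# invented three-transition fragment (names loosely inspired by Egf signaling)
net = {
    "activate_Raf": (["Ras_act"], ["Raf_act"]),
    "activate_Ras": (["Egf_bound"], ["Ras_act"]),
    "degrade_Egfr": (["Egf_bound"], ["Egfr_deg"]),
}
sub = relevant_subnet(net, ["Raf_act"])
# only the two transitions upstream of Raf activation survive the slice
```

Transitions outside the slice can never affect the query's answer, which is the sense in which the transformation preserves the query while shrinking the search space.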
Keywords: signal transduction, biological process, Pathway Logic, Rewriting Logic, Petri Net
PAGODA (Policy And GOal Based Distributed Autonomy) is a modular architecture for specifying and prototyping autonomous systems. A PAGODA node (agent) interacts with its environment by sensing and affecting, driven by goals to achieve and constrained by policies. A PAGODA system is a collection of PAGODA nodes cooperating to achieve some mutual goal. This paper describes a specification of PAGODA using the Russian Dolls model of policy-based coordination. In PAGODA there are two forms of coordination: local and global. Local coordination is used to compose the components of a PAGODA node. The local coordinator is concerned with ensuring component level synchronization constraints, cross component message ordering constraints, routing of notifications, and interaction with the external world. The global coordinator is concerned with dissemination of information, negotiation of responsibilities, and synchronization of activities. Requirements for a PAGODA node coordinator are given and an example set of policies is specified. Principles for showing that the policies satisfy the requirements are discussed as a first step toward a logic of policy-based coordination. Development of a distributed coordinator is the subject of ongoing work. Some challenges and possible solutions are discussed.
Keywords: coordination, composition, distributed object reflection, goal, autonomy, policy
Keywords: cognitive networking, policy-based, reasoning
Pathway Logic is a step towards a vision of symbolic systems biology. It is an approach to modeling cellular processes based on formal methods. In particular, formal executable models of processes such as signal transduction, metabolic pathways, and immune system cell-cell signaling are developed using the rewriting logic language Maude, and a variety of formal tools are used to query these models. An important objective of Pathway Logic is to reflect the ways that biologists think about problems using informal models, and to provide bench biologists with natural tools for computing with and analyzing these models. In this paper we describe the Pathway Logic approach to the modeling and analysis of signal transduction, and the use of the Pathway Logic Assistant tool to browse and query these models. The Rac1 signaling pathway is used to illustrate the concepts.
Keywords: Pathway Logic, signal transduction, symbolic, formal executable model
The Pathalyzer is a program for visualizing and analyzing large-scale signal transduction networks. Reactions and their substrates and products are represented as transitions and places in a safe Petri net. The user can interactively specify goal states, such as activation of a particular protein in a particular cell site, and the system will automatically find and display a pathway that results in the goal state – if possible. The user can also require that the pathway be generated without using certain proteins. The system can also find all individual places and all pairs of places which, if knocked out, would prevent the goals from being achieved. The tool is intended to be used by biologists with no significant understanding of Petri nets or any of the other concepts used in the implementation.
Keywords: signal transduction, Petri net, visualization
Global computing involves the interplay of a vast variety of languages, but practically useful foundations for language specification and prototyping at the semantic level are lacking. In this paper we present a systematic approach consisting of three techniques: 1. A generic calculus of explicit substitutions with names (called CINNI) that allows us to give a first-order representation of syntax to uniformly deal with all binding aspects. 2. An executable representation of Felleisen-style operational semantics in terms of first-order rewrite rules. 3. A logical framework, namely rewriting logic, that allows us to express (1) and (2) and, in addition, language aspects such as concurrency and non-determinism. We illustrate the use of these techniques in two applications: 1. A formal specification and analysis of PLAN, a Packet Language for Active Networks, that has been developed in the Switchware project at UPenn. This work was conducted in the scope of the DARPA Active Network Program. 2. The development of CIAO, a Calculus of Imperative Active Objects, a core language for concurrent object-oriented programming. It is especially designed to allow the representation of practically relevant sublanguages of common object-oriented languages such as Java, C#, and C++. This second application is the subject of ongoing work.
Keywords: CINNI, calculus of substitutions, programming language, formal executable semantics, rewriting logic
Protein functional domains (PFDs) are consensus sequences within signaling molecules that recognize and assemble other signaling components into complexes. Here we describe the application of an approach called Pathway Logic to the symbolic modeling of signal transduction networks at the level of PFDs. These models are developed using Maude, a symbolic language founded on rewriting logic. Models can be queried (analyzed) using the execution, search and model-checking tools of Maude. We show how signal transduction processes can be modeled using Maude at very different levels of abstraction involving either an overall state of a protein or its PFDs and their interactions. The key insight for the latter is our algebraic representation of binding interactions as a graph.
Remote agents used in Deep Space Missions such as rovers or solar airplanes must function autonomously over a prolonged time during planetary exploration. The Mission Data System (MDS) framework has been developed to address design and deployment of these complex systems. We are using the Maude environment to develop a formal framework with methods and supporting tools for increasing the dependability of MDS space systems. This is done by developing formal executable specifications of the MDS framework and its mission-specific adaptations and providing a set of formal checklists (formal analysis suites) that can be used to achieve better predictability and dependability. In this paper we present our formal model of the MDS framework, an adaptation for a remote rover and preliminary checklists for remote agents.
Keywords: Rewriting logic, goal-oriented, model-based, formal checklist
We describe a platform, IOP, for the interoperation of formal reasoning tools, and an adaptation of Maude, IMaude, that utilizes this platform. Three applications of IMaude and IOP to real world problem domains are described.
Keywords: Rewriting logic, interoperability, visualization.
Specification Diagrams (SD) are a graphical notation for specifying the message passing behavior of open distributed object systems. SDs facilitate specification of system behaviors at various levels of abstraction, ranging from high-level specifications to concrete diagrams with low-level implementation details. We investigate the theory of may testing equivalence on SDs, which is a notion of process equivalence that is useful for relating diagrams at different levels of abstraction. We present a semantic characterization of the may equivalence on SDs which provides a powerful technique to relate abstract specifications and refined implementations. We also describe our prototypical implementation of SDs and of a procedure that exploits the characterization of may testing to establish equivalences between finitary diagrams (without recursion).
Keywords: Graphical specification languages, may testing, trace equivalence, rewriting logic
In this paper we present a meta-architectural framework for customizable QoS-based middleware based on the actor model of concurrent active objects. Using TLAM, a semantic model for specifying and reasoning about components of open distributed systems, we show how a QoS brokerage service can be used to coordinate multimedia resource management services in a safe, flexible and efficient manner. In particular, we show that a system in which the multimedia actor behaviors satisfy the specified requirements provides the required multimedia service. The behavior specification leaves open the possibility of a variety of algorithms for resource management as well as adding additional resource management activities by providing constraints to ensure their non-interference.
Keywords: meta-object models, distributed systems, theoretical foundations, object-oriented applications, multimedia
Traditionally, adaptability in communication frameworks has been restricted to predefined choices without taking into consideration tradeoffs between them and the application requirements. Furthermore, different applications with an entire spectrum of requirements will have to adapt to these predefined choices instead of tailoring the communication framework to fit their needs. In this paper we extend an executable specification of a state-of-the-art secure group communication subsystem to explore two dimensions of adaptability, namely security and synchrony. In particular, we relax the traditional requirement of virtual synchrony (a well-known bottleneck) and propose various generic optimizations, while preserving essential security guarantees.
Systems are becoming exceedingly complex to manage. As such, there is an increasing trend towards developing systems that are self-managing. Policy-based infrastructures have been used to provide a limited degree of automation, by associating actions with system events. In the context of self-managing systems, the existing policy-specification model fails to capture the following: a) The impact of a rule on system behavior (behavior implication). This is required for automated decision-making. b) Learning mechanisms for refining the invocation heuristics by monitoring the impact of rules. This paper proposes ‘Eos’, an approach to enhance the existing policy-based model with behavior implications. The paper gives details of the following aspects: 1) expressing behavior implications; 2) using behavior implications of a rule for learning and automated decision-making; 3) enhancing existing policy-based infrastructures to support self-management using Eos. The paper also describes an example of using Eos for self-management within a distributed file-system.
We discuss motivation and design ideas for a formal semantic framework with tool support for specifying, analysing and reasoning about secure agents and secure agent architectures. Within this framework we will be able to define executable models of agent systems, hostile environments, and mechanisms for control, detection, and protection. These models will serve as a basis for search and model-checking analyses, definition of abstraction mappings, and rigorous proofs of general properties. We have in mind not only the standard security concerns of authentication, authorization, integrity, and confidentiality considered in this more complex setting, but also concerns such as privacy, stealth, and information flow and aggregation. The secure agent framework should also facilitate identification of vulnerabilities and assumptions at different levels of abstraction in system design.
We propose a formal model for reputation-based trust management. In contrast to conventional, credential-based trust management, in reputation-based trust management an agent’s past behavior is used to decide whether to grant it access to a protected resource. We use an event semantics inspired by the actor model, and assume that each principal has only partial knowledge of the events that have occurred. Licenses are used to formalize restrictions on agents’ behavior. If an agent fulfills the obligations imposed by a license and does not misuse the license by performing a forbidden action, then its reputation improves. This approach enables precise formal modeling of scenarios involving reputations, such as financial transactions based on credit histories and information sharing between untrusted agents.
This paper gives an overview of the Maude 2.0 system. We emphasize the full generality with which rewriting logic and membership equational logic are supported, operational semantics issues, the new built-in modules, the more general Full Maude module algebra, the new META-LEVEL module, the LTL model checker, and new implementation techniques yielding substantial performance improvements in rewriting modulo. We also comment on Maude’s formal tool environment and on applications.
This paper presents a logic based on Feferman’s Variable Type Theories for reasoning about programs written in a class of imperative functional languages called Landinesque languages.
A generic formal model of distributed object reflection is proposed, that combines logical reflection with a structuring of distributed objects as nested configurations of metaobjects that can control subobjects under them. The model provides mathematical models for a good number of existing models of distributed reflection and of reflective middleware. To illustrate the ideas, we show in some detail how two important models of distributed actor reflection can be naturally obtained as special cases of our generic model, and discuss how several recent models of reflective middleware can be likewise formalized as instances of our model.
PLAN is a language designed for programming active networks, and can more generally be regarded as a model of mobile computation. PLAN generalizes the paradigm of imperative functional programming in an elegant way that allows for potentially recursive, remote function calls, and it provides a clear mechanism for the interaction between host and mobile code. Techniques for specifying and reasoning about such languages are of growing importance. In this paper we describe our specification of PLAN in Maude. We show how techniques for developing mathematical semantics of imperative functional programs (syntax-based semantics) and for formalizing variable binding constructs and mobile environments (CINNI calculus) are used in combination with the natural representation of concurrency and distribution provided by rewriting logic to develop a faithful model of the informal PLAN semantics. We also illustrate the wide-spectrum approach to formal modeling supported by Maude: executing PLAN programs; analyzing PLAN programs using search and model-checking; proving properties of particular PLAN programs; and proving general properties of the PLAN language. The ability to execute and analyze particular PLAN programs is useful for PLAN programmers, while analysis of the specification itself is important as a means of validating both the language design and the formal specification.
The actor theory framework is a general semantic framework, based on the actor computation model, for specifying and reasoning about components of open distributed systems. It can be used to define both operational and trace-like interaction semantics for actor programming languages and specification notations. It can also be used to directly specify actor system components. The framework allows descriptions of system components written using different notations or at different levels of abstraction to be related: translations between actor programming languages can be shown to preserve interaction semantics, notions of satisfaction and refinement for specification notations can be defined and, using actor theory transformations, different descriptions of a component can be shown to be equivalent. In this paper we define the notion of an equationally presented actor theory and a mapping of equational presentations to theories in rewriting logic. We show that this mapping gives a correct representation of actor theory semantics by defining a correspondence between finite actor theory computations and rewrite theory proofs. To treat infinite computations and admissibility we extend the rewriting logic initial model construction to include infinite proofs, extend the mapping to infinite computations, and show that the correspondence preserves interaction semantics.
Keywords: actor theory, interaction semantics, rewriting logic, composability
Modeling and formally analyzing active network systems and protocols is quite challenging, due to their highly dynamic nature and the need for new network models. We propose a wide-spectrum methodology using executable rewriting logic specifications to address this challenge. We also show how, using the Maude rewriting logic language and tools, active network systems, languages, and protocols can be formally specified and analyzed using a wide range of formal methods. Benefits include precise documentation of designs, early discovery of many bugs and omissions, and higher assurance of correct behavior. In this paper we illustrate these methods and their practical usefulness through two case studies: the AER/NCA protocol suite and the PLAN active networks language.
Specification diagrams (SDs) are a novel form of graphical notation for specifying open distributed object systems. The design goal is to define notation for specifying message-passing behavior that is expressive, intuitively understandable, and that has formal semantic underpinnings. The notation generalizes informal notations such as UML’s Sequence Diagrams and broadens their applicability to later in the design cycle. Specification diagrams differ from existing actor and process algebra presentations in that they are not executable per se; instead, like logics, they are inherently more biased toward specification. In this paper we rigorously define the language syntax and semantics and give examples that show the expressiveness of the language, how properties of specifications may be asserted diagrammatically, and how it is possible to reason rigorously and modularly about specification diagrams.
In this paper we describe the use of the rewriting logic based Maude tool to model and analyze mammalian signaling pathways. We discuss the representation of the underlying biological concepts and events and describe the use of the new search and model checking capabilities of Maude 2.0 to analyze the modeled network. We also discuss the use of Maude’s reflective capability for metamodeling and analyzing the models themselves. The idea of symbolic biological experiments opens up an exciting new world of challenging applications for formal methods in general and for rewriting logic based formalisms in particular.
In a distributed reflective framework, issues of correctness and composition can be quite subtle and complex. Interactions within and across reflective levels must be considered, the semantics of shared, distributed resources must be clearly spelled out, and new notions of correctness of the overall system need to be developed that account for the dynamic, distributed, and reflective setting. TLAM is a two-level model of distributed computation based on the actor model of object-based distributed computation that supports dynamic customizability and separation of concerns in designing and reasoning about components of Open Distributed Systems. The TLAM uses reification (base object state as data at the meta object level) and reflection (modification of base object state by meta objects) with support for implicit invocation of meta objects in response to changes of base level state. This provides for debugging, monitoring, and other hooks. In this paper we briefly review the TLAM concepts and summarize the application of the TLAM framework to fairly elaborate middleware services. The main contribution of this paper is a simple example worked out in some detail to illustrate techniques for using the TLAM to model middleware services as reflective / meta-level services and to illustrate our multiple viewpoint methodology for specifying and reasoning about such services.
Keywords: open distributed system, reflection, middleware, meta-architecture, specification, multiple viewpoints, verification
Systems that provide QoS-enabled services such as multimedia are subject to constant evolution; customizable middleware is required to manage this change effectively. Middleware services for resource management such as scheduling, protocols providing security and reliability, load balancing and stream synchronization, execute concurrently with each other and with application activities and can therefore potentially interfere with each other. To ensure cost-effective QoS in distributed systems, safe composability of resource management services is essential. In this paper we present a meta-architectural framework for customizable QoS-based middleware based on the actor model of concurrent active objects. Using TLAM, a semantic model for specifying and reasoning about components of open distributed systems, we show how a QoS brokerage service can be used to coordinate multimedia resource management services in a safe, flexible and efficient manner. In particular, we show that a system in which the multimedia actor behaviors satisfy the specified requirements provides the required multimedia service. The behavior specification leaves open the possibility of a variety of algorithms for resource management as well as adding additional resource management activities by providing constraints to ensure their non-interference.
Keywords: meta-object models, distributed systems, theoretical foundations, object-oriented applications, multimedia
This paper describes the application of the Real-Time Maude tool and the Maude formal methodology to the specification and analysis of the AER/NCA suite of active network multicast protocol components. Because of the time-sensitive and resource-sensitive behavior and the composability of its components, AER/NCA poses challenging new problems for its formal specification and analysis. Real-Time Maude is a natural extension of the Maude rewriting logic language and tool for the specification and analysis of real-time object-based distributed systems. It supports a wide spectrum of formal methods, including: executable specification; symbolic simulation; and infinite-state model checking of temporal logic formulas. These methods complement those offered by finite-state model checkers and general-purpose theorem provers. Real-Time Maude has proved to be well-suited to meet the AER/NCA modeling challenges, and its methods have been effective in uncovering subtle and important errors in the informal use case specification.
Open distributed systems (ODS) exhibit a high degree of dynamicity and autonomy. ODS evolve dynamically and components of ODS interact with an environment that is not under their control. A wide range of protocols and activities must be composed to implement end-to-end distributed application management. These protocols and activities must execute concurrently, non-disruptively and share the same resources. In order to avoid resource conflicts, deadlocks, inconsistencies and incorrect execution semantics, the underlying resource management system must ensure that the simultaneous system activities compose in a correct manner. The difficulty in reasoning about system level interactions is due to the complexity of characterizing the semantics of shared resources and specifying what correctness of the overall system means. In this paper, we develop a mathematical framework and formal mechanisms for reasoning about the interaction and composition of resource management activities in open distributed systems, as well as their dynamic installation and modification. In particular, we develop a two-level meta-architectural model of distributed computation based on Actors, a model of concurrent objects. The utility of this model is illustrated by developing specifications of resource management services (remote creation, migration, and reachability snapshot) and reasoning about their composability.
Keywords: distributed computation, resource management, meta-architecture, composition, specification, verification
We present a set-based control flow analysis for an imperative, concurrent object calculus extending the Fisher-Honsell-Mitchell functional object-oriented calculus. The analysis is shown to be sound with respect to a transition system semantics.
Keywords: concurrent object-oriented, control-flow analysis, soundness, prototype-based
Rewriting logic and the Maude language make possible a new methodology in which formal modeling and analysis can be used from the earliest phases of system design to uncover many errors and inconsistencies, and to reach high assurance for critical components. Our methodology is arranged as a sequence of increasingly stronger methods, including formal modeling, executable specification, model-checking analysis, narrowing, and formal proof, some of which can be used selectively according to the specific needs of each application. The paper reports on a number of experiments and case studies applying this formal methodology to active networks, communication protocols, and security protocols.
In this paper we present Clara, a new programming language for high performance distributed computing. Clara has been developed to embody in an efficient distributed computing environment the conceptual clarity of the actor model, an object-based framework for the design and implementation of open distributed systems. We describe our Clara compiler, paying special attention not only to the adopted two stage translation process, but also to the runtime environment, which is based on the message passing interface standard.
In this paper we describe a set of relations and transformations on actor theories that form the beginnings of a toolkit for establishing relationships between different descriptions of actor system components.
Open Distributed Systems (ODS) evolve dynamically and components of ODS interact with an environment that is not under their control. Managing change efficiently is critical to the effective deployment of applications executing in such environments; reflection helps deal with the need for flexibility. A wide range of protocols and activities must execute concurrently and non-disruptively, and must share resources. In order to avoid resource conflicts, deadlocks, inconsistencies and incorrect execution semantics, the underlying resource management system must ensure that the concurrent system activities compose in a correct manner. In this position paper, we discuss our experiences in developing system level behaviors in the context of ComposeQ, a QoS-enabled composable middleware framework and outline key challenges in the development of formally verifiable middleware for next generation highly distributed and customizable applications.
Reasoning theories can be used to specify heterogeneous reasoning systems. In this paper we present an equational version of reasoning theories, and we study their structuring and composition, and the use of annotated assertions for the control of search, as mappings between reasoning theories. We define composability and composition using the notion of faithful inclusion mapping, we define annotated reasoning theories using the notion of erasing mapping, and we lift composability and composition to consider also annotations. As an example, we give a modular specification of the top-level control (known as waterfall) of NQTHM, the Boyer-Moore theorem prover.
Communication between distributed objects may have to be protected against random failures and malicious attacks; also, communication timeliness may be essential or highly desired. Therefore, a distributed application often has to be extended with communication services providing some kind of fault-tolerance, secrecy, or quality-of-service guarantees. Ideally, such services should be defined in a highly modular and dynamically composable way, so that the combined assurance of several services can be achieved by composition in certain cases, and so that services can be added or removed from applications at runtime in response to changes in the environment. To reason about the formal properties of such composable communication services one first needs to give them a precise semantics. This paper proposes a rewriting logic semantics for the so-called “onion skin” model of distributed object reflection, in which different meta-objects, providing different communication services, can be stacked on top of a basic application object. Since the correct behavior of a service depends on the type of hostile environment against which the service must protect the application, rewriting logic should also be used to specify such hostile environments. The service guarantees are then guarantees about the behavior specified by the union of the rewrite theories specifying the basic application, the services, and the hostile environment.
On the one hand, network and communication protocols are complex and difficult to design; on the other hand, it is important that network systems be robust and reliable. Thus it is desirable to have formal models augmented with tools that support simulation and testing that can be used by designers of new protocols, both in the early stages of design, as well as in later stages, where more rigorous formal analysis is important. In this paper we present the specification of a network model in the rewriting logic language Maude along with some primitives for defining simulation strategies. The use of the model is illustrated with a simple HELLO sub-protocol taken from the IETF PIM-DM (Protocol Independent Multicast - Dense Mode) specification. The network model we present reflects the key aspects of the infrastructure on which typical communication protocols run. The model is designed so that we may execute isolated protocols as well as develop techniques for composing sub-protocols, to model the more complex protocols used in practice. The long term goal is to support simulation and formal analysis at many levels of detail.
In this paper we present two actor languages and a semantics preserving translation between them. The source of the translation is a high-level language that provides object-based programming abstractions. The target is a simple functional language extended with basic primitives for actor computation. The semantics preserved is the interaction semantics of actor systems—sets of possible interactions of a system with its environment. The proof itself is of interest since it demonstrates a methodology based on the actor theory framework for reasoning about correctness of transformations and translations of actor programs and languages and more generally of concurrent object languages.
Keywords: actor language, interaction semantics, abstract actor structure, translation, correctness
The increasing importance, criticality, and complexity of communications software makes very desirable the application of formal methods to gain high assurance about its correctness. These needs are even greater in the context of active networks, because the difficulties involved in ensuring critical properties such as security and safety for dynamically adaptive software are substantially higher than for more static software approaches. There are many obstacles to the insertion of formal methods in this area, and yet there is a real need to find adequate ways to increase the quality and reliability of critical communication systems. The present work reports on an ongoing case study in which a new reliable broadcasting protocol (RBP) currently under development at the University of California at Santa Cruz (UCSC) has been formally specified and analyzed, leading to many corrections and improvements to the original design. Reliable broadcasting is not trivial when the topology of the network can change due to failure and mobility. The aim is to ensure that all nodes that satisfy certain connectedness criteria receive the information within finite time, and that the source is notified about it. The protocol should furthermore incur as low latency and as few messages as possible. Indeed, the process of formally specifying the protocol in Maude, and of symbolically executing and formally analyzing the resulting specification, has revealed many bugs and inconsistencies very early in the design process, before much effort was put into the implementation of the protocol.
Specification diagrams are a novel form of graphical notation for specifying open distributed object systems. The design goal is to define notation for specifying message-passing behavior that is expressive, intuitively understandable, and that has formal semantic underpinnings. The notation generalizes informal notations such as UML’s Sequence Diagrams and broadens their applicability to later in the design cycle. Specification diagrams differ from existing actor and process algebra presentations in that they are not a programming language per se; instead, like logics, they are inherently more biased toward specification. In this paper we show how it is possible to reason rigorously and modularly about specification diagrams. An Actor Theory Toolkit is used to great advantage for this purpose.
This paper presents a formal and executable specification of a new protocol for reliable broadcasting of information in networks with dynamic topology. Reliable broadcasting is not trivial when the topology of the network can change due to failure and mobility. The aim is to ensure that all nodes that satisfy certain connectedness criteria receive the information within finite time, and that the source is notified about it. The protocol should furthermore incur as low latency and as few messages as possible. The protocol is specified in rewriting logic. Rewriting logic is a logic which extends algebraic specification techniques to concurrent and reactive systems. Among its possible advantages over other executable specification formalisms are its being based upon a natural and well-known formalism, its natural integration of static and dynamic system aspects, the abstract modeling of communication, and its possibility to define execution strategies in the logic itself. Rewriting logic seems particularly suitable for specifying security and communication protocols. Such protocols are complex enough to warrant prototyping and their operational nature fits very well with rewriting logic. Apart from presenting the final network protocol, this report also aims at giving some snapshots of the specification effort of the group, illustrating how the rewriting logic language Maude can be used in the different stages of the specification and validation process. Due to the complexity of the protocol, we first focused on specifying the protocol for the subcase of static networks, i.e., networks where no channels or nodes are lost or created. The protocol for the dynamic network setting was then developed, based on the final specification for the static setting.
The increasing importance and ubiquity of distributed and mobile object systems makes it very desirable to develop rigorous semantic models and formal reasoning techniques to ensure their correctness. The concurrency model of rewriting logic has been extensively used by a number of authors to specify, execute, and validate concurrent object systems. This model is a true concurrency model, associating an algebra of proof terms P(R) to the rewrite theory R specifying the desired system. The elements of P(R) are concurrent computations described as proofs modulo an equational theory of proof/computation equivalence. This paper builds a very intuitive alternate model E(R), also of a true concurrency nature, but based instead on the notion of concurrent events and a causality partial order between such events. The main result of the paper is the equivalence of these two models expressed as an isomorphism. Both models have straightforward extensions to similar models of infinite computations, in which infinite behavior properties such as fairness can be naturally expressed. The models are very general and can express both synchronous and asynchronous object computations. In the asynchronous case the Baker-Hewitt event model for actors appears as a special case of our model.
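As a hypothetical illustration (the event names, representation, and code below are mine, not the paper's), a causality partial order between events can be computed as the reflexive-transitive closure of direct dependency edges, leaving genuinely concurrent events unordered:

```python
from itertools import product

def causal_order(events, direct_deps):
    """Reflexive-transitive closure of direct dependency edges:
    (a, b) in the result means event a causally precedes (or is) b."""
    before = {(e, e) for e in events} | set(direct_deps)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(before), list(before)):
            if b == c and (a, d) not in before:
                before.add((a, d))
                changed = True
    return before

# Two asynchronous sends s1, s2 with corresponding receives r1, r2;
# only send-before-matching-receive is enforced.
events = ["s1", "r1", "s2", "r2"]
deps = [("s1", "r1"), ("s2", "r2")]
order = causal_order(events, deps)
assert ("s1", "r1") in order        # causally related
assert ("s1", "s2") not in order    # concurrent: unordered
assert ("s2", "s1") not in order
```

In such a model, two events are concurrent exactly when neither precedes the other in the partial order, which is what distinguishes it from an interleaving (total order) semantics.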
In distributed computing and in communications software there is a great interest in modular and composable approaches. In particular, services such as security or fault-tolerance and services intended for boosting performance that may in practice be “hard-wired” across different parts of the code of a distributed system should be treated in a much more modular way, so that they can be dynamically added to a system, and so that several such services can be composed to obtain their combined benefits. Besides the good software engineering reasons of thus making the systems much more easily reusable and adaptable, modularity and composability are also crucial at runtime, in the sense that some of these services, that may have a cost in performance, should not be used always and everywhere. Instead, they should be installed dynamically and selectively at runtime, in those areas or domains of the distributed system where they are needed in response to some security threat, failure, need for boosting performance, and so on. This paper proposes a semantic approach to make precise the reflective concept of composable service in a distributed system, and to reason about the properties of service compositions. Our approach is based on the executable formal semantics for distributed object-oriented systems provided by rewriting logic and explicitly addresses the reflective properties that are essential for having a truly modular notion of service. The formal model that we present seems promising not only for formal analysis and symbolic simulation, but also for the application of theorem proving methods to formally verify important properties of services and their compositions. Our model builds on the onion skin model of actor reflection developed by Agha and his collaborators. We illustrate the basic ideas with a case study treating communication services (fault-tolerance, encryption, and authentication services) and their composition. Formal specification of service guarantees and combination of guarantees for composed services are also discussed.
This paper proposes rewriting logic as an executable specification formalism for security protocols that offers some novel advantages. A message-passing object-oriented approach seems particularly natural for communication protocols and can be naturally formalized in rewriting logic. This is illustrated by using the Needham-Schroeder Public-Key protocol as a running example. The rewriting logic-based Maude interpreter [ClaEkeLin96] also offers some useful advantages. Efficient executability allows prototyping and debugging of protocol specifications. But since a concurrent system can have many different behaviors, to properly analyze the system it becomes important to explore not just the single execution provided by some default strategy, but many other executions. Maude supports user-defined execution strategies, including strategies such as breadth-first search that can exhaustively explore all the executions of a system. This is very helpful in uncovering security flaws under unforeseen attack scenarios such as those found for NSPK. We also discuss future developments along the lines of this work, including (1) narrowing using symbolic execution techniques, (2) modularity and compositionality issues in formal reasoning, and (3) combination of rewriting logic and temporal logic.
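The exhaustive-exploration idea can be sketched abstractly. The following minimal breadth-first search over a toy transition system is my own illustration, not Maude's machinery (Maude's search works over rewrite theories, and the state, successor, and flaw predicates here are made up):

```python
from collections import deque

def bfs_search(initial, successors, is_flaw):
    """Breadth-first exploration of all reachable states of a
    nondeterministic system; returns the first state exhibiting
    a flaw, or None if the finite state space is exhausted."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_flaw(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy system: states are integers mod 16, two nondeterministic moves.
found = bfs_search(0,
                   lambda s: [(s + 3) % 16, (s + 5) % 16],
                   lambda s: s == 7)
assert found == 7
```

Unlike a single default execution, this visits every reachable state, which is why such a strategy can expose an attack trace that an arbitrary interleaving would miss.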
Many formalisms for reasoning about knowing commit an agent to be logically omniscient. Logical omniscience is an unrealistic principle for us to use to build a real-world agent, since it commits the agent to knowing infinitely many things. A number of formalizations of knowledge have been developed that don’t ascribe logical omniscience to agents. With few exceptions, these approaches are modifications of the “possible-worlds” semantics. In this paper we use a combination of several general techniques for building non-omniscient reasoners. First we provide, by using reification, for the explicit representation of notions such as problems, solutions, and problem solving activities, notions which are usually left implicit in the discussions of autonomous agents. A second technique is to take explicitly into account the notion of resource when we formalize reasoning principles. We use the notion of resource to describe interesting principles of reasoning that are used for ascribing knowledge to agents. For us, resources are abstract objects. We make extensive use of ordering and inaccessibility relations on resources, but we do not find it necessary to define a metric on resources. Using principles about resources without assigning a metric is one of the strengths of our approach. We describe the architecture of a reasoner, built from a finite number of components, that solves a puzzle involving reasoning about knowing by explicitly using the notion of resource. Our approach allows the use of axioms ordinarily used in problem solving (such as axiom K of modal logic about belief) without being forced to attribute logical omniscience to any agent. In particular we address the issue of how we can attribute knowledge to others using resource-unbounded (e.g., logically omniscient) reasoning without introducing contradictions. We do this by showing how such ideas can be introduced as a conservative extension over resource-bounded reasoning.
Keywords: FOL context, resource limited reasoning, omniscience, attributing knowledge
We are interested in developing a semantic foundation that supports specifying, composing, and reasoning about components of open distributed systems. The actor model provides the basic elements for open distributed computation: encapsulation of state; independent concurrent units of computation; interaction; and dynamic creation and interconnection. In order to provide for composability, and for reasoning about properties at many different levels of abstraction we introduce the notions of actor component, actor component algebra, and actor component algebra morphism. Morphisms from syntactic to semantic algebras give composable semantics. We first illustrate features of the actor model and a variety of component algebras and morphisms through a series of simple examples. We then define three semantic models for actor computation starting with a generalization to open systems of Clinger’s event diagram model, and forming two abstractions: interaction diagrams and interaction paths. An algebra is defined on each semantic domain with operations for parallel composition, hiding of internal actors, and renaming, and we show that component algebra laws hold. We use these models to provide semantics for descriptions of actor components based on actor theories and show that the semantics is a component algebra homomorphism.
Keywords: actors, composable semantics, event diagrams, interaction diagrams, interaction semantics, component algebra
We define three semantic models for actor computation starting with a generalization to open systems of Clinger’s event diagram model, and forming two abstractions: interaction diagrams and interaction paths. An algebra is defined on each semantic domain with operations for parallel composition, hiding of internal actors, and renaming. We use these models to provide semantics for descriptions of actor components based on actor theories and show that the semantics is a component algebra homomorphism.
Keywords: actors, composable semantics, event diagrams, interaction diagrams, interaction semantics, component algebra
We present an actor language which is an extension of a simple functional language, and provide an operational semantics for this extension. Actor configurations represent open distributed systems, by which we mean that the specification of an actor system explicitly takes into account the interface with external components. We study the composability of such systems. We define and study various notions of testing equivalence on actor expressions and configurations. The model we develop provides fairness. An important result is that the three forms of equivalence, namely, convex, must, and may equivalences, collapse to two in the presence of fairness. We further develop methods for proving laws of equivalence and provide example proofs to illustrate our methodology.
Our ultimate goal is to provide a framework and a methodology which will allow users to construct and extend complex reasoning systems by composing existing modules in a “plug and play” manner. To this end, in previous work we have defined a general architecture for a class of reasoning modules and systems called Open Mechanized Reasoning Systems (OMRSs). An OMRS has three components: a logic component which consists of the assertions manipulated by the OMRS and the elementary deductions upon them; a control component which consists of the inference strategies of the OMRS; an interaction component which provides the OMRS with the capability of interacting with other systems, including OMRSs and human users. We have already developed a theoretical framework underlying the logic component of this architecture. In this paper we show how this formalism can be used in practice to specify the logic component of the simplification process of NQTHM, the Boyer-Moore prover.
Keywords: reasoning system, OMRS, composing, NQTHM
In this paper we present two actor languages: a high level User Language and a low level Kernel Language. The user language is object-oriented. Actors qua objects have methods and synchronization constraints which allow an object to control when a method can be invoked. The kernel language is an extension of a simple functional language with primitives for actor computation. Each of the languages is given a simple operational semantics via a labelled transition system, and an abstract interaction semantics derived from the computations of the transition system. The main result presented here is a translation from the user language to the kernel language and a proof that this mapping preserves the interaction semantics. This demonstrates that the basic actor primitives of the kernel are as expressive as those of the user language, although perhaps less convenient from a programmer's point of view. The proof itself is of interest since it demonstrates a methodology based on abstract actor structures for proving correctness of transformations and translations of actor languages and more generally of concurrent object languages.
Keywords: actor language, interaction semantics, abstract actor structure, translation, correctness
Our ultimate goal is to provide a framework and a methodology which will allow users, and not only system developers, to construct complex reasoning systems by composing existing modules, or to add new modules to existing systems, in a “plug and play” manner. These modules and systems might be based on different logics; have different domain models; use different vocabularies and data structures; use different reasoning strategies; and have different interaction capabilities. This paper makes two main contributions towards our goal. First, it proposes a general architecture for a class of reasoning systems called Open Mechanized Reasoning Systems (OMRSs). An OMRS has three components: a reasoning theory component which is the counterpart of the logical notion of formal system, a control component which consists of a set of inference strategies, and an interaction component which provides an OMRS with the capability of interacting with other systems, including OMRSs and human users. Second, it develops the theory underlying the reasoning theory component. This development is motivated by an analysis of state of the art systems. The resulting theory is validated by using it to describe the integration of the linear arithmetic module into the simplification process of the Boyer-Moore system, NQTHM.
Keywords: mechanized reasoning system, logical service, integration, reasoning theory, reasoning structure, bridge rule
This paper presents a unified framework for reasoning about program equivalence in imperative functional languages such as Scheme, Lisp, and ML. A notion of uniform semantics for lambda-languages is defined, and general principles are developed for reasoning about program equivalence in any lambda-language with uniform semantics. For such languages we have the combined benefits of reduction calculi (modular axiomatization) and operational equivalence (more equations). A lambda-language is a programming language obtained by augmenting the lambda calculus with primitive operations that include not only basic constants, branching, and algebraic operations such as arithmetic and pairing, but also operations that manipulate the computation state (store, continuation) and the environment (sending messages, creating processes). The operational semantics of a lambda-language is given by providing a reduction rule for each primitive operation. The semantics is said to be uniform if it satisfies two uniformity conditions spelled out in the paper. The key requirement is uniform computation, which allows computation steps to be carried out on states with missing information. It has the property that computing commutes with filling in of missing information. A key result for lambda-languages with uniform semantics is the CIU theorem (first presented by Mason and Talcott at ICALP89), which is a form of context lemma. Using CIU and other consequences of uniform semantics, two general principles for proving program equivalence are established that capture computational intuitions: programs with equivalent reducts are equivalent; expressions placed in equivalent contexts yield equivalent programs. Simulation principles are also established for proving equivalence of imperative functions (aka lambdas) and objects (which have private store). The semantics of a Scheme-like language with algebraic, control, and memory primitives is developed as an illustration of the ideas.
Our ultimate goal is to provide a framework and a methodology which will allow users, and not only system developers, to construct complex systems by composing existing modules, or to add new modules to existing systems, in a “plug and play” manner. These modules and systems might be based on different logics; have different domain models; use different vocabularies and data structures; use different reasoning strategies; and have different interaction capabilities. This paper, which is a first small step towards our goal, makes two main contributions. First, it proposes a general architecture for a class of reasoning modules and systems called Open Mechanized Reasoning Systems (OMRSs). An OMRS has three components: a reasoning theory component which is the counterpart of the logical notion of formal system, a control component which consists of a set of inference strategies, and an interaction component which provides an OMRS with the capability of interacting with other systems, including OMRSs and human users. Second, it develops the theory underlying the reasoning theory component. This development is illustrated by an analysis of the Boyer-Moore system, NQTHM.
We present a semantic framework for actor systems based on rewriting logic. This framework accounts for fairness and provides a variety of semantics for actor system components that have good composability properties.
Keywords: actors, components, semantics, composition, rewriting logic
The actor model is a natural starting point for a semantic theory that treats both heterogeneity and modularity (encapsulation and composability) in distributed systems. This paper begins with some simple examples of actor systems that illustrate the essential features of actor based computation, and the interactions and combinations of components of these systems. We then present a semantic theory that formalizes these notions. A notion of abstract actor structure (AAS) is introduced that characterizes the minimal semantic requirements for an actor language and allows for components to be defined using multiple languages. Next, a modular operational semantics is derived from the local rules for behavior of individual actors given in an AAS. This semantics accounts for both internal computation and interaction of a component with its environment. Next we abstract away from details of internal computation and define a notion of interaction semantics. This allows us to reason about equivalence of actor system components considered as black boxes. Finally, an algebra of components and their interaction semantics is defined. This provides a basis for reasoning about construction and transformation of components implemented as actor systems.
Keywords: actors, components, distributed systems, semantics, interaction, composition
This paper builds domain theoretic concepts upon an operational foundation. The basic operational theory consists of a single step reduction system from which an operational ordering and equivalence on programs are defined. The theory is then extended to include concepts from domain theory, including the notions of directed set, least upper bound, complete partial order, monotonicity, continuity, finite element, omega-algebraicity, full abstraction, and least fixed point properties. We conclude by using these concepts to construct a (strongly) fully abstract continuous model for our language. In addition we generalize a result of Milner and prove the uniqueness of such models.
Keywords: Operational semantics, domain theory, Milner’s theorem
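The least-fixed-point machinery summarized in the abstract above has a direct operational reading: the iterates of a program's defining functional form a chain of finite approximations whose least upper bound is the program's meaning. A minimal sketch in Python (the factorial functional is an illustrative stand-in chosen here, not an example from the paper; None models "undefined"):

```python
# Approximate the least fixed point of F(f) = \n. 1 if n == 0 else n * f(n-1)
# by iterating F on the everywhere-undefined function ("bottom").
def F(f):
    def g(n):
        if n == 0:
            return 1
        prev = f(n - 1)
        return None if prev is None else n * prev
    return g

bottom = lambda n: None  # undefined everywhere

approx = bottom
for _ in range(6):       # f_k = F^k(bottom) is defined exactly on 0..k-1
    approx = F(approx)

print([approx(n) for n in range(7)])  # [1, 1, 2, 6, 24, 120, None]
```

Each iterate is a "finite element" in the chain: F^6(bottom) agrees with factorial on 0..5 and is still undefined at 6, and the full factorial function is the least upper bound of all the iterates.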
VTLoE (Variable Type Logic of Effects) is a logic for reasoning about imperative functional programs inspired by the variable type systems of Feferman. The underlying programming language, lambda-mk, extends the call-by-value lambda calculus with primitives for arithmetic, pairing, branching, and reference cells (mutable data). In VTLoE one can reason about program equivalence and termination, input/output relations, program contexts, and inductively (and co-inductively) define data structures. In this paper we present a refinement of VTLoE. We then introduce a notion of object specification and establish formal principles for reasoning about object systems within VTLoE. Objects are self-contained entities with local state. The local state of an object can only be changed by action of that object in response to a message. In lambda-mk objects are represented as closures with mutable data bound to local variables. A semantic principle called simulation induction was introduced in our earlier work as a means of establishing equivalence relations between streams, object behaviors, and other potentially infinite structures. These are formulated in VTLoE using the class apparatus. The use of these principles is illustrated by validating a variety of basic transformation rules.
Keywords: functional, imperative, Object, simulation induction, contextual assertion, VTLoE
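The abstract's representation of objects, closures with mutable data bound to local variables whose state changes only in response to a message, can be mimicked in any language with closures. A hedged sketch in Python (the paper's language is lambda-mk, not Python; the counter object and message names are invented for illustration):

```python
def make_counter(init=0):
    # Local mutable state, reachable only through the returned closure.
    state = {"count": init}

    def dispatch(message, *args):
        if message == "get":
            return state["count"]
        if message == "inc":
            state["count"] += args[0] if args else 1
            return state["count"]
        raise ValueError("unknown message: %s" % message)

    return dispatch

c = make_counter()
c("inc")
c("inc", 5)
print(c("get"))  # 6
```

The local state is invisible from outside; two such counters are distinguishable only by their responses to messages, which is exactly the situation where a simulation-style proof principle is needed to establish equivalence.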
We present a two-level model of distributed computation based on actors. This model is the basis for developing a semantic framework that supports dynamic customizability and separation of concerns in designing and reasoning about components of open distributed systems (ODS). In particular, we would like to be able to consider separately issues such as: functional behavior of an application; failure semantics and fault tolerance protocols; and resource management issues such as memory management, load balancing, and scheduling. In this paper we consider remote creation, migration, and reachability snapshot services: their specification at different levels of abstraction, and their composition.
In this paper we introduce a variable typed logic of effects inspired by the variable type systems of Feferman for purely functional languages. VTLoE (Variable Typed Logic of Effects) is introduced in two stages. The first stage is the first-order theory of individuals built on assertions of equality (operational equivalence à la Plotkin), and contextual assertions. The second stage extends the logic to include classes and class membership. The logic we present provides an expressive language for defining and studying properties of programs including program equivalences, in a uniform framework. The logic combines the features and benefits of equational calculi as well as program and specification logics. In addition to the usual first-order formula constructions, we add contextual assertions. Contextual assertions generalize Hoare’s triples in that they can be nested, used as assumptions, and their free variables may be quantified. They are similar in spirit to program modalities in dynamic logic. We use the logic to establish the validity of the Meyer Sieber examples in an operational setting. The theory allows for the construction of inductively defined sets and derivation of the corresponding induction principles. We hope that classes may serve as a starting point for studying semantic notions of type. Naive attempts to represent ML types as classes fail in the sense that ML inference rules are not valid.
Keywords: functional, imperative, ML, Scheme, program equivalence, contextual assertion, induction principles, types vs classes, VTLoE
This talk gives a rather personal perspective in which I will consider reasoning about programs exhibiting a spectrum of paradigms/facilities. I will begin with a brief summary of early work in which methods were developed to reason about programs that could be modelled as first order functions, or as state transformers. Then I will trace a lambda thread, following the view of Landin that programming languages are lambda expressions plus operators providing desired computational facilities (lambda – the ultimate integrating mechanism).
Keywords: logical context, handle, congruence, facts, rules, agents, rewriting
In this paper we describe progress towards a theory of transformational program development. The transformation rules are based on a theory of contextual equivalence for functional languages with imperative features. Such notions of equivalence are fundamental for the process of program specification, derivation, transformation, refinement and other forms of code generation and optimization. This paper is dedicated to Professor Satoru Takasu.
Keywords: functional, imperative, ML, Scheme, program equivalence, program transformation, contextual assertion, VTLoE
Our ultimate goal is to provide a framework and a methodology which will allow users, and not only system developers, to construct complex reasoning systems by composing existing modules, or to add new modules to existing systems, in a “plug and play” manner. These modules and systems might be based on different logics; have different domain models; use different vocabularies and data structures; use different reasoning strategies; and have different interaction capabilities. This paper makes two main contributions towards our goal. First, it proposes a general architecture for a class of reasoning modules and systems called Open Mechanized Reasoning Systems (OMRSs). An OMRS has three components: a reasoning theory component which is the counterpart of the logical notion of formal system, a control component which consists of a set of inference strategies, and an interaction component which provides an OMRS with the capability of interacting with other systems, including OMRSs and human users. Second, it develops the theory underlying the reasoning theory component. This development is motivated by an analysis of state of the art systems. The resulting theory is then validated by using it to describe the integration of the linear arithmetic module into the simplification process of the Boyer-Moore system, NQTHM.
Keywords: mechanized reasoning system, logical service, integration, sequent, rule, reasoning structure
It is becoming increasingly important to be able to build survivable software systems. By survivable we mean having the qualities needed for a system to survive in, and to interact with, an environment that is not under its control. Such systems need to be robust rather than brittle, and reactive rather than autistic. They may need to adapt to changes in their environment. In some cases it is important to be able to add new functionality to an existing system. Survivability depends on the ability of a system to represent its current behavior, to reason about this behavior, to accept and process new information and ideas, and to change its behavior. A software component's behavior can be changed from without – brain surgery – or from within – conversation and contemplation. Although there will probably always be cases where brain surgery is required, it is important to move towards more principled, less intrusive, mechanisms of change. This leads us to the general question of what mathematical concepts and structures are needed to establish a firm mathematical foundation for building survivable software systems. In this abstract we consider the concept of reflection and its generalization to multi-model coupled systems.
Keywords: survivable, reflective, coupled multi-model systems
In this paper we use set theory to describe FOL systems. This restatement of the notions implicit in First Order Logic facilitates their interpretation as finite data structures. This is a crucial step toward the goal of building an artificial reasoner. This paper is dedicated to Professor Takasu for his dedication to science and his recognition that logic is important both for program correctness and theories of reasoning.
Keywords: first-order logic, FOL context, partial structure, FOL system, representation, reasoning
An important problem for automated reasoning is to develop mechanisms for integration of special purpose reasoning systems into general purpose reasoning systems, and into other systems. In this abstract we first point out some of the problems to be solved in order to have reasoning modules that can be effectively integrated, and suggest the notion of a logical service as a way of viewing integrable reasoning modules. We then present some preliminary results from ongoing work aimed at gaining a better understanding of this notion. Finally, we illustrate some of the ideas by describing a series of reasoners specializing in linear arithmetic using the concepts developed.
Keywords: logical service, reasoning modules, integrating, handle, linear arithmetic
This paper presents a two-level model of distributed computation designed to provide a formal basis for specifying and reasoning about dynamic system modification and resource management activities in a distributed system. We take actors as our underlying computation model. An important role of such a model is to represent interactions among resource management activities and between system level activities and application level activities. This allows us to express and reason about properties such as non-interference and composability. We illustrate the use of our model to specify resource management activities such as garbage collection, global distributed snapshots, dynamic insertion of dependability protocols and authentication mechanisms.
Keywords: distributed objects, actors, garbage collection, replication, security, distributed snapshots, acquaintances
In this note we sketch our current ideas regarding an architecture for open reasoning systems. The architecture is based on three main components: (1) data structures and primitive operations; (2) mechanisms for encapsulating and sharing structures; and (3) mechanisms for control, communication, and interoperation.
In this paper we introduce a variable typed logic of effects (i.e. a logic of effects where classes can be defined and quantified over) inspired by the variable type systems of Feferman [feferman-85vt, feferman-90poly] for purely functional languages. A similar extension incorporating non-local control operations was introduced in [talcott-90disco]. The logic we present provides an expressive language for defining specifications and constraints and for studying properties and program equivalences, in a uniform framework. Thus it has an advantage over a number of systems in the literature that aim to capture solitary aspects of computation. The theory also allows for the construction of inductively defined sets and derivation of the corresponding induction principles. Classes can be used to express various effects, including: non-expansiveness of terms; read/write effects; and various forms of interference.
Binding structures enrich traditional abstract data types by providing support for representing binding mechanisms and schematic presentations. They are intended to facilitate the representation and manipulation of symbolic structures such as programs, logical formulae, derivations, specifications and modules. The notion of binding structure generalizes de Bruijn notation, providing for generalized binding operators, named free variables, nameless bound variables, and syntactic contexts (expressions with holes or meta variables standing for substructures that have been removed, or not yet filled in) in a single framework. Binding structures capture the essence of higher-order abstract syntax, while presenting a first-order, structural/intensional view of binding. In this paper we present a theory of binding structures, and give examples of its application to rewriting. We define the set of binding structures as an abstract algebra, and define a general notion of parameterized homomorphism. A variety of operations on binding structures are presented as homomorphisms, and a collection of properties useful for developing applications is given. Three applications are presented: a generalized notion of term rewriting; a theory of unification for binding structures; and a set of structures and primitive rules intended to serve as a basis for design of rewriting components for (quantified) first-order reasoning systems.
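The combination the abstract describes, named free variables with nameless (de Bruijn-indexed) bound variables, can be sketched as a small datatype with capture-avoiding substitution. The representation below is a simplified illustration of generalized de Bruijn notation, not the paper's actual algebra of binding structures:

```python
# Terms: ("free", name) | ("bound", index) | ("lam", body) | ("app", f, a)
def shift(t, d, cutoff=0):
    """Shift bound indices >= cutoff by d (adjusts for crossing a binder)."""
    tag = t[0]
    if tag == "bound":
        return ("bound", t[1] + d) if t[1] >= cutoff else t
    if tag == "free":
        return t  # named free variables are untouched by shifting
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for bound index j in t, capture-avoidingly."""
    tag = t[0]
    if tag == "bound":
        return s if t[1] == j else t
    if tag == "free":
        return t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step: ((lam body) arg) -> body with index 0 replaced by arg."""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. \y. x) applied to free variable a reduces to \y. a
t = ("app", ("lam", ("lam", ("bound", 1))), ("free", "a"))
print(beta(t))  # ('lam', ('free', 'a'))
```

A syntactic context in the paper's sense would add a fourth leaf form (a hole or meta variable) that substitution fills without shifting, which is what distinguishes context filling from capture-avoiding substitution.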
The theory presented here, IOCC (Impredicative theory of Operations, Control, and Classes) is an essentially first-order, two-layered theory. The lower layer is a theory of program equivalence and definedness. Program primitives that can be treated within this theory include functional application and abstraction, conditional, numbers, pairing, and continuation capture and resumption. The upper layer is a theory of class membership with a general comprehension principle for defining classes. Many standard class constructions and data-types can be represented naturally including algebraic abstract data types, list-, record-, and function- type constructions. In addition classes can be constructed satisfying systems of equations such as [S = A x G; G = B -> S] which are useful in treating programming concepts such as streams and object behaviors. Coroutines can also be classified within this theory. Examples are given illustrating the derivation of programs using escape mechanisms, and methods for reasoning about streams and coroutines. Methods are given for constructing models of IOCC from semantic models of the underlying programming language.
Keywords: IOCC, functional, control abstraction, program equivalence, specification, transformation, streams, co-routines, induction principles
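The class equations [S = A x G; G = B -> S] mentioned in the abstract above have a concrete reading: a stream is a pair of a value in A and a behavior that, given an input in B, produces the rest of the stream. A small Python sketch with B taken to be trivial (this encoding is an illustration of the equation, not the paper's class construction):

```python
# A stream over A is a pair (head, next) where next maps an input in B
# to the rest of the stream; here B is the unit type, so S = A x (() -> S).
def nats_from(n):
    return (n, lambda: nats_from(n + 1))

def take(stream, k):
    """Observe the first k elements of a stream."""
    out = []
    for _ in range(k):
        head, nxt = stream
        out.append(head)
        stream = nxt()
    return out

print(take(nats_from(0), 5))  # [0, 1, 2, 3, 4]
```

Taking B to be a type of messages instead of unit turns the same equation into object behaviors: each input selects how the state evolves.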
In this paper we present preliminary results of a rigorous development of the actor model of computation. We present an actor language which is an extension of a simple functional language, and provide a precise operational semantics for this extension. Our actor systems are open distributed systems, meaning we explicitly take into account the interface with external components in the specification of an actor system. We define and study various notions of equivalence on actor expressions and systems. We show that the usual tripartite family of testing equivalence relations collapses to two in the presence of fairness. We define a notion of operational bisimulation as a tool for establishing equivalence under fairness assumptions, and illustrate its use.
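The ingredients of the actor model referred to above, encapsulated actors that react to messages by updating their behavior and sending further messages, can be sketched as a toy interpreter. This is a hedged illustration only: the paper's actor language is a functional-language extension with a precise operational semantics, whereas the cell and sink actors below are invented examples:

```python
from collections import deque

class ActorSystem:
    """Toy actor configuration: named actors with behaviors, plus a queue
    of undelivered messages. FIFO delivery gives a simple fair schedule."""
    def __init__(self):
        self.behaviors = {}
        self.queue = deque()            # pending (actor name, message) pairs

    def spawn(self, name, behavior):
        self.behaviors[name] = behavior

    def send(self, name, msg):
        self.queue.append((name, msg))

    def run(self):
        while self.queue:
            name, msg = self.queue.popleft()
            # A behavior consumes one message and returns the behavior
            # that will handle the next message (possibly itself).
            self.behaviors[name] = self.behaviors[name](self, msg)

def cell(value):
    """A mutable-cell actor: accepts ('set', v) and ('get', customer)."""
    def behavior(system, msg):
        if msg[0] == "set":
            return cell(msg[1])         # become a cell holding the new value
        if msg[0] == "get":
            system.send(msg[1], ("value", value))
        return behavior
    return behavior

results = []
def sink(system, msg):
    results.append(msg[1])
    return sink

acts = ActorSystem()
acts.spawn("cell", cell(0))
acts.spawn("out", sink)
acts.send("cell", ("set", 42))
acts.send("cell", ("get", "out"))
acts.run()
print(results)  # [42]
```

An open system in the paper's sense would additionally expose some actor names at the interface, so that the environment can inject messages into the queue and observe messages sent to external names.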
In a well-known paper, Meyer and Sieber give a series of examples of programs that are operationally equivalent (according to the intended semantics of block-structured Algol-like programs) but which are not given equivalent denotations in traditional denotational semantics. Since numerous proof systems for Algol are sound for the denotational models in question, these equivalences, if expressible, must be independent of these systems. In this paper we accomplish two goals. Firstly, we present the first-order part of a new logic for reasoning about programs. Secondly, we use this logic to prove the equivalence of the Meyer-Sieber examples. Our style of operational semantics naturally provides for the symbolic evaluation of contexts. This allows us to define the semantics of contextual assertions. A contextual assertion expresses that a property holds at the point in the program text where the hole appears. Contextual assertions generalize Hoare’s triples in that they can be nested, used as assumptions, and their free variables may be quantified. Our atomic formulas express the (operational or observational) equivalence of programs. Neither Hoare’s logic nor Dynamic logic incorporates this ability, or makes use of such equivalences (for example by replacing one piece of program text by an equivalent one). In this paper we only describe the first-order part of the logic. The full logic, a variable type theory of individuals and classes, is based on the systems of Feferman [feferman-explicit-75, feferman-poly-90] and Talcott [talcott-90disco].
In this paper we study the constrained equivalence of programs with effects. In particular, we present a formal system for deriving such equivalences. The formal system we present defines a single-conclusion consequence relation between finite sets of constraints and assertions. Although the formal system is computationally adequate, even for expressions containing recursively defined functions, it is inadequate for proving properties of recursively defined functions. We show how to enrich the formal system by addition of induction principles. To illustrate the use of the formal system, we give three non-trivial examples of constrained equivalence assertions of well known list-processing programs. We also establish several metatheoretic properties of constrained equivalence and the formal system, including soundness, completeness, and a comparison of the equivalence relations on various fragments.
We examine the notion of logical system [meseguer-89lc] as a possible framework for specifying the semantic behavior of components of automated reasoning systems and for describing sound interconnections between these components. To make the ideas concrete, we analyze aspects of a variety of existing systems — logics, interfaces, and mechanisms for combination. We describe a notion of reasoning structure to support interaction between reasoning modules.
In this paper we describe progress towards a theory of program development by systematic refinement beginning with a clean, functional program thought of as a specification. Transformations include reuse of storage, and re-representation of abstract data. The transformation rules are based on a theory of constrained equivalence for functional languages with imperative features (i.e. Lisp/Scheme/ML). Constrained equivalence is a relation between sets of constraints (on computation contexts) and pairs of expressions. The interpretation is that in all contexts satisfying the constraints, evaluation of the expressions is either undefined or produces the same results and has the same effect on memory (modulo garbage). Constrained equivalence naturally allows reasoning by cases and permits use of a variety of induction principles. An inference system for constrained equivalence that is complete for expressions not involving recursively defined functions has been developed by the authors. One difficulty with constrained equivalence is that it is not a congruence. One cannot, in general, place expressions equivalent under some non-empty set of constraints into an arbitrary program context and preserve equivalence. For this and other reasons, we have developed a formal system for propagating constraints into program contexts. In this system, it is possible to place expressions equivalent under some non-empty set of constraints into a program context and preserve equivalence provided that the constraints propagate into that context. Constrained equivalence and constraint propagation provide a basis for systematic development of program transformation rules. In this paper we present three main rules: subgoal induction, recursion induction, and the peephole rule. Use of these rules is illustrated by the transformation of a simple, but non-trivial, rewriting program, known as the Boyer Benchmark.
In this paper we report progress in development of methods for reasoning about the equivalence of objects with memory and the use of these methods to describe sound operations on such objects, in terms of formal program transformations. We also formalize three different aspects of objects: their specification, their behavior, and their canonical representative. Formal connections among these aspects provide methods for optimization and reasoning about systems of objects. To illustrate these ideas we give a formal derivation of an optimized specialized window editor from generic specifications of its components. A new result in this paper enables one to make use of symbolic evaluation (with respect to a set of constraints) to establish the equivalence of objects. This form of evaluation is not only mechanizable, it also generalizes the conditions under which partial evaluation usually takes place.
Binding structures enrich traditional abstract data types by providing support for representing binding mechanisms and structures with holes. They are intended to facilitate the representation and manipulation of symbolic structures such as programs, logical formulae, rules, derivations, specifications, and modules.
Traditionally the view has been that direct expression of control and store mechanisms and clear mathematical semantics are incompatible requirements. This paper shows that adding objects with memory to the call-by-value lambda calculus results in a language with a rich equational theory, satisfying many of the usual laws. Combined with other recent work this provides evidence that expressive, mathematically clean programming languages are indeed possible.
In this paper we give several examples that illustrate our techniques for reasoning about programs with effects. One technique is to define correspondences between effect-free programs and programs with effects and transfer results about effect-free programs to corresponding programs with effects. We give two examples of this technique. The first involves the correspondence between loops and tail-recursive programs, replacing assignment to loop variables by 'let' binding. The second example involves a correspondence between mutable objects and streams, replacing updating by explicitly passing the store. The correspondences are left informal here, but in practice translators can be defined to ensure correct correspondences (and avoid the work of hand translation). A second technique is to lift principles for reasoning about effect-free programs to those with effects, typically by finding appropriate abstractions and restrictions. We present three examples here: the use of structure copying to ensure non-interference among operations; the use of structural induction principles for establishing program equivalence; and the use of abstract objects to encapsulate effects and potential interference in a controlled way.
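The two correspondences mentioned above, loops versus tail recursion with assignment replaced by rebinding, and mutable objects versus explicit store passing, can be sketched side by side. A hedged Python illustration (the particular programs are invented examples, not the paper's):

```python
# Correspondence 1: a loop with assignment to loop variables ...
def sum_loop(n):
    total, i = 0, 0
    while i <= n:
        total, i = total + i, i + 1
    return total

# ... and its tail-recursive counterpart, where assignment becomes
# rebinding of the function's parameters.
def sum_rec(total, i, n):
    if i > n:
        return total
    return sum_rec(total + i, i + 1, n)

# Correspondence 2: a mutable counter object ...
class Counter:
    def __init__(self):
        self.n = 0

    def inc(self):
        self.n += 1
        return self.n

# ... and its store-passing counterpart: the store (here just the count)
# is threaded explicitly instead of being updated in place.
def inc_pure(store):
    return store + 1, store + 1     # (result, new store)

c = Counter()
store = 0
r1 = c.inc()
r2, store = inc_pure(store)
print(sum_loop(10) == sum_rec(0, 0, 10), r1 == r2)  # True True
```

A translator in the sense of the abstract would map one side of each correspondence to the other mechanically, so that a property proved of the effect-free form transfers to the imperative form.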
In this paper we report progress in development of methods for reasoning about the equivalence of objects with memory and the use of these methods to describe sound operations on such objects, in terms of formal program transformations. We also formalize three different aspects of objects: their specification, their behavior, and their canonical representative. Formal connections among these aspects provide methods for optimization and reasoning about systems of objects. To illustrate these ideas we give a formal derivation of an optimized specialized window editor from generic specifications of its components. A new result in this paper enables one to make use of symbolic evaluation (with respect to a set of constraints) to establish the equivalence of objects. This form of evaluation is not only mechanizable, it also generalizes the conditions under which partial evaluation usually takes place.
In this paper we present a formal system for deriving assertions about programs with memory. The assertions we consider are of the following three forms: (i) an expression diverges (i.e. fails to reduce to a value); (ii) two expressions reduce to the same value and have exactly the same effect on memory; and (iii) two expressions reduce to the same value and have the same effect on memory up to production of garbage (are strongly isomorphic). The expressions considered are expressions of a first-order Scheme- or Lisp-like language over a given set of atoms with the data operations 'atom', 'eq', 'car', 'cdr', 'cons', 'setcar', and 'setcdr', the control primitives 'let' and 'if', and recursive definition of function symbols. The formal system we present defines a single-conclusion consequence relation between finite sets of constraints and assertions. A constraint is an atomic or negated atomic formula in the first-order language consisting of equality, the unary function symbols 'car' and 'cdr', the unary relation 'atom', and constants from the set of atoms. Constraints have the natural first-order interpretation. The semantics of the formal system is given by a semantic consequence relation which is defined in terms of a class of memory models for assertions and constraints. The main results of this paper are: (Soundness) The deduction system is sound — if the formal system proves that an assertion is a consequence of some set of constraints then the assertion is a semantic consequence of that set of constraints. (Completeness) The deduction system is complete for assertions not containing recursively defined function symbols — if such an assertion is a semantic consequence of a set of constraints then it is derivable in the formal system.
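Assertion form (iii), strong isomorphism, can be illustrated with a toy memory model: two expressions may allocate different numbers of cells yet agree on their value and on all memory reachable from that value. A Python sketch (the heap and reachability machinery are simplified assumptions for illustration, not the paper's memory models):

```python
class Cons:
    """Mutable cons cell: a two-slot record with car and cdr fields."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

heap = []                       # record every allocation

def cons(a, d):
    cell = Cons(a, d)
    heap.append(cell)
    return cell

def reachable(roots):
    """Cells reachable from the roots; everything else is garbage."""
    seen, todo = set(), list(roots)
    while todo:
        v = todo.pop()
        if isinstance(v, Cons) and id(v) not in seen:
            seen.add(id(v))
            todo += [v.car, v.cdr]
    return seen

# (car (cons x y)) vs x: same value and same reachable memory, but the
# first allocates a cell that immediately becomes garbage. The two
# expressions are strongly isomorphic, not memory-identical.
x = 1
v1 = cons(x, 2).car      # allocates, then discards, a cell
v2 = x
print(v1 == v2, reachable([v1]) == reachable([v2]))  # True True
```

Form (ii) would additionally require the two evaluations to leave the whole heap in the same state, which fails here because of the extra allocation recorded in `heap`.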
Rum is an intensional semantic theory of function and control abstractions as computation primitives. It is a mathematical foundation for understanding and improving current practice in symbolic (Lisp-style) computation. The theory provides, in a single context, a variety of semantics ranging from structures and rules for carrying out computations to an interpretation as functions on the computation domain. Properties of powerful programming tools such as functions as values, streams, aspects of object oriented programming, escape mechanisms, and coroutines can be represented naturally. In addition a wide variety of operations on programs can be treated including program transformations which introduce function and control abstractions, compiling morphisms that transform control abstractions into function abstractions, and operations that transform intensional properties of programs into extensional properties. The theory goes beyond a theory of functions computed by programs, providing tools for treating both intensional and extensional properties of programs. This provides operations on programs with meanings to transform as well as meanings to preserve. Applications of this theory include expressing and proving properties of particular programs and of classes of programs and studying mathematical properties of computation mechanisms. Additional applications are the design and implementation of interactive computation systems and the mechanization of reasoning about computation. These notes are based on lectures given at the Western Institute of Computer Science summer program, 31 July – 1 August 1986. Here we focus on programming and proving with function and control abstractions and present a variety of example programs, properties, and techniques for proving these properties.
The point of this paper is to describe a challenging application of algebraic methods to programming language theory. Much work on the theory of Scheme-like languages (applicative, but not necessarily functional) has an essentially algebraic flavor [Talcott 1985], [Felleisen, Wand, et al. 1988]. Thus it seems appropriate to make the algebraic aspect explicit. This would allow us to take advantage of the work in algebraic methods to extend and generalize existing work and to facilitate application of the results. Full support of this application of algebraic methods will require bringing diverse results together in a single enriched framework.
In this paper we take a look at partial evaluation from the point of view of symbolic computation systems, point out some challenging new applications for partial evaluation in such systems, and outline some criteria for a theory of partial evaluation. The key features of symbolic computation systems are summarized along with work on semantics of such systems which will hopefully aid in meeting the challenges. The new applications are illustrated by an example using the concept of component configuration. This is a new idea for software development, based on the use of higher-order and reflective computation mechanisms, that generalizes such ideas as modules, classes, and programming in the large.
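The core idea of partial evaluation can be sketched in a few lines, as an illustration independent of the paper's component-configuration example: given a two-argument program and one argument known in advance, emit a residual program specialized to that value. The `specialize_power` helper below is a hypothetical name; the loop-unfolding strategy is the textbook one.

```python
# Hypothetical sketch of partial evaluation: specializing a power
# function when the exponent is known at specialization time.
def power(x, n):
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Return a residual program: power with the exponent n fixed.
    The loop over n is unfolded at specialization time, leaving a
    straight-line multiplication in the generated source."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)          # compile the residual program
    return env[f"power_{n}"]

cube = specialize_power(3)  # residual body: return x * x * x
assert cube(2) == power(2, 3) == 8
```

The residual program computes the same function as the general one (the extensional meaning is preserved) while its operation count and control structure differ (the intensional meaning is transformed), which is precisely the distinction the surveyed theory is meant to track.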
Keywords: components, configuration, partial evaluation, function abstraction, control abstraction, process abstraction, objects with memory
RUM is an intensional semantic theory of function and control abstractions as computation primitives. It provides a variety of semantics in a single context and a mathematical foundation for understanding and improving current practice in symbolic (Lisp-style) computation. Properties of powerful programming tools such as functions as values, streams, object oriented programming, escape mechanisms, and co-routines can be represented naturally. In addition a wide variety of operations on programs can be treated, including program transformations which introduce function and control abstractions, compiling morphisms that transform control abstractions into function abstractions, and operations that transform intensional properties of programs into extensional properties. The theory goes beyond a theory of functions computed by programs, providing tools for treating both intensional and extensional properties of programs. This provides operations on programs with meanings to transform as well as meanings to preserve. Applications of RUM include expressing and proving properties of particular programs and of classes of programs; studying mathematical properties of computation mechanisms; the design and implementation of interactive computation systems; and mechanization of reasoning about computation. In this paper we briefly survey related ideas from work in symbolic computation and semantics and give a guided tour through RUM illustrating the main ideas.
Rum is a theory of applicative, side-effect free computations over an algebraic data structure. It goes beyond a theory of functions computed by programs, treating both intensional and extensional aspects of computation. Powerful programming tools such as functions as values, streams, object-oriented programming, escape mechanisms, and co-routines can be represented. Intensional properties include the number of multiplications executed, the number of context switches, and the maximum stack depth required in a computation. Extensional properties include notions of equality for streams and co-routines and characterization of functionals implementing strategies for searching tree-structured spaces. Precise definitions of informal concepts such as stream and co-routine are given and their mathematical theory is developed. Operations on programs treated include program transformations which introduce functional and control abstractions; a compiling morphism that provides a representation of control abstractions as functional abstractions; and operations that transform intensional properties to extensional properties. The goal is not only to account for programming practice in Lisp, but also to improve practice by providing mathematical tools for developing programs and building programming systems. Rum views computation as a process of generating computation structures: trees for context-independent computations and sequences for context-dependent computations. The recursion theorem gives a fixed-point function that computes computationally minimal fixed points. The context insensitivity theorem says that context-dependent computations are uniformly parameterized by the calling context and that computations in which context dependence is localized can be treated like context-independent computations. Rum machine structure and morphism are introduced to define and prove properties of compilers.
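The intensional/extensional distinction drawn above can be illustrated with a small sketch, assuming Python generators as a stand-in for Rum streams (the `squares` stream and its `mults` counter are hypothetical names, not from the paper): the sequence of values produced is an extensional property, while the number of multiplications executed to produce them is an intensional one.

```python
# Hypothetical sketch: a stream of squares, instrumented to count
# the multiplications it performs.
from itertools import islice

def squares():
    """Infinite stream of n*n for n = 0, 1, 2, ...
    Each multiplication is counted on the function attribute 'mults'."""
    n = 0
    while True:
        squares.mults += 1   # intensional: one multiplication performed
        yield n * n          # extensional: the value produced
        n += 1

squares.mults = 0
first5 = list(islice(squares(), 5))
assert first5 == [0, 1, 4, 9, 16]  # extensional property of the stream
assert squares.mults == 5          # intensional property of the computation
```

Two stream programs producing the same values may differ in such counts; a theory that treats both kinds of property can state when a transformation preserves the extensional meaning while improving the intensional one.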
The hierarchy of comparison relations on programs ranges from intensional equality to maximum approximation and equivalence relations that are extensional. The fixed-point function computes the least fixed point with respect to the maximum approximation. Comparison relations, combined with the interpretation of programs using computation structures, provide operations on programs both with meanings to preserve and meanings to transform.